Dataset columns (with value-length ranges):
content: string (0–557k chars)
url: string (16–1.78k chars)
timestamp: timestamp[ms]
dump: string (9–15 chars)
segment: string (13–17 chars)
image_urls: string (2–55.5k chars)
netloc: string (7–77 chars)
Source Data Types for MySQL The following table shows the MySQL database source data types that are supported when using AWS DMS and the default mapping from AWS DMS data types. Note The UTF-8 4-byte character set (utf8mb4) is not supported and could cause unexpected behavior in a source database. Plan to convert any data using the UTF-8 4-byte character set before migrating. For additional information about AWS DMS data types, see Data Types for AWS Database Migration Service. Note If the DATETIME and TIMESTAMP data types are specified with a "zero" value (that is, 0000-00-00), you need to make sure that the target database in the replication task supports "zero" values for the DATETIME and TIMESTAMP data types. Otherwise, they are recorded as null on the target. The following MySQL data types are supported in full load only:
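Before migrating, it can help to know whether any rows actually contain the "zero" value the note above warns about. The following is a minimal sketch, not part of the AWS documentation, that counts such rows with Python and the pymysql driver; the host, credentials, and the orders.created_at table/column are hypothetical placeholders for your own source schema.
# Pre-migration check for "zero" DATETIME/TIMESTAMP values, assuming a
# reachable MySQL source and the pymysql driver. Table and column names
# (orders.created_at) are placeholders -- substitute your own schema.
import pymysql

conn = pymysql.connect(host="source-host", user="dms_user",
                       password="secret", database="appdb")
try:
    with conn.cursor() as cur:
        # Count rows whose timestamp column holds the special zero value.
        cur.execute(
            "SELECT COUNT(*) FROM orders WHERE created_at = '0000-00-00 00:00:00'"
        )
        zero_rows = cur.fetchone()[0]
        print(f"orders.created_at rows with zero value: {zero_rows}")
finally:
    conn.close()
If the count is non-zero, either clean up those rows or confirm the target engine accepts zero values before running the replication task.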
http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Reference.Source.MySQL.DataTypes.html
2017-10-17T02:06:12
CC-MAIN-2017-43
1508187820556.7
[]
docs.aws.amazon.com
Navigate to the Poll Manager. To add a new Poll, click the New icon in the toolbar. To edit an existing Poll, click the Poll's Name or check the Poll's checkbox and press the Edit icon in the toolbar. At the top right you will see the toolbar: The functions are: None known at this time. For the poll to show on the page, you need to add a poll module using the module manager:
https://docs.joomla.org/index.php?title=Help15:Screen.polls.edit.15&oldid=8332
2015-03-27T00:20:22
CC-MAIN-2015-14
1427131293283.10
[]
docs.joomla.org
Looking for a 1.0 doc? Click this link! Record your thoughts on the go and quickly convert them to action items when you're ready. A must for GTD fanatics! How to Convert Notes to Tasks 3. Select a Folder/List to insert the new task into or load in a template! How the Task is Formatted - The note title will be used as the new task's title - The note's contents will be inserted into the task's description - You'll choose what other details to include with the task before saving Read more about the Chrome Extension here.
https://docs.clickup.com/en/articles/2635965-convert-notes-to-tasks
2019-12-06T02:20:02
CC-MAIN-2019-51
1575540482954.0
[]
docs.clickup.com
OptimalTrees Documentation OptimalTrees contains learners for training optimal decision trees for classification, regression, survival, and prescription problems. This documentation includes: - quick start guides for each of the problem types that contain a demo of OptimalTrees in action: - details of the various optimal tree learners, including descriptions of the available parameters - recommended strategies for parameter tuning and selection - tips and tricks for getting the best results from OptimalTrees - the OptimalTrees API reference Citing OptimalTrees If you use Optimal Trees in your work, we kindly ask that you cite the Interpretable AI software modules. We also ask that you reference the original work that first introduced the relevant algorithm: Optimal Classification Trees: @article{bertsimas2017optimal, title={Optimal classification trees}, author={Bertsimas, Dimitris and Dunn, Jack}, journal={Machine Learning}, volume={106}, number={7}, pages={1039--1082}, year={2017}, publisher={Springer} } Optimal Regression Trees: @book{bertsimas2019machine, title={Machine learning under a modern optimization lens}, author={Bertsimas, Dimitris and Dunn, Jack}, year={2019}, publisher={Dynamic Ideas LLC} } Optimal Prescriptive Trees: @article{bertsimas2019optimal, title={Optimal prescriptive trees}, author={Bertsimas, Dimitris and Dunn, Jack and Mundru, Nishanth}, journal={INFORMS Journal on Optimization}, pages={ijoo--2018}, year={2019}, publisher={INFORMS} }
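The quick start guides referenced above walk through fitting a learner and tuning its parameters. As a rough illustration of what that looks like from Python, here is a minimal sketch using the Interpretable AI Python interface (the interpretableai package); the module, class, and method names are assumptions based on the general shape of that interface and may differ from the current release, so consult the OptimalTrees API reference above for the authoritative calls.
# Hypothetical sketch of tuning an Optimal Classification Tree via grid search.
# Assumes the `interpretableai` package is installed and licensed; verify the
# exact class and method names against the API reference before relying on this.
from interpretableai import iai
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True, as_frame=True)
train_X, test_X, train_y, test_y = train_test_split(X, y, random_state=1)

# Grid search over tree depth, the usual starting point for parameter tuning.
grid = iai.GridSearch(
    iai.OptimalTreeClassifier(random_seed=1),
    max_depth=range(1, 6),
)
grid.fit(train_X, train_y)
print(grid.get_learner())  # the fitted tree at the best depth
# 'misclassification' as the scoring criterion is an assumption as well.
print(grid.score(test_X, test_y, criterion='misclassification'))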
https://docs.interpretable.ai/OptimalTrees/stable/
2019-12-06T01:02:21
CC-MAIN-2019-51
1575540482954.0
[]
docs.interpretable.ai
It's been a long-time goal of ours to put you in control of what's important to you. Why should you have to receive email and push notifications about things that don't matter to you? We're happy to introduce to you Notifications v2... Granular Notifications We're the only project management platform to give you this much control! You can now choose what triggers notifications and on which platform(s): mobile, email, web, browser. You're in complete control. Smart Notifications - ClickUp knows when you're active in the app and we can hold your notifications while you're using ClickUp for a custom period of time. Then, if you clear any notifications, we won't send them to you externally. Browser Notifications - Now you can choose to receive Chrome or Firefox browser notifications. Reminders - Before overdue reminder: Get reminded before your task is due! You can choose to be reminded from 5 minutes up to 3 days before your task is due! - After overdue reminder: Your safety net! You can be notified from 5 minutes up to 3 days after your unfinished task goes overdue. Daily Digest Email - Receive a daily email with tasks due today, tomorrow, and all overdue tasks in rich detail. Spaces Navigation: Pin to the top - Click the pin icon to pin your Spaces bar to the top of ClickUp so you can see all of your Spaces - all the time. Improvements Attachment Thumbnails are prettier 💁 - Detailed thumbnails based on filetype - Cloud storage file thumbnails - Cloud storage folder thumbnails Time Estimates - Total time estimated for a List now decreases as tasks within the List are closed
https://docs.clickup.com/en/articles/2026416-release-1-39
2019-12-06T02:19:29
CC-MAIN-2019-51
1575540482954.0
[array(['https://downloads.intercomcdn.com/i/o/63811464/7f0331a7cbc8e1734793c508/notifications-granular.gif', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/63805258/d24eb7b3e8c8619594dfe208/image.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/63811826/65118f6f3314965258c653a2/image.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/63833687/c2d842a02f985729d9224f5b/486e80cc-fc24-49c8-bf2b-26c4f7a6160f_large.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/63935242/f835df019e1c704673a71455/image.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/63935162/70a0ee889ab50b4d36222523/image.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/63935209/913efbca9d08ee5368838d71/image.png', None], dtype=object) ]
docs.clickup.com
Installation You can download the most recent version from our website. Afterwards, extract the archive to a place of your choice. There is no installation necessary. Open on Windows Simply open the executable review.exe application. File Association on Windows Add review to your start applications and get a local test report. Right click on it and go to Open with and search for the review.exe. Now all future test reports should be easy to open via review just by clicking on them. Open on Mac and Linux Run the executable review file and you are good to go. Login / Connect to rehub In order to use review, you are required to login with your retest account. Upon opening review you are prompted to login and your browser should open, redirecting you to rehub. Updating review To use recheck reports with review, they are required to have the same minor version (e.g. 1.6.x), otherwise review might not be able to work correctly anymore. To update review you have to download the new version and replace the folder contents.
https://docs.retest.de/review/installation/
2019-12-06T00:59:14
CC-MAIN-2019-51
1575540482954.0
[]
docs.retest.de
Breaking: #82629 - Removed tce_db options “prErr” and “uPT” See Issue #82629 Description The two options prErr (“print errors”) and uPT (“update page tree”), usually set via GET/POST when calling TYPO3’s Backend endpoint tce_db (DataHandler actions within the TYPO3 Backend), have been removed, and are now automatically evaluated when the endpoint is called. The option prErr added possible errors to the Message Queue. The option uPT triggered an update of the pagetree after a page-related action was made. Both options are dropped as the functionality is enabled by default. The corresponding methods have been adjusted: - TYPO3\CMS\Core\DataHandling\DataHandler->printLogErrorMessages() does not need a method argument anymore. - The public property TYPO3\CMS\Backend\Controller\SimpleDataHandlerController->prErr is removed - The public property TYPO3\CMS\Backend\Controller\SimpleDataHandlerController->uPT is removed Affected Installations Installations with third-party extensions accessing the entrypoint tce_db or calling DataHandler->printLogErrorMessages() via PHP.
https://docs.typo3.org/c/typo3/cms-core/master/en-us/Changelog/9.0/Breaking-82629-TceDbOptionsPrErrAndUPTRemoved.html
2019-12-06T01:16:13
CC-MAIN-2019-51
1575540482954.0
[]
docs.typo3.org
Customer Aging Report: Header Area Use the Header area to configure date ranges for the Customer Aging report. Header Area Fields After checking or modifying header details, you can generate the customer aging report based on the specified information and default grouping and summarizing methods. More information Customer Aging Report: Default Summarizing
https://docs.codejig.com/en/entity2305843015656313960/view/4611686018427397679
2021-07-24T07:57:32
CC-MAIN-2021-31
1627046150134.86
[]
docs.codejig.com
I have been using EdgeRouter X for about five months now and it is amazing! It has awesome features at an affordable price. Win win situation in my book. Truth be told, it is not for the average consumer. I am not saying it is difficult to configure, but to benefit fully from it you need to know Linux and networking. For example, you have Port Forwarding and NAT. You can easily configure port forwarding via "Port Forwarding", but you get the most out of it if you configure it via "NAT", if you know what you are doing there. I hope I will have time to prepare another page just for that. It is an interesting journey. As with all products, it is impossible to have it all, so I must mention you cannot use jumbo frames with this one (aka MTU of 9000). If you attempt to set it up as 9000, the following error will be shown: The max allowed MTU on this platform is 2018. For me it is not a problem, but for some of you it could be, so take that into consideration. In short, what I like about it is: 1. Linux based (EdgeOS) 2. Easy to use web interface with many options 3. SSH connectivity to the operating system, and not just a set of commands but full access to the operating system. You will have to enable this via the web management interface. fmbp16:Desktop florian$ ssh zero Welcome to EdgeOS By logging in, accessing, or using the Ubiquiti product, you acknowledge that you have read and understood the Ubiquiti License Agreement (available in the Web UI at, by default,) and agree to be bound by its terms. Linux zero 3.10.107-UBNT #1 SMP Fri Feb 21 10:42:32 UTC 2020 mips Welcome to EdgeOS Last login: Wed Feb 3 21:57:38 2021 from 192.168.10.10 zero@zero:~$ sudo su - root@zero:~# uptime 21:58:30 up 169 days, 47 min, 1 user, load average: 1.16, 1.21, 1.22 root@zero:~# id uid=0(root) gid=0(root) groups=0(root) Having console access can bring you a lot of nice information. For example, the CPU is a MIPS 1004Kc. For more information about what it can offer, please visit the presentation page: Before moving forward, I have also prepared a useful tutorial on YouTube covering 7 things I believe you must know how to do: 1. 0:28 - How to connect to it for the first time and configuration 2. 1:17 - How to change the default password and create/delete users 3. 2:19 - How to set up passwordless connection via SSH 4. 3:16 - Hardware offloading () 5. 4:26 - Firmware upgrade 6. 5:30 - Backup configuration 7. 5:51 - Restore configuration And if you like written tutorials, in the following I would like to show you a few command line tricks that you may need. How to enable hardware offloading: I have written an article about hardware offloading here. On this device it is the same principle: offloading is used to execute functions of the router using the hardware directly, instead of a chain of software functions. Better to check the support website before deciding if you want this enabled or not. I have enabled it. Commands to be executed: configure set system offload hwnat enable set system offload ipsec enable commit ; save You will have to reboot in order for the changes to take effect. Output of an execution: zero@zero:~$ configure [edit] zero@zero# set system offload hwnat enable [edit] zero@zero# set system offload ipsec enable [edit] zero@zero# commit ; save [ system offload ipsec enable ] This change will take effect when the system is rebooted. WARNING : IPsec offload on ER-X platform is causing problems to L2TP remote-access VPN. and IPV6 site-to-site IPSec VPN You should *not* enable IPsec offload if you are using any of above. 
Other VPN modes are not affected by this issue: * IPv4 site-to-site IPsec VPN is working correctly with IPsec offload. * PPTP VPN is working correctly with IPsec offload. Only ER-X/ER-X-SFP/EP-R6 models are affected by this issue. This issue is to be fixed in future release. Saving configuration to '/config/config.boot'... Done [edit] zero@zero# Update the boot loader after firmware upgrade: In short, the bootloader controls some functions like LED boot behavior, configuration/driver loading and so on, and on most EdgeRouter models it is not updated automatically; it must be done manually. You can get more information here: Command to execute: add system boot-image Example: fmbpro:~ florian$ ssh [email protected] Welcome to EdgeOS By logging in, accessing, or using the Ubiquiti product, you acknowledge that you have read and understood the Ubiquiti License Agreement (available in the Web UI at, by default,) and agree to be bound by its terms. [email protected]'s password: Boot image can be upgraded to version [ e50_002_4c817 ]. Run "add system boot-image" to upgrade boot image. zero@zero:~$ zero@zero:~$ zero@zero:~$ add system boot-image Uboot version [e50_001_1e49c] is about to be replaced Warning: Don't turn off the power or reboot during the upgrade! Are you sure you want to replace old version? (Yes/No) [Yes]: Yes Preparing to upgrade...Done Copying upgrade boot image...Done Checking boot version: Current is e50_001_1e49c; new is e50_002_4c817 ...Done Checking upgrade image...Done Writing image...Done Upgrade boot completed zero@zero:~$ How to add an SSH key on your EdgeRouter (min 2:19 in the video above): Commands to be executed: NOTE: replace KEY with the actual key you want to add :) configure set system login user zero authentication public-keys mbp type ssh-rsa set system login user zero authentication public-keys mbp key KEY commit save exit Execution example: NOTE: my key is scrambled :P zero@zero:~$ cd .ssh zero@zero:~/.ssh$ ls -la total 4 drwxr-x--- 2 zero users 232 Aug 18 19:27 . drwxr-xr-x 3 zero users 504 Aug 18 20:29 .. -rw-r--r-- 1 root root 90 Aug 18 19:27 authorized_keys zero@zero:~/.ssh$ cat authorized_keys # Automatically generated by Vyatta configuration # Do not edit, all changes will be lost zero@zero:~/.ssh$ configure [edit] zero@zero# set system login user zero authentication public-keys mbp type ssh-rsa [edit] zero@zero# set system login user zero authentication public-keys mbp key ABunchOfCharactersThatIsTheSSHKey [edit] zero@zero# commit [edit] zero@zero# save Saving configuration to '/config/config.boot'... Done [edit] zero@zero# exit exit zero@zero:~/.ssh$ cat authorized_keys # Automatically generated by Vyatta configuration # Do not edit, all changes will be lost ssh-rsa ABunchOfCharactersThatIsTheSSHKey mbp zero@zero:~/.ssh$
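If you prefer scripting these checks instead of typing them interactively, the same SSH access can be driven from Python. This is just a quick sketch, not something from the EdgeOS docs: it assumes key-based login is already set up (as above), and the host address, username, and key path are placeholders for your own setup.
# Run a quick command on the EdgeRouter over SSH using paramiko.
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("192.168.1.1", username="zero", key_filename="/home/me/.ssh/id_rsa")

# 'uptime' is the same check run interactively earlier in the post.
stdin, stdout, stderr = client.exec_command("uptime")
print(stdout.read().decode().strip())

client.close()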
https://docs.gz.ro/edgerouter-x-thoughtsa-and-tips.html
2021-07-24T08:46:49
CC-MAIN-2021-31
1627046150134.86
[array(['https://docs.gz.ro/sites/default/files/styles/thumbnail/public/pictures/picture-1-1324065756.jpg?itok=rS4jtWxd', "root's picture root's picture"], dtype=object) ]
docs.gz.ro
How to sell music on Sellfy In this article: Can I sell my music on Sellfy? Sell audio files on SoundCloud Sell audio files on other platforms Can I sell my music on Sellfy? Yes, absolutely. Sellfy supports any type of downloadable format, including audio files. So, if you're a music creator and want to start selling your tracks, here's a list of ways to do so using Sellfy: - Build a Sellfy-based storefront. Click here for information. - Embed Sellfy on your existing website. To find out more, click here. - Embed Sellfy on your SoundCloud account. Head to the next section for instructions. - Any other platforms that allow product page links Sell audio files on SoundCloud If you already have a SoundCloud profile with followers and fans, selling your music directly on SoundCloud makes sense. SoundCloud allows you to insert the product page URL from your Sellfy account into your SoundCloud account. To be able to sell your tracks on SoundCloud using Sellfy, you first need to create a Sellfy account. You can sign up for a 14-day free trial to test out the platform first. However, you need to upgrade to one of our subscription plans to activate the checkout option and enable your fans to purchase your tracks. Once you've created a Sellfy account, you can start uploading the tracks you want to sell to your account. Your audio files must be uploaded to your Sellfy account to create a product page URL that you can then insert on your SoundCloud profile. After you've uploaded the file(s), you'll be able to find them in your Products list and simply copy-paste the product page URL in the relevant section on your SoundCloud account. Note: While you can upload files and try out Sellfy's features during the trial period, the checkout for your products (audio files) is not yet enabled. To activate the checkout option and make your tracks available for purchase, you need to select a subscription plan from our pricing page. To find your product page URL, please follow the steps below: - Log in to your Dashboard - Navigate to Products → Digital Products - From your list, select the relevant product - Click Share to the right of the product price - The action will prompt a pop-up window - Click Copy to the right of the link - Paste the URL in your SoundCloud account Paste the URL in SoundCloud You need to paste the product page link retrieved from Sellfy into your SoundCloud profile as shown below. For more detailed instructions, please visit the article Adding a Buy link and Title published by the SoundCloud Help Center. Sell audio files on other platforms Essentially, you can use the product page link from your Sellfy account to sell your music on any other site as long as the site allows for this option in their settings.
https://docs.sellfy.com/article/217-how-to-sell-music-on-sellfy
2021-07-24T08:35:29
CC-MAIN-2021-31
1627046150134.86
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5483228ae4b047e113e4ed63/images/5cebea7d2c7d3a32d2a4d4c6/file-s545HCmuxw.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5483228ae4b047e113e4ed63/images/5cebd7642c7d3a32d2a4d370/file-u4ykrQGbF1.png', None], dtype=object) ]
docs.sellfy.com
This page describes additional configuration that is supported by all the Vyne components including: Vyne is a Spring Boot application. All the Spring Boot and Vyne-specific configuration parameters can be overridden in the following way. By default, Vyne does not require a config server; all the configuration is passed via the command line. To enable the config server, simply add these command-line parameters: By default, Vyne logs everything to standard output (console). Vyne supports exporting logs to Elasticsearch via Logstash. Here is the way to enable the Logstash server:
https://docs.vyne.co/deployment/advanced-configuration
2021-07-24T08:03:53
CC-MAIN-2021-31
1627046150134.86
[]
docs.vyne.co
Custom Events Use custom events to track user activities and key conversions from inside or outside of your app or website, tying them back to corresponding digital engagement campaigns. You can also trigger automation based on custom events. In this guide, we will show you how to implement custom event tracking both via our web and mobile SDKs and via your internal customer tracking systems, tying all digital engagement activities together regardless of source. What are Custom Events? Airship provides out-of-the-box analytics for many different kinds of events through our mobile and web SDKs. Many of these events are standard and therefore applicable to all apps and websites, e.g., opens or tag change events. Custom Events, as the name suggests, are customizable to suit the needs of your app or website. Setting up custom events in your app is easy. We provide ready-made templates for certain common types of events in our iOS, Android, and Web SDKs, and we also provide you the flexibility to set up your own events with just a few lines of code. Custom events can also have properties associated with them. Event properties are key/value pairs of data that can provide more detail and customization to the events you are tracking. Examples include a property for a product SKU on a purchase event, or a category on a viewed video event. Additionally, Real-Time Data Streaming supports streaming custom events to your business systems in real time. See our Data Streaming API Reference for details. Custom Events as Triggers When you set up an automation or journey, you can specify the custom events and custom event properties that will trigger the automation or journey. Server-side events cannot be used to trigger an In-App Automation. You can also personalize your automations and journeys with values from the custom event that triggers your message using Handlebars, Airship's message personalization syntax based on double curly braces ({{handlebars}}), which lets you insert variables and conditional logic in messages and templates. When a member of your audience triggers the custom event, they will receive a personalized message with values from that event. Custom event properties can contain objects and arrays of objects. Use dot notation to access nested properties — parent_property.child_property. For example, if your custom event has a user_name property, you can add {{user_name}} to your message, and anybody receiving the message would see their user_name in the message they receive. Tracking Custom Events Event Templates We provide ready-made templates for our iOS, Android, and Web SDKs to get you started with a number of common account-, media-, and retail-related events. To browse the available templates, see Custom Event Templates. Sample code is provided to get you started quickly. Sample Custom Event Code Tracking custom events in your app is similar to adding an Airship segmentation tag, requiring just a few lines of code to run when you would like to record the action. Below are three simple examples of creating and tracking a custom event, with code samples for iOS, Android, and Web. 
let event = UACustomEvent(name: "event_name", value: 123.12) // Set custom event properties var propertyDictionary = Dictionary <String, Any>() propertyDictionary["boolean_property"] = true propertyDictionary["string_property"] = "string_value" propertyDictionary["number_property"] = 11 event.properties = propertyDictionary // Record the event in analytics event.track() // Create and name event UACustomEvent *event = [UACustomEvent eventWithName:@"consumed_content"]; // Set custom event properties NSMutableDictionary<NSString *, id> *propertyDictionary = [NSMutableDictionary dictionary]; [propertyDictionary setValue:@YES forKey:@"boolean_property"]; [propertyDictionary setValue:@"string_value" forKey:@"string_property"]; [propertyDictionary setValue:@11 forKey:@"number_property"]; event.properties = propertyDictionary; // Then record it [event track]; Android // Create an event via its builder CustomEvent.Builder builder = new CustomEvent.Builder("consumed_content"); // Set custom event properties on the builder builder.addProperty("bool_property", true); builder.addProperty("string_property", "string_property_value"); builder.addProperty("int_property", 11); builder.addProperty("double_property", 11.0d); builder.addProperty("long_property", 11L); ArrayList<String> collection = new ArrayList<String>(); collection.add("string_array_value_1"); collection.add("string_array_value_2"); collection.add("string_array_value_3"); builder.addProperty("collection_property", JsonValue.wrapOpt(collection)); // Then create and record it CustomEvent event = builder.create(); event.track(); Web // Create and name an event var event = new sdk.CustomEvent("consumed_content") // Then record it event.track() Android // Create and name a simple event - and with a value CustomEvent event = new CustomEvent.Builder("event_name") .setEventValue(123.12) .create(); // Record the event UAirship.shared().getAnalytics().addEvent(event); Web // Create and name a simple event - and with a value var event = new sdk.CustomEvent('event_name', 123.12) // Record the event event.track() Google Analytics Tracker If you are already using Google Analytics, we provide iOS and Android SDK extensions that proxy Google Analytics events as Airship custom events. To learn more, see iOS Custom Events, Android Custom Events, and our GA Tracker repos on Github: Server-Side Events Server-side events are sent through the Custom Events API. When you submit an event, you’ll provide the channel ID or named user ID of the user you want to associate the event with. You can use Custom Event Triggers to send automated messages to the channel ID or named user associated with each event. Attributing events to named users can help you better represent user actions and trigger automations for individual users without having to map channels to external IDs. Server-side events are involved in a significant number of Airship integrations, including Radar, a location platform for apps. 
Sample Custom Event [ { "occurred": "2016-05-02T02:31:22", "user": { "named_user_id": "hugh.manbeing" }, "body": { "name": "purchased", "value": 239.85, "transaction": "886f53d4-3e0f-46d7-930e-c2792dac6e0a", "interaction_id": "your.store/us/en_us/pd/shoe/pid-11046546/pgid-10978234", "interaction_type": "url", "properties": { "description": "Sneaker purchase", "brand": "Victory Sneakers", "colors": [ "red", "blue" ], "items": [ { "text": "New Line Sneakers", "price": "$ 79.95" }, { "text": "Old Line Sneakers", "price": "$ 79.95" }, { "text": "Blue Line Sneakers", "price": "$ 79.95" } ], "name": "Hugh Manbeing", "userLocation": { "state": "CO", "zip": "80202" } }, "session_id": "22404b07-3f8f-4e42-a4ff-a996c18fa9f1" } } ]. Custom Event Properties Set specific properties and assign a range of values that must be met in order to trigger automation rules. - API: Custom Event Selectors - Dashboard: Manage Events Use Cases We recommend starting by identifying the 3-5 most important actions that users perform in your app or site. These events will appear in each message report, and in an aggregate app report, with information on whether each event occurred on an Airship delivery channel (Landing Page or Message Center), or in a custom location in your app or website. These reports display summary data, and a CSV export option will provide full data you can slice and dice as needed or import into business intelligence or analytics tools. See View Attributed Events for detail about the values in the Event Tracking Report. Push Attribution Airship JavaScript Interface The Airship JavaScript interface runs in your WebView, enabling the page to send custom events. iOS The Airship JavaScript interface can be added to any WKWebView whose navigation delegate is an instance of UANativeBridge. Custom WebViews can also implement the UANativeBridgeExtensionDelegate and UANativeBridgeDelegate protocols to extend the JavaScript environment or respond to SDK-defined JavaScript commands. self.nativeBridge = [UANativeBridge nativeBridge]; self.nativeBridgeExtension = [[UAMessageCenterNativeBridgeExtension alloc] init]; self.nativeBridge.nativeBridgeExtensionDelegate = self.nativeBridgeExtension; self.nativeBridge.nativeBridgeDelegate = self; self.nativeBridge.forwardNavigationDelegate = self; self.nativeBridge = UANativeBridge() self.nativeBridgeExtension = UAMessageCenterNativeBridgeExtension() self.nativeBridge.nativeBridgeExtensionDelegate = self.nativeBridgeExtension self.nativeBridge.nativeBridgeDelegate = self self.nativeBridge.forwardNavigationDelegate = self Make sure to remove the delegate when the controller is being deallocated: - (void)dealloc { self.webview.navigationDelegate = nil; } deinit { self.webView.navigationDelegate = nil } Optionally, enable UAirship.close() by having the controller implement the close method in the UANativeBridgeDelegate protocol: - (void)close { // Close the current window } func close() { // Close the current window } Allow List Rules An allow list rule needs to be added for any URL that is not hosted by Airship: [[UAirship shared].URLAllowList addEntry:@"https://*.yourdomain.com"]; UAirship.shared().urlAllowList.addEntry("https://*.yourdomain.com") Allow list rules can also be defined in AirshipConfig.plist: <key>URLAllowList</key> <array> <string>https://*.yourdomain.com</string> </array> See UAURLAllowList for more details on creating valid URL patterns. 
Android and Amazon Allow List Rules Add an allow list rule for any URL that is not hosted by Airship: UAirship.shared().getUrlAllowList().addEntry("https://*.yourdomain.com"); Alternatively, define the allow list rule in airshipconfig.properties: urlAllowList = https://*.yourdomain.com See UrlAllowList for more details on creating valid URL patterns.
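For the server-side events described above, the payload shape is the one shown in the Sample Custom Event. As a hedged illustration only, the sketch below posts that kind of payload from Python with the requests library; the base URL, endpoint path, and bearer-token auth header are assumptions, so confirm them against the Custom Events API reference before using this.
# Sketch of submitting a server-side custom event via the Custom Events API.
# Endpoint URL and auth scheme are assumptions; the payload mirrors the
# documented sample above. The token is a placeholder credential.
import requests

payload = [{
    "occurred": "2016-05-02T02:31:22",
    "user": {"named_user_id": "hugh.manbeing"},
    "body": {
        "name": "purchased",
        "value": 239.85,
        "properties": {"brand": "Victory Sneakers"},
    },
}]

resp = requests.post(
    "https://go.urbanairship.com/api/custom-events",  # assumed base URL + path
    json=payload,
    headers={
        "Authorization": "Bearer <token>",
        "Accept": "application/vnd.urbanairship+json; version=3",
    },
)
resp.raise_for_status()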
https://docs.airship.com/guides/messaging/user-guide/data/custom-events/
2021-07-24T07:13:00
CC-MAIN-2021-31
1627046150134.86
[array(['https://docs.airship.com/images/custom_events_flowchart.png', 'Custom Events Custom Events'], dtype=object) ]
docs.airship.com
Location Airship supports location updates. Airship exposes a very simple, high-level API for requesting location updates, through UALocation. AirshipLocation In Airship SDK 13.0, location services are part of an optional standalone framework, called AirshipLocation. In order to access Airship location services, first add a separate import statement in your code, as shown below. You can ignore the import statement if you use CocoaPods. import AirshipLocation @import AirshipLocation; To access the location instance, simply call the following method. UALocation *location = [UALocation shared]; let location = UALocation.shared() Enabling location requires adding the appropriate usage-description keys (such as NSLocationAlwaysAndWhenInUseUsageDescription) to your application's Info.plist. UALocation.shared().isLocationUpdatesEnabled = true [UALocation sharedLocation].locationUpdatesEnabled = YES; When location updates are enabled for the first time, the user will be prompted for permission. This behavior can be changed by disabling autoRequestAuthorizationEnabled. Allow Background Location Once the app backgrounds, location updates will be suspended by default, then resumed again once the app is foregrounded. To allow location updates to continue in the background, enable backgroundLocationUpdatesAllowed: UALocation.shared().isBackgroundLocationUpdatesAllowed = true [UALocation shared].backgroundLocationUpdatesAllowed = YES;
https://docs.airship.com/platform/ios/location/
2021-07-24T07:25:12
CC-MAIN-2021-31
1627046150134.86
[array(['https://docs.airship.com/images/usage-description-add.png', 'Location Location'], dtype=object) ]
docs.airship.com
An editing extension adds image effects and filters to Canva. Users can access these effects via the Effects panel and apply them to their images. This quick start guide demonstrates how to create an editing extension that inverts the colors of the user's image. Log in to the Developer Portal. Navigate to Your integrations. Click Create an app. Enter a name for the app in the App name field. Agree to the Canva Developer Terms. Click Create app. Select Editing. Select Input file types > JPG/PNG. Select Extension source > JavaScript file. At its most basic, an editing extension is a JavaScript file that Canva runs inside an iframe. Canva passes the user's image (and other metadata) into the iframe and the extension is responsible for returning an image back to Canva. For this quick start, copy the following code into a JavaScript file:
// Get helper methods for working with images
const { imageHelpers } = window.canva;
// Initialize the client
const canva = window.canva.init();
// The extension has loaded
canva.onReady(async (opts) => {
  // Convert the CanvaElement into a CanvaImageBlob
  const image = await imageHelpers.fromElement(opts.element, "preview");
  // Convert the CanvaImageBlob into a HTMLCanvasElement
  const canvas = await imageHelpers.toCanvas(image);
  // Render the user's image in the iframe
  document.body.appendChild(canvas);
  // Get a 2D drawing context
  const context = canvas.getContext("2d");
  // Invert the colors of the user's image
  context.filter = "invert(100%)";
  // Draw the inverted image into the HTMLCanvasElement
  context.drawImage(canvas, 0, 0, canvas.width, canvas.height);
  // Render the control panel. (This is always required, even if
  // the extension doesn't have rich controls.)
  canva.updateControlPanel([]);
});
// Canva has requested the extension to update the user's image
canva.onImageUpdate(async (opts) => {
  // Get the updated image
  const img = await imageHelpers.toImageElement(opts.image);
  // Get the HTMLCanvasElement that contains the user's image
  const canvas = document.querySelector("canvas");
  // Get a 2D drawing context
  const context = canvas.getContext("2d");
  // Draw the updated image into the HTMLCanvasElement
  context.drawImage(img, 0, 0, canvas.width, canvas.height);
});
// Canva has requested the extension to save the user's image
canva.onSaveRequest(async () => {
  // Get the HTMLCanvasElement that contains the user's image
  const canvas = document.querySelector("canvas");
  // Return the image to Canva as a CanvaImageBlob
  return await imageHelpers.fromCanvas("image/jpeg", canvas);
});
Then upload the file to the extension's JavaScript file field. This code creates an editing extension that inverts the colors of the user's image: The comments explain the basic lifecycle of the extension. To learn how to create the extension from scratch, see Image processing. You don't have to repeatedly upload a JavaScript file to the Developer Portal while developing an editing extension. You can set up a Development URL to streamline the workflow. Select Preview > Editing. When the Canva editor opens, wait for the app to load. Click Connect. You only have to connect (install) an app when using it for the first time. On return visits, the app immediately loads.
https://docs.developer.canva.com/apps/extension-points/editing-extensions/quick-start
2021-07-24T07:13:45
CC-MAIN-2021-31
1627046150134.86
[]
docs.developer.canva.com
... Running Eureka server Running Vyne query server Running one or many Pipeline Runners Taxi schema for ingested data ... The Pipeline Orchestrator can be accessed via a REST API or through a basic UI at
https://docs.vyne.co/pipelines/pipeline-orchestrator
2021-07-24T07:16:38
CC-MAIN-2021-31
1627046150134.86
[]
docs.vyne.co
This example demonstrates how to write out the contents of a Spark DataFrame to a GeoJSON file. This is useful for viewing columns of type geometry in your RasterFrame in an external GIS software. In this example, we run through the steps of 1) acquiring imagery scenes from the EarthAI Catalog, 2) using RasterFrames to read imagery, and 3) writing a RasterFrame to GeoJSON. Import Libraries We will start by importing all of the Python libraries used in this example. from earthai.init import * import earthai.chipping.strategy from shapely.geometry import Polygon import pyspark.sql.functions as F import ipyleaflet import geopandas Query the EarthAI Catalog We read in a GeoJSON file containing U.S. state boundaries and filter the GeoDataFrame to the continental U.S. The chip reader requires that the geometries passed to it are of type Polygon, so we convert the MultiPolygon representing Virginia to a Polygon. We use the geometry column in the GeoDataFrame to query the EarthAI catalog for MODIS surface reflectance data from September 1, 2020 covering the continental United States. states_url ='' states_gdf = geopandas.read_file(states_url) states_gdf = states_gdf[~(states_gdf["name"].isin(['Hawaii', 'Alaska']))] # convert MultiPolygon to Polygon in Virginia va_polygons = list(states_gdf[states_gdf.name == 'Virginia'].iloc[0].geometry) states_gdf.loc[states_gdf.name == 'Virginia', ['geometry']] = va_polygons[1] cat = earth_ondemand.read_catalog(states_gdf.geometry, start_datetime='2020-09-01', end_datetime='2020-09-01', collections='mcd43a4') Read in MODIS Imagery We join the catalog back to the GeoDataFrame containing state boundaries in order to match the state boundary to the intersecting image scene. This step is critical for use of the chip reader since the chipping strategy needs the state boundary polygon. cat = geopandas.sjoin(cat, states_gdf, how='right').rename(columns={"geometry":"us_bounds"}) We use spark.read.chip to read only the imagery intersecting state boundaries. We read in the B01 (red) band. The feature-aligned grid strategy creates a grid across each state boundary polygon using the specified tile_dimensions, and returns the generated chips. To view all of the available bands for the MODIS collection, you can run earth_ondemand.item_assets('mcd43a4'). rf = spark.read.chip(cat, catalog_col_names=['B01'], geometry_col_name='us_bounds', chipping_strategy=earthai.chipping.strategy.FeatureAlignedGrid(256), tile_dimensions=(256,256)) Write a RasterFrame to GeoJSON We use Geopandas to write out our data as GeoJSON because GeoPandas has a built-in method to write out a GeoDataFrame as a GeoJSON file. The expected coordinate reference system (CRS) for GeoJSON objects is "EPSG:4326", so as a first step, we reproject our chip outlines to this CRS. rf = rf.select(F.col('B01').alias('red')) \ .withColumn('chip_outline_4326', st_reproject(rf_geometry('red'), rf_crs('red'), F.lit('EPSG:4326'))) Then, we select chip_outline_4326 and transform our RasterFrame to a Pandas DataFrame using toPandas(). chips_df = rf.select('chip_outline_4326').toPandas() Next, we transform our Pandas DataFrame, chips_df, to a GeoPandas GeoDataFrame, chips_gdf, and specify a geometry and crs column. chips_gdf = geopandas.GeoDataFrame(chips_df, geometry='chip_outline_4326', crs='EPSG:4326') Finally, we write our GeoDataFrame as a GeoJSON using the built-in to_file function and specifying "GeoJSON" as the driver. chips_gdf.to_file("chip_boundaries.json", driver="GeoJSON") chip_boundaries.json.
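As a quick sanity check, which is not part of the original walkthrough, you can read the file back with GeoPandas and confirm that the chip outlines and the EPSG:4326 CRS survived the round trip.
# Read the GeoJSON back and inspect it -- a simple round-trip check.
import geopandas

check_gdf = geopandas.read_file("chip_boundaries.json")
print(check_gdf.crs)    # expect EPSG:4326
print(len(check_gdf))   # number of chip outlines written
print(check_gdf.head())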
https://docs.astraea.earth/hc/en-us/articles/360051613051-Writing-from-a-Spark-DataFrame-to-a-GeoJSON-File
2021-07-24T07:28:10
CC-MAIN-2021-31
1627046150134.86
[]
docs.astraea.earth
Ball Tracking Keeping track of where all the balls are at any given time is a big part of a pinball machine. There are four components that make up MPF’s ball tracking and management system: - The Ball Controller, which is a core MPF module that manages everything. - Individual Ball Devices (troughs, locks, etc.) which track how many balls they’re currently holding, request new balls, eject balls, etc. - The Playfields device which is a special type of ball device that tracks how many balls are loose on the playfield at any given time. - Individual Diverters which are integral in routing balls to devices that request them. These four components are active at all times—regardless of whether or not a game is in progress. In other words, if MPF is running, it’s tracking balls. Note that tracking the number of balls on a playfield is somewhat complex. See the How MPF tracks the number of balls on a playfield guide for important details about how this works in MPF.
https://docs.missionpinball.org/en/latest/game_logic/ball_tracking/
2021-07-24T08:37:23
CC-MAIN-2021-31
1627046150134.86
[]
docs.missionpinball.org
... This guide includes the following sections: Viewing Nearmap DSM Imagery - Selecting View → Zoom to Layer. Classifying According to Z Value You can classify the layer according to Z value. - Double-click the DSM layer in the Layers Panel to access the Layer Properties, then select the Style properties option: - Under Band rendering, change the Render type to Singleband pseudocolor, change the Color to BrBG, and click Classify: - Click Apply, then OK: The elevation values are displayed on the Layers Panel.
https://docs.nearmap.com/pages/diffpagesbyversion.action?pageId=11206980&originalVersion=11&revisedVersion=19
2021-07-24T07:35:46
CC-MAIN-2021-31
1627046150134.86
[]
docs.nearmap.com
Category:Neuroinformatics Neuroinformatics is the science of the organization of neuroscience data, through the application of computational models and analytical tools. It stands at the intersection of neuroscience and information science and is currently being applied in three main directions: - the development of tools and databases for management and sharing of neuroscience data at all levels of analysis, - the development of tools for analyzing and modeling neuroscience data, - the development of computational models of the nervous system and neural processes. Experts These experts have registered specific competence on this subject: Software Pages in category "Neuroinformatics" This category contains only the following page.
https://docs.snic.se/wiki/Category:Neuroinformatics
2021-07-24T07:35:29
CC-MAIN-2021-31
1627046150134.86
[]
docs.snic.se
Fido The Fido project aims to provide web hosting and computation services for Swedish bioinformatic research, partly to facilitate bioinformatics, but primarily to promote and increase exposure for Swedish research by providing a safe, reliable and secure web portal interface. The focus is on security, modularity, scalability and ease of adding new functionality. The Fido platform is now open for public access. Contents Services The first publicly accessible Fido service was a BLAST service for sequences from the Datisca glomerata nodule transcriptome project, and is available here. Work has been initiated to add three more services to the system, in collaboration with three different research groups. Technologies Some technologies used in Fido include SLURM, Apache, MySQL, Python and Django. Work has been initiated to enable further scheduling integration, like for instance ARC for elastic offloading of large scale jobs to other resources. Hardware Fido currently is a minimal scalable cluster resource (one system server, one front end server, and two worker servers). This original system is in the process of being extended with more worker nodes, and installations at other sites are currently under way. User:Joel Hedlund (NSC) is the Fido project leader and development supervisor. Additionally, in April 2013 Jonas Hagberg was employed by BILS to do Fido development.
https://docs.snic.se/wiki/Fido
2021-07-24T07:45:02
CC-MAIN-2021-31
1627046150134.86
[]
docs.snic.se
Service Backups¶ Everything within the root of the mounted volume directory will be stored in an EmbassyOS backup. This includes the config (config.yaml) and properties (stats.yaml) files, as well as any other persisted data within the volume directory. For restoration purposes, it might be beneficial to ignore certain files or folders. For instance, ignore the shared/public folder that is mounted for dependencies that expose this feature as it causes data inconsistencies on restore. In this case, create a .backupignore file. This file contains a list of relative paths to the ignored files. Example¶ The btcpayserver wrapper demonstrates a good use of a backupignore template. Ultimately, /datadir/.backupignore gets populated with: /root/volumes/btcpayserver/start9/public /root/volumes/btcpayserver/start9/shared
https://docs.start9.com/contributing/services/backups.html
2021-07-24T06:50:42
CC-MAIN-2021-31
1627046150134.86
[]
docs.start9.com
Women's We11ness Week Spabreaks.com will be hosting the biggest massage event the world has ever seen by inviting over 1000 women to enjoy a massage at each of our venues around the world at 11am on 19 September 2017. To show your interest in being part of this, please fill in the questions below. Please note that places are limited and it will be on a first come, first served basis. The massages will be free of charge, but each spa will be asking for a minimum donation of £20 per massage, which will be given in full to the Willow Foundation.
https://www.docs.google.com/forms/d/e/1FAIpQLScLdVykEom7RDXoXFVinxc4VoTEkM87SRZkGcAZ5yipPn3pLQ/viewform
2021-07-24T08:08:59
CC-MAIN-2021-31
1627046150134.86
[]
www.docs.google.com
Liquidity mining/incentives was implemented in the Main Aave market via AIP-16, enabling incentives for both depositing and borrowing. Incentives are also currently implemented on the Polygon market, using the same implementation details as described on this page. Your user(s) should already have a position in the protocol. Depending on the market and incentives that are enabled, they should already have a deposit, borrow, or both, in one of the incentivised assets. To check if an asset is currently incentivised, use the getAssetData() method, passing in the associated aToken or debtToken address of the incentivised asset. To get a list of associated tokens per asset, see also getReserveTokensAddresses() in the Protocol Data Provider. Call the getRewardsBalance() method, passing in the relevant token addresses (aTokens and/or debtTokens) as an array. Call the claimRewards() method, passing in the relevant token addresses (aTokens and/or debtTokens) as an array. The msg.sender must match the user's address that has accrued the rewards. A claimer must have been set via the setClaimer() method, enabling the caller to claim on the user's behalf. Call the claimRewardsOnBehalf() method, passing in the relevant token addresses (aTokens and/or debtTokens) as an array. The msg.sender must have been previously set via setClaimer(). For the main market, stkAAVE is rewarded and automatically accrues interest based on the Staking AAVE parameters once claimed. There is an associated 10 day cool down period to convert stkAAVE to AAVE, with a 2 day redeem period to do so. Ensure that this information is presented to the user and handled correctly. For the Polygon market, WMATIC is rewarded with no 'lock up' period. Ensure that you account for the difference between WMATIC and the native MATIC, specifically that WMATIC needs to be unwrapped to MATIC. This can be done by calling withdraw() on the WMATIC contract. APY = normalizedEmissionPerSecond * rewardTokenPriceInEth * SECONDS_PER_YEAR / normalizedTotalTokenSupply Note: normalizedEmissionPerSecond = emissionPerSecond / REWARD_TOKEN_DECIMALS. The reward token is AAVE for mainnet and MATIC for Polygon. normalizedTotalTokenSupply = tokenTotalSupplyNormalized * tokenPriceInEth, where token is the incentivized a/s/v token and the token price is the same as the underlying asset's price. You can get the price for the reward token as well as the reserve token from AavePriceOracle. function claimRewards(address[] calldata assets, uint256 amount, address to) Claims the accrued rewards for the assets, accumulating any pending rewards. function claimRewardsOnBehalf(address[] calldata assets, uint256 amount, address user, address to) Claims the accrued rewards for the assets, accumulating any pending rewards, on behalf of user. The user must have previously called setClaimer(), setting the msg.sender as the approved claimer. function setClaimer(address user, address caller) Sets an authorised caller to claim all rewards on behalf of the user. This can only be set via Governance. getAssetData() is currently available only on ethereum markets. In case you are working on Polygon markets (main or mumbai), please read data from the public mapping assets directly 😅 function getAssetData(address asset) Returns the asset index, the emissions per second (i.e. current rewards rate), and the last updated timestamp. function getClaimer(address user) Returns the authorised claimer of user's accrued rewards. See also setClaimer(). 
function getRewardsBalance(address[] assets, address user) Returns the total rewards of user for assets. function getUserAssetData(address user, address asset) Returns the data of user on a distribution from asset function getUserUnclaimedRewards(address user) Returns the unclaimed accumulated rewards of user for their last action in the protocol. function getDistributionEnd() Returns the end of the distribution period
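To make the APY formula above concrete, here is a small Python sketch that plugs in the quantities it names. The numbers are illustrative placeholders only; in practice emissionPerSecond comes from getAssetData(), and the prices and total supply come from the price oracle and the token contracts.
# Illustrative APY computation following the formula in this guide.
# All inputs are placeholder values, not live on-chain data.
SECONDS_PER_YEAR = 31_536_000

emission_per_second = 1e16          # raw units, as returned by getAssetData()
reward_token_decimals = 1e18        # e.g. an 18-decimal reward token
reward_token_price_in_eth = 0.05    # from the Aave price oracle
token_total_supply = 200_000_000    # incentivized a/s/v token supply (normalized)
token_price_in_eth = 0.0004         # same as the underlying asset's price

normalized_emission_per_second = emission_per_second / reward_token_decimals
normalized_total_token_supply = token_total_supply * token_price_in_eth

apy = (normalized_emission_per_second * reward_token_price_in_eth
       * SECONDS_PER_YEAR / normalized_total_token_supply)
print(f"incentive APY: {apy:.2%}")  # ~19.7% with these placeholder inputs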
https://docs.aave.com/developers/guides/liquidity-mining
2021-07-24T08:57:07
CC-MAIN-2021-31
1627046150134.86
[]
docs.aave.com
Issuing an asset Business networks are typically created to transact a specific asset type or to trade a generic asset within a specific context. In either case there is at least one asset issuer (for example a bank) who is responsible for creating assets onto the ledger. The user who has signing authority for the creation of the assets is the asset issuer. There may be multiple asset issuers in the business network issuing different instances of the same asset type or potentially different asset types. In some cases, such as trade finance, any party may be an asset issuer at some point.
https://docs.corda.net/docs/corda-os/4.8/introduction/assetissuer.html
2021-07-24T08:08:00
CC-MAIN-2021-31
1627046150134.86
[]
docs.corda.net
mender-connect is a daemon responsible for handling bidirectional (websocket) communication with the Mender server. The daemon is responsible for implementing a range of troubleshooting features for the device as well as several enhancements to the mender-client. Following is a complete reference of the configuration options for mender-connect along with the default values. The default configuration path is /etc/mender/mender-connect.conf. { "HttpsClient": { "Certificate": "", "Key": "", "SSLEngine": "" }, "ReconnectIntervalSeconds": 5, "ServerCertificate": "", "Servers": null, "ServerURL": "", "SkipVerify": false, "Limits": { "Enabled": true, "FileTransfer": { "Chroot": "/var/lib/mender/filetransfer", "OwnerGet": ["pi","root"], "GroupGet": ["games","users"], "OwnerPut": "root", "GroupPut": "pi", "MaxFileSize": 4, "FollowSymLinks": true, "AllowOverwrite": true, "RegularFilesOnly": true, "PreserveOwner": true, "PreserveGroup": true, "PreserveMode": true, "Counters": { "MaxBytesTxPerHour": 1048576, "MaxBytesRxPerHour": 1048576 } } }, "FileTransfer": { "Disable": false }, "MenderClient": { "Disable": false }, "PortForward": { "Disable": false }, "ShellCommand": "/bin/sh", "ShellArguments": ["--login"], "Sessions": { "ExpireAfter": 0, "ExpireAfterIdle": 0, "MaxPerUser": 1, "StopExpired": false }, "Terminal": { "Disable": false, "Height": 40, "Width": 80 }, "User": "" } HttpsClient: Client TLS configuration. Certificate: Path to client certificate. Key: Path to client certificate private key. SSLEngine: OpenSSL cryptographic engine. ReconnectIntervalSeconds: Number of seconds to wait before reconnecting on connection errors. ServerCertificate: Path to a custom certificate trust store. mender-connect will automatically use the system-wide certificate store. Servers: Deprecated List of server URLs to connect with*. ServerURL: Deprecated Server URL to connect with*. SkipVerify: Skip TLS certificate verification. Servers and ServerURL are deprecated and unused since mender-connect version 1.0.0 - the values are automatically configured by the mender-client. There are certain features that you would want to keep under finer control than just enable/disable. File Transfer is one example; imagine you would like to restrict the transfers to a certain user or a group, or limit the average number of bytes that a device can transfer in an hour. The Limits section can be helpful here. Limits: Limits configuration options. Enabled: Enable limits control. FileTransfer: File Transfer limits configuration. The FileTransfer section in the Limits configuration block has the following options available: Chroot: limit the directory from which you can transfer files and to which you can upload them. OwnerGet: you can only transfer the files owned by the users on this list. GroupGet: you can only transfer the files that have a group from this list. OwnerPut: all the files you upload to a device will have this username set as an owner. GroupPut: all the files you upload to a device will have this group set. MaxFileSize: the maximal file size that you can download from or upload to a device. FollowSymLinks: if set to true, mender-connect will resolve all the links in the target or destination path and the transfer will proceed. If false, and if any part of an upload or download path is a link, mender-connect will refuse to carry out the request. AllowOverwrite: if set to true, mender-connect will overwrite the target file path when processing the upload request. 
If set to false, mender-connect will refuse to overwrite the file. RegularFilesOnly: allow only the transfer of regular files. PreserveOwner: preserve the file owner from the upload request. PreserveGroup: preserve the file group from the upload request. Counters: Bytes transmitted/bytes received limits. MaxBytesTxPerHour: the maximal outgoing bytes that a device can transmit per hour, calculated as a moving exponential average. MaxBytesRxPerHour: the maximal incoming bytes that a device can receive per hour, calculated as a moving exponential average. The Mender Troubleshoot add-on package is required. See the Mender features page for an overview of all Mender plans and features. FileTransfer: File Transfer configuration options. Disable: Disable file transfer. MenderClient: Configuration for mender-client dbus API. Disable: Disable mender-client dbus hooks. The Mender Troubleshoot add-on package is required. See the Mender features page for an overview of all Mender plans and features. PortForward: Configuration for port forwarding Disable: Disable the port forwarding feature. The Mender Troubleshoot add-on package is required. See the Mender features page for an overview of all Mender plans and features. ShellCommand: Command executed initiating a new remote terminal session. ShellArguments: The command line arguments passed to the shell when spawned (defaults to Sessions: Configuration for remote terminal sessions. StopExpired: Terminate remote terminal sessions after ExpireAfter: Time in seconds until a remote terminal expires.* ExpireAfterIdle: Time in seconds until a remote terminal expires after not receiving any traffic.* MaxPerUser: Maximum number of terminal sessions allowed per user. Terminal: Terminal configuration options. Disable: Disable the remote terminal feature. Height: Terminal height in number of characters. Width: Terminal width in number of characters. User: Login user for the remote terminal session. * ExpireAfter and ExpireAfterIdle are mutually exclusive configuration options; only one option can be configured at a time. By default, mender-connect runs as a systemd service. The easiest way to troubleshoot any issues related to mender-connect is by inspecting the service logs: journalctl -u mender-connect If you're having difficulty troubleshooting an issue, don't hesitate to ask our community on Mender Hub.
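Since the configuration file is plain JSON, a few lines of Python are enough to inspect the limits currently set on a device. This is just a convenience sketch for checking the options described above, not a part of mender-connect itself; it only assumes the default configuration path.
# Print the File Transfer limits from the default mender-connect config path.
import json

with open("/etc/mender/mender-connect.conf") as f:
    conf = json.load(f)

limits = conf.get("Limits", {})
transfer = limits.get("FileTransfer", {})
print("limits enabled:", limits.get("Enabled"))
print("chroot:", transfer.get("Chroot"))
print("max file size:", transfer.get("MaxFileSize"))
print("max bytes tx/hour:", transfer.get("Counters", {}).get("MaxBytesTxPerHour"))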
https://docs.mender.io/development/add-ons/reference
2021-07-24T07:39:15
CC-MAIN-2021-31
1627046150134.86
[]
docs.mender.io
What is Azure Sentinel? Connect to all your data To on-board Azure Sentinel, you first need to connect to your security sources. Azure Sentinel comes with a number of connectors for Microsoft solutions, available out of the box and providing real-time integration, including Microsoft 365 Defender (formerly Microsoft Threat Protection) solutions, and Microsoft 365 sources, including Office 365, Azure AD, Microsoft Defender for Identity, and more. Note This service supports Azure Lighthouse, which lets service providers sign in to their own tenant to manage subscriptions and resource groups that customers have delegated. Workbooks After you connect your data sources to Azure Sentinel, you can monitor the data using the Azure Sentinel integration with Azure Monitor Workbooks, which provides versatility in creating custom workbooks. While Workbooks are displayed differently in Azure Sentinel, it may be useful for you to see how to Create interactive reports with Azure Monitor Workbooks. Azure Sentinel allows you to create custom workbooks across your data, and also comes with built-in workbook templates to allow you to quickly gain insights across your data as soon as you connect a data source. Analytics To help you reduce noise and minimize the number of alerts you have to review and investigate, Azure Sentinel uses analytics to correlate alerts into incidents. Incidents are groups of related alerts that together create an actionable possible threat that you can investigate and resolve. Use the built-in correlation rules as-is, or use them as a starting point to build your own. Azure Sentinel also provides machine learning rules to map your network behavior and then look for anomalies across your resources. These analytics connect the dots by combining low-fidelity alerts about different entities into potential high-fidelity security incidents. Security automation & orchestration Automate your common tasks and simplify security orchestration with playbooks that integrate with Azure services as well as your existing tools. Built on the foundation of Azure Logic Apps, Azure Sentinel's automation and orchestration solution provides a highly extensible architecture that enables scalable automation as new technologies and threats emerge. To build playbooks with Azure Logic Apps, you can choose from a growing gallery of built-in playbooks. These include 200+ connectors for services such as Azure Functions (which let you apply custom logic in code), ServiceNow, Jira, Zendesk, HTTP requests, Microsoft Teams, Slack, Windows Defender ATP, and Cloud App Security. For example, if you use the ServiceNow ticketing system, you can use Azure Logic Apps to automate your workflows and open a ticket in ServiceNow each time a particular event is detected. Investigation Currently in preview, Azure Sentinel deep investigation tools help you to understand the scope and find the root cause of a potential security threat. You can choose an entity on the interactive graph to ask interesting questions for a specific entity, and drill down into that entity and its connections to get to the root cause of the threat. Hunting Use Azure Sentinel's powerful hunting search-and-query tools, based on the MITRE framework, which enable you to create bookmarks for interesting events, enabling you to return to them later, share them with others, and group them with other correlating events to create a compelling incident for investigation. 
Community The Azure Sentinel community is a powerful resource for threat detection and automation. Our Azure Sentinel..
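As a rough illustration of the hunting queries described above, here is a hypothetical KQL (Kusto Query Language) query; the SecurityEvent table and its columns are an assumption that only holds if the corresponding data connector is enabled in your workspace.

// Hypothetical hunting query: spot accounts with many failed sign-ins per hour.
// SecurityEvent, EventID, Account and TimeGenerated are assumed to exist in
// your Log Analytics workspace; adjust to your own schema.
SecurityEvent
| where EventID == 4625
| summarize FailedLogons = count() by Account, bin(TimeGenerated, 1h)
| where FailedLogons > 10
| order by FailedLogons desc

Results like these can be bookmarked and grouped into an incident for investigation, as described in the Hunting section.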
https://docs.microsoft.com/en-us/azure/sentinel/overview?WT.mc_id=modinfra-23498-thmaure
Bar Charts are the most basic type of chart. They are used when you want to compare different categories. In order to create bar charts, you need 1 dimension and 1 measure. For example, if you want to see the number of orders per project, you would use a bar chart, as it will help you compare which project needs the most resources.

The above figure is a bar chart with horizontal bars. The different bars represent different projects and the length of each bar is the value of the "count of total work orders". Generally, use this bar chart when you have a lot of categories to look at. However, with fewer categories you could also use a column chart, which presents the data in a similar fashion.

You can edit the colours by going to Edit → Series. You can edit the X Axis name by going to Edit → X, and likewise the Y Axis name by going to Edit → Y.

When you add a second dimension to the above chart, you can create 3 other types of bar charts, as shown below. In the above case, if you want to see orders by Projects and Suppliers, you can create:

Grouped Bar Charts: The first dimension you choose forms the groups. Colours are used to represent the second dimension. The height of the bars is based on the value of the measure you choose. In this case, the suppliers are represented by colour and the projects form the groups, where the height of the bars is the value of the measure "Total Work Orders".

Stacked Bar Charts: The first dimension you choose forms the groups. Colours are used to represent the second dimension. In a stacked bar chart, results are stacked on top of each other. The height of each coloured segment is based on the value of the measure you choose. In this case you can see that the total height of the bars is the same as the work orders per project. Within a project, the number of orders that need to be fulfilled per supplier is represented by the different colours.

Stacked Percentage Bar Charts: The height of all the bars is scaled to 100% to show the proportion of the second dimension within the groups formed by the first dimension. In this case, the chart shows each supplier (each representing a different colour) and displays the proportion of the orders that they will fulfil, e.g. "AB Supplier" will fulfil 63% of orders on the project "3 Weeds Hotel".

For more information, please see the video below.
https://docs.assignar.com/en/articles/3718913-insights-chapter-4-bar-charts
GroupDocs.Viewer Cloud 21.3 Release Notes
This page contains release notes for GroupDocs.Viewer Cloud 21.3.
Major Features
Updated PdfOptions to accept an array of Permissions instead of a single value.
Full List of Issues Covering all Changes in this Release
Public API and Backward Incompatible Changes
Changed the type of the option PdfViewOptions.Permissions from the enum Permissions to a list of strings, so it is possible to set multiple values that are combined into the document permissions. Note: permissions are applied only if the PermissionsPassword property is set.
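As a hedged illustration of the change, the snippet below shows how the new list-valued option might be set from the .NET SDK; only PdfOptions, Permissions, and PermissionsPassword come from these release notes, while the surrounding option names and the permission strings themselves are assumptions that should be checked against the SDK reference.

// Minimal C# sketch, assuming the GroupDocs.Viewer Cloud .NET SDK.
var pdfOptions = new PdfOptions
{
    // 21.3: Permissions is now a list of strings that are combined into the
    // document permissions (the string values here are hypothetical).
    Permissions = new List<string> { "DenyPrinting", "DenyModification" },
    // Permissions are applied only if PermissionsPassword is set.
    PermissionsPassword = "p123"
};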
https://docs.groupdocs.cloud/viewer/groupdocs-viewer-cloud-21-3-release-notes/
color_correction_profile:
Config file section

The color_correction_profiles: section of your light_settings: is where you list the color correction profiles for your lights.

Optional settings
The following sections are optional in the color_correction_profile: section of your config. (If you don't include them, the default will be used.)

gamma:
Single value, type: number (will be converted to floating point). Default: 2.5
Specifies the gamma correction value for the lights. The default is 2.5.

linear_cutoff:
Single value, type: number (will be converted to floating point). Default: 0.0
This is best explained by quoting the FadeCandy documentation:
By default, brightness curves are entirely nonlinear. By setting linearCutoff to a nonzero value, though, a linear area may be defined at the bottom of the brightness curve. The linear section, near zero, avoids creating very low output values that will cause distracting flicker when dithered. This isn't a problem when the lights are viewed indirectly such that the flicker is below the threshold of perception, but in cases where the flicker is a problem this linear section can eliminate it entirely at the cost of some dynamic range. To enable the linear section, set linearCutoff to some nonzero value. A good starting point is 1/256.0, corresponding to the lowest 8-bit PWM level.

linear_slope:
Single value, type: number (will be converted to floating point). Default: 1.0
Specifies the slope (output / input) of the linear section of the brightness curve for the lights. The default is 1.0.

whitepoint:
List of one (or more) values, each of type: number (will be converted to floating point). Default: 1.0, 1.0, 1.0
Specifies the white point (or white balance) of your lights. Enter it as a list of three floating point values that correspond to the red, green, and blue light segments. These values are treated as multipliers to all incoming color commands. The default of 1.0, 1.0, 1.0 means that no white point adjustment is used. 1.0, 1.0, 0.8 would set the blue segment to 80% brightness while red and green are at 100%, etc. You can use this to affect the overall brightness of lights (e.g. 0.8, 0.8, 0.8 would be 80% brightness as every color would be multiplied by 0.8). You can also use this to affect the "tint" (lowering the blue, for example).
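Putting the settings together, a minimal sketch of such a config might look like the following; the profile name my_profile and the specific values are made up for illustration, and only the setting names come from this page.

light_settings:
  color_correction_profiles:
    my_profile:
      gamma: 2.5
      linear_cutoff: 0.0039        # roughly 1/256, the starting point suggested above
      linear_slope: 1.0
      whitepoint: 1.0, 1.0, 0.8    # blue segment at 80%, red and green at 100%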
https://docs.missionpinball.org/en/latest/config/color_correction_profile.html
iOS and Android handle images differently. iOS uses the 1x..3x format and Android uses the xxhdpi, hdpi format and so on. To overcome those quirks and seamlessly handle images for both operating systems, Smartface has an auto image generation feature embedded in the Cloud IDE.

Create the automated folder in the /images directory if it doesn't exist, then upload your highest resolution image to the /images/automated folder.

You can also use the CLI tool sfImageProcessor to accomplish image generation. Multiple image selection is not supported at the moment. For bulk image generation, you can use the script mentioned below or generate images one by one.

The name of the launch image must be either LaunchImage.png, LaunchImage.ios.png, or LaunchImage.android.png. If LaunchImage.png is used, launch images for both platforms are generated. If LaunchImage.ios.png or LaunchImage.android.png is used, launch images for that specific platform are generated.

Developers must edit the original launch image to make it square sized so that the image processor module can perform resizing & cropping accordingly. The name of the launch image is splash_image on Android. For the launch image, to handle different screen sizes better on Android, the developer should create a 9-patch image. The steps are explained below. It is recommended that your launch image have at least 2208x2208 resolution.

The exact same rules apply for generating application icons. The name of the app icon must be either AppIcon.png, AppIcon.ios.png, or AppIcon.android.png. It is recommended that your app icon have 1024x1024 resolution.
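For illustration, a project following the naming rules above might contain something like this (the file names other than the reserved ones are made up):

images/
  automated/
    AppIcon.png              (1024x1024, generates icons for both platforms)
    LaunchImage.ios.png      (square, at least 2208x2208, iOS launch images only)
    LaunchImage.android.png  (square, generates the Android splash_image variants)
    menu-icon.png            (any other asset you want resized per platform)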
https://docs.smartface.io/smartface-cloud-development/cloud-ide/image-generation
The Add node adds a key and a value (key-value pair) to a Map, thereby increasing the length of the Map. When adding a key-value pair to a Map, the node checks whether the added key is equal to an existing key in the Map. If the new key is equal to a key that's already in the Map, the existing value associated with that key will be overwritten with the new value. After this node has completed its operation, the key is guaranteed to be associated with its corresponding value until a subsequent mutation of the Map.
Inputs
Outputs
Example Usage
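For comparison, the C++ container behind Blueprint Maps behaves the same way; the sketch below is illustrative only and assumes a UE C++ project where TMap is available.

// TMap::Add mirrors the Blueprint Add node: adding an existing key overwrites its value.
TMap<FString, int32> Inventory;
Inventory.Add(TEXT("Potion"), 3);                // new key-value pair, Map length grows
Inventory.Add(TEXT("Potion"), 5);                // same key: existing value is overwritten
int32* Count = Inventory.Find(TEXT("Potion"));   // *Count == 5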
https://docs.unrealengine.com/4.26/en-US/ProgrammingAndScripting/Blueprints/UserGuide/Maps/MapNodes/Add/
Manage Events
Configure events and event properties for segmentation and automation.

Custom events can contain properties that you have defined outside of Airship. To make these events and their properties accessible to our segmentation and automation services, you may need to configure them on the Events configuration page, if you wish to:
- Activate events and their properties for event segmentation. See Event Segmentation for details on event types and properties.
- Add/Define properties for events you want to use for custom event triggers (a trigger that initiates an automation or journey when a custom event associated with members of your audience occurs) or cancellation events (events that prevent an automation from sending a message if a custom event occurs while the automation is in a delay period).

See also:

Create New Event
Go to Settings » Project Configuration and click Manage for Events. Click New event to add a new custom or predefined event. Enter a name for your new custom event or search for a predefined event to activate. Predefined events have reserved names and will appear in search results with a Predefined event flag. (See Predefined Event Properties for a list of available predefined events.) If you choose one of the predefined events, the event properties will be populated for you. If you are creating a new custom event that is unknown to Airship, click Create custom "[search term]" event to create a new custom event.

(Optional) Enter a description for the event.
(Optional) Check the box to Activate for segmentation.
(Optional) Add event properties. Predefined events are already populated with properties, and you can add more. You can also edit or remove properties for predefined events. Enter a property and select its type. The type determines which operators are available to you when using event segmentation. Select Any if the value for the property is unknown or if it could be multiple types.
Note: If you select "Any" or "Array", you cannot use those properties for event segmentation. Array properties are not available for event segmentation.
(Optional) Click Add another to assign additional properties for the event.
Click Save.

Edit or Delete Events
Deleting an event or event property makes it unavailable for use in segmentation. If you change an event name, the same is true for the previous event name. Deleting properties on this page means those properties are no longer available to select in automations or journeys, but the event itself remains available for triggers.

Go to Settings » Project Configuration and click Manage for Events. The table includes a maximum of 1,000 events. Use Search to find your event. Click the edit icon, make your changes, and click Save, or click the delete icon to delete.
https://docs.airship.com/guides/messaging/user-guide/project/config/events/
This section describes the settings associated with the connection point (endpoint). The endpoint determines the connection address and the required security modes that OPC UA clients must use. Monokot OPC Server provides communication for OPC UA clients only via the UA TCP protocol (binary data transfer protocol). The default UA TCP endpoint is opc.tcp://localhost:43043/MonokotOPC

- Port – specifies the port used to connect to the UA TCP endpoint (43043 by default)
- Channel Lifetime – specifies the number of milliseconds after which the server frees up resources for the channel

If you use a firewall, you must add an incoming connections rule for OPC UA clients.

The UA TCP endpoint allows you to encrypt and verify the authenticity of transmitted data and provides the following security policies:
- None – allows for transfer of data without encryption
- Basic128Rsa15, Basic256, Basic256Sha256 – allow you to transfer encrypted data in different modes

After changing and synchronizing the settings described above, the OPC UA server will be restarted automatically.

A custom security certificate can be specified for the UA TCP endpoint. To do this, you need to import the certificate from a PFX file. To import a custom security certificate, open the OPC UA manager in Monokot Server Administrator and switch to the UA TCP Endpoint tab. Click the Import button and choose the PFX file. Enter the password for the certificate (if no password is used, leave the field empty) and click OK. For the changes to take effect on the server, click Sync or press the F5 key.

The UA TCP Endpoint tab also offers the following possibilities:
- To reissue the security certificate
- To reset the custom security certificate to the server's default certificate
- To export the certificate (public key) to a CRT file

Monokot Server Administrator offers a tool for creating a self-signed PFX certificate. To launch it, choose Tools → Create Self-signed certificate… in the main window.
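As an illustration of connecting to the default endpoint, the sketch below uses the third-party Python opcua package (not part of Monokot); the endpoint URL is the default described above, and the rest is an assumption about your client-side tooling.

# Minimal sketch with the python "opcua" client library.
from opcua import Client

client = Client("opc.tcp://localhost:43043/MonokotOPC")
try:
    client.connect()                 # works against the None security policy
    root = client.get_root_node()
    print("Connected, root node:", root)
finally:
    client.disconnect()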
https://docs.monokot.io/hc/en-us/articles/360034746411-UA-TCP-Endpoint
RadTextBoxControl vs RadTextBox
This article demonstrates the difference between RadTextBox and RadTextBoxControl. The main and most important difference is that RadTextBox is a wrapper around the standard .NET TextBox control, while RadTextBoxControl is built entirely on top of the Telerik Presentation Framework. This difference helps avoid some of the limitations that hosted controls introduce, such as unsupported clipping, higher memory usage, slower painting, etc. A comparison table of the feature differences between these controls follows.
https://docs.telerik.com/devtools/winforms/controls/editors/textboxcontrol/radtextboxcontrol-vs-radtextbox
Create a message
Use the Message composer to send a single message to any channel. To get started, click the create button and select Message. After completing a step, click the next one in the header to move on. Click the edit icon if you want to name the message or flag it as a test.

Audience
Enable the channels you want to send the message to, then choose a group of users. See: Selecting your audience. To make this audience eligible for retargeting, enable Generate retargeting segments. This option is only available when your only selected channels are app platforms and Web.

Content
Configure the message content per enabled channel. See: Creating content and Optional Features.

Delivery

Review
Review the device preview and message summary. Click the arrows to page through the various previews. The channel and display type dynamically update in the dropdown menu above. You can also select a preview directly from the dropdown menu. If you chose Upload Users in the Audience step, click Upload & Send and select your file. Uploaded merge field names will be verified against the merge fields set in the Content step. Click Send Message or Schedule Message.
https://docs.airship.com/guides/messaging/user-guide/messages/create/
MOB cache properties Opening a MOB file places the corresponding HFile-formatted data in active memory. Too many open MOB files can cause a RegionServer to exceed the memory capacity and cause performance degradation. To minimize the possibility of this issue arising on a RegionServer, you must tune the MOB file reader cache to a size that HBase can scale. The MOB file reader cache is a least recently used (LRU) cache that keeps only the most recently used MOB files open. Refer to the MOB Cache Properties table for variables that can be tuned in the cache. MOB file reader cache configuration is specific to each RegionServer, so assess and change, if needed, each RegionServer individually. You must manually add any of the following properties you may require in the HBase Service Advanced Configuration Snippet (Safety Valve) for hbase-site.xml property. The following properties are available for tuning the HBase MOB cache.
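For example, a tuning entry added to the HBase Service Advanced Configuration Snippet (Safety Valve) for hbase-site.xml might look like the sketch below; the property name and value are an assumption and are not taken from this page, so confirm them against the MOB Cache Properties table for your HBase version.

<!-- Hypothetical example: cap the number of cached open MOB file handles. -->
<property>
  <name>hbase.mob.file.cache.size</name>
  <value>500</value>
</property>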
https://docs.cloudera.com/runtime/7.2.10/configuring-hbase/topics/hbase-mob-cache-properties.html
Generate Exchange Rates Report
To generate the Exchange rates report:
1. On the Codejig ERP Main menu, click the Reports module.
2. Under the Reports tab, select Banking report and then Exchange rates. A listing page of the Exchange rates reports opens.
3. On the listing page of the report, click + Add new. A page for configuring report parameters appears.
4. In the header area, provide a date range and a name for the report.
https://docs.codejig.com/en/entity2305843015656313960/view/4611686018427407170
Plugins
Starting from 0.9, Feed2tweet supports plugins. Plugins offer optional features that are not enabled by default. Optional means you need a dedicated configuration and sometimes dedicated external dependencies. What you need for each module is specified below.

InfluxDB
The InfluxDB plugin allows storing already-published tweets in an InfluxDB database.

Install the InfluxDB plugin
To install Feed2tweet with the InfluxDB plugin, execute the following command. From scratch:
# pip3 install feed2tweet[influxdb]
If you are upgrading from a previous version, execute the following command:
# pip3 install feed2tweet[influxdb] --upgrade

Configuration
Below is the block of configuration to add to your feed2tweet.ini:
[influxdb]
;host=127.0.0.1
;port=8086
user=influxuser
pass=V3ryS3cr3t
database=influxdb
measurement=tweets
- host: the host where the influxdb instance is. Defaults to 127.0.0.1
- port: the port where the influxdb instance is listening to. Defaults to 8086
- user: the user authorized to connect to the database. Mandatory (no default)
- pass: the password needed to connect to the database. Mandatory (no default)
- database: the name of the influxdb database to connect to. Mandatory (no default)
- measurement: the measurement to store the value into. Mandatory (no default)
https://feed2tweet.readthedocs.io/en/latest/plugins.html
LightWare SF1X/SF02/LW20 Lidar
LightWare develops a range of light-weight, general purpose laser altimeters ("Lidar") suitable for mounting on UAVs. These are useful for applications including terrain following, precision hovering (e.g. for photography), warning of regulatory height limits, anti-collision sensing, etc.

Supported Models
The following models are supported by PX4 and can be connected to either the I2C or serial bus (the tables below indicate which bus can be used for each model).
Available
Discontinued
The following models are no longer available from the manufacturer.

I2C Setup
Check the tables above to confirm which models can be connected to the I2C port.
Hardware
Connect the Lidar to the autopilot I2C port as shown below (in this case, for the Pixhawk 1).
Some older revisions cannot be used with PX4. Specifically, they may be mis-configured to have an I2C address equal to 0x55, which conflicts with the rgbled module. On Linux systems you may be able to determine the address using i2cdetect. If the I2C address is equal to 0x66, the sensor can be used with PX4.
Parameter Setup
Set the SENS_EN_SF1XX parameter to match the rangefinder model and then reboot.

Serial Setup
Hardware
The lidar can be connected to any unused serial port (UART), e.g. TELEM2, TELEM3, GPS2, etc.
Parameter Setup
Configure the serial port on which the lidar will run using SENS_SF0X_CFG. There is no need to set the baud rate for the port, as this is configured by the driver. If the configuration parameter is not available in QGroundControl then you may need to add the driver to the firmware. Then set the SENS_EN_SF0X parameter to match the rangefinder model and reboot. (An illustrative console example appears at the end of this page.)

Further Information
- Modules Reference: Distance Sensor (Driver): sf1xx (PX4 Dev Guide)
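As a hedged example, the parameters can also be set from the PX4 system console (or the MAVLink shell); the values below are placeholders, since the correct SENS_EN_SF1XX index depends on the exact LightWare model and the SENS_SF0X_CFG value depends on which serial port you wired.

# Illustrative only - check the parameter metadata for the right values.
param set SENS_EN_SF1XX 1      # I2C variant: select your rangefinder model
param set SENS_SF0X_CFG 102    # serial variant: e.g. a TELEM port (value is an assumption)
reboot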
http://docs.px4.io/master/zh/sensor/sfxx_lidar.html
Request for Acknowledge
Request for Acknowledge → Enable
Request for Acknowledge → Disable
Request for Acknowledge → Q&A

Request for read acknowledge
Full team alignment happens when everyone is on the same page. Inbox and chat tools don't do anything to help us understand whether there's someone who was left behind and missed some important thing. We built the Request for Read Acknowledge to let you know what part of your team is aligned and what part is missing out.

How to Enable it
You can enable the request for read confirmation before you publish an announcement when you're in editing mode. By default the request for acknowledge confirmation is activated. You can disable it by clicking the blue toggle as shown below.

When the Request for read acknowledge is enabled, Sametab shows a preview from the editor so that you know how it will look once you publish the announcement.

When you publish an announcement with the request confirmation enabled, a few things happen on the user's side:
- The announcement will remain on the recipients' browser new tab until they click the mark as read button shown in the preview
- The announcement will also remain visible until they mark it as read in the Unread section of the Sametab Web App
- A visible progress bar will tell you what the penetration of the announcement is across your team
- The author of the announcement will receive in-depth analytics about who marked it as read and when
- When you allow the Request for Read Acknowledge, you are also allowed to enable Email Nudges

Team members will click the mark as read button to acknowledge your message and to remove the announcement from their new tab and their Unread section.

What if I don't enable the Request for read acknowledge
This is what happens when you publish an announcement without enabling the Request for read acknowledge:
- The announcement will still be visible on the recipients' browser new tab, but as soon as they click on it and read it from the web app, it will disappear from their new tab
- The visible progress bar will shift from showing the acknowledge rate to the open rate
- Sametab Managers will still be able to see analytics regarding who opened their message and when

Q&A
Can I remove the Request for read acknowledge after I published an announcement? No. You can't edit the options of an announcement after it has been published.
Can I send an announcement to everyone and request the read acknowledge only for certain users? No. The Request for read acknowledge is enabled by default for all the recipients and you can't refine this option at this time.

Not using Sametab yet? Get your free account here.
https://docs.sametab.com/docs/features/acknowledge/
Experimental:SocketIO DAT
Summary
See also: WebSocket DAT
Contents
Parameters - Connect Page
Active active - When enabled, the SocketIO DAT is actively listening for events from the server, and can also emit events.
URL url - The URL of the socket.io server.
URL url - Enables TLS (transport layer security) certificate verification.
https://docs.derivative.ca/index.php?title=Experimental:SocketIO_DAT&direction=prev&oldid=16620&printable=yes
The Business Central Administration Center The Business Central administration center provides a portal for administrators to perform administrative tasks for a Business Central tenant. Here, administrators can view and work with production and sandbox environments for the tenant, set up upgrade notifications, and view telemetry for events on the tenant. Accessing the administration center The following users are authorized to access the Business Central administration center: - Internal tenant administrators - Admin agent - Helpdesk agent Internal administrators are the system administrators, IT professionals, or superusers of the customer's company, who are assigned the Global admin role in the Office 365 admin center. For more information, see About admin roles in the Office 365 admin content. The admin agent and helpdesk agent roles are assigned through the Microsoft Partner Center for the partner that is associated with the tenant. These roles have access to the Business Central tenant as delegated administrators. Internal administrators As the internal administrator, you can get to the administration center by navigating directly to the URL, or by choosing the link in the Settings menu when you are logged in to your Business Central. To access the administration center from the URL, use the following pattern but replace [TENANT_ID] with the tenant ID of your Business Central:[TENANT_ID]/admin Tip The tenant ID is shown in the Help and Support page in your Business Central. Partner access to the administration center As a partner, you can access the administration center from the Partner Dashboard in the Microsoft Partner Center: - Log into the Partner Dashboard. - Select the Customers link in the navigation pane. - Select the customer tenant that you want to perform administrative tasks for. - Select Service Management. - Under the Administer Services heading, select Dynamics 365 Business Central. You can also get to the administration center by navigating directly to the URL of a tenant as described in the previous section. Note As the partner, there are certain tasks that you cannot do in your customers' Business Central. For more information, see Acting as a delegated administrator. See also Managing Environments Tenant Notifications Environment Telemetry Administration Center API Managing Technical Support Business Central Data Security Introduction to automation APIs Microsoft Partner Dashboard Add a new customer in the Partner Center Assign licenses to users in the Partner Center Create new subscriptions in the Partner Center Cloud Solution Provider program - selling in-demand cloud solutions Feedback
https://docs.microsoft.com/en-us/dynamics365/business-central/dev-itpro/administration/tenant-admin-center
Kiosk Mode¶ Kiosk ( or anonymous ) allows any user, not logged in, to create a volatile virtual machine. Once this machine is shut down, it is destroyed automatically. Setting¶ This kiosk mode must be defined for some bases in some networks. Note Unfortunately kiosk mode configuration has not been added to the frontend. Anyway it can be set only from within the database. Follow these steps carefully. Backup the Database¶ As we are going to change the database, any mistake can be fatal. Backup before. If you want to have the data handy do it right now: mysqldump -u root -p ravada domains > domains.sql mysqldump -u root -p ravada networks > networks.sql Define a Network¶ You can allow kiosk mode from any network, but you can define a new network where this mode is allowed. mysql -u root -p ravada mysql> insert into networks (name, address) values ('classroom','10.0.68.0/24'); Find the ids¶ You must find what is the id of the network and the virtual machine where kiosk mode is enabled. This domain must be a base and allowed public access. mysql> select id,name from domains where name='blablabla' and is_base=1 and is_public=1; +----+-------------------+ | id | name | +----+-------------------+ | 22 | blablabla | +----+-------------------+ mysql> select id,name from networks; +----+-----------+ | id | name | +----+-----------+ | 1 | localnet | | 4 | all | | 6 | classroom | +----+-----------+ Allow anonymous mode¶ mysql> insert into domains_network(id_domain, id_network,anonymous) VALUES(33, 6, 1); Access¶ Access now to the anonymous section in your ravada web server. You should see there the base of the virtual machine you allowed before.
https://ravada.readthedocs.io/en/latest/docs/Kiosk_mode.html
Installing K10
K10 is available in two editions:
Starter: The default Starter edition, provided at no charge and intended for evaluation or for use in small non-production clusters, is functionally the same as the Enterprise edition but limited from a support and scale perspective.
Enterprise: Customers choosing to upgrade to the Enterprise edition can obtain a license key from Kasten or install from cloud marketplaces.
The product page contains a comparison of the editions but there is no difference in the underlying software or install process. To install K10, please follow the instructions below:
- Installing and Upgrading K10
- Container Storage Interface (CSI) Support
- Air-Gapped Install
- Production Deployment Checklist
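For orientation, a typical Helm-based install looks roughly like the sketch below; the repository URL, chart name, and namespace are assumptions based on the standard Helm workflow, so follow the Installing and Upgrading K10 page above for the authoritative commands.

# Hypothetical sketch of a Helm install of K10.
helm repo add kasten https://charts.kasten.io/
helm repo update
kubectl create namespace kasten-io
helm install k10 kasten/k10 --namespace=kasten-io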
https://docs.kasten.io/install/index.html
Flexible NoSQL databases, which can handle large amounts of unstructured data and can flexibly increase or reduce storage capacity without loss, will gradually displace relational databases. MongoDB is one of the top picks among NoSQL databases in terms of fulfilling business requirements for fast and flexible access to data in various spheres of development, especially where live data prevails.

Note, I'm not going to cover installing MongoDB and setting up a database in and of itself, as there is plenty of documentation already covering this. As mentioned above, it is assumed that you already have a MongoDB database running on Atlas.

Under Clusters on MongoDB click Connect on the database in question. There are 3 different ways to connect to the database. If you haven't already, you'll need to whitelist your own IP address and create a MongoDB user for this specific project. Choose the second option - Connect your application. Choose Python as your driver and 3.6 or later as your version. Copy the connection string to the clipboard.

Open your bash profile in your preferred editor and enter the following:

export MONGO_URI="YOUR_CONNECTION_STRING"

replacing "YOUR_CONNECTION_STRING" with the connection string you just copied. Within the connection string you'll also have to replace the password placeholder with your login password for the current user. From the command line, run:

source ~/.bash_profile

Now, for this guide to be portable & runnable on any machine, I first need to create a virtual environment. Check out our previous guide on installing the Anaconda distribution. We'll run the following commands to create a python3.7 environment.

To create our virtual environment:

conda create -n mongodb-playground python=3.7 -y

and start it:

conda activate mongodb-playground

We'll also need the following libraries to connect to Mongo 4.0:

conda install pymongo==3.8 dnspython ipykernel -y

The next command ensures that our Jupyterlab instance connects to this virtual environment:

python -m ipykernel install --user

Later on, once we have pulled in our data, we are going to want to graph it. We usually recommend using plotly in python, given its relatively simple syntax and interactivity. As of September 2019 this will get you up and running:

conda install -c conda-forge plotly=2.7.0 -y
conda install -c conda-forge cufflinks-py -y

Now we can spin up Jupyterlab with:

jupyter lab

On launch, you might see a prompt for a recommended build - the jupyterlab-plotly-extension install we ran above. Click Build and wait for it to be completed.

Let's first import our required libraries:

import os       # to create an interface with our operating system
import sys      # information on how our code is interacting with the host system
import pymongo  # Python distribution for working with the MongoDB API
import pandas as pd  # used below to build DataFrames

Then we connect to our MongoDB client:

client = pymongo.MongoClient(os.environ['MONGO_URI'])

Note that we can call 'MONGO_URI' because we've set the connection string as an environment variable in our ~/.bash_profile.

Now let's access a database, in this case, sample supplies.

db = client.sample_supplies

A collection is a group of documents stored in a MongoDB database, roughly the equivalent of a table in a relational database. Getting a collection works the same as accessing a database like we did above. In this case, our collection is called sales.

collection = db.sales

Let's test if we were successful by getting a single document. The following method returns a single document matching our query.

test = collection.find_one()

Pandas provides fast, flexible, and expressive data structures designed to make working with "relational" or "labeled" data both easy and intuitive, and is arguably the most powerful and flexible open source data analysis / manipulation tool available. We can convert our entire collection of data into a pandas DataFrame with a single command:

data = pd.DataFrame(list(db.sales.find()))

Some of our columns in the DataFrame are still formatted as dictionaries or PyMongo types. For the purpose of this guide, let's look at the customer column - this is a dictionary containing 2 key-value pairs for each customer's age and gender. We need to split this column such that age and gender become their own columns. To do this we can use the .apply(pd.Series) method to convert the dictionary into its own DataFrame and concatenate it to our existing DataFrame, all in one line.

df = pd.concat([data.drop(['customer'], axis=1), data['customer'].apply(pd.Series)], axis=1)

We could do something similar with the items column, for example, but that's beyond the scope of this guide.

Ok, let's start plotting our data to answer some basic questions. First, we'll import our plotting libraries.

import plotly.plotly as py
import plotly.graph_objs as go
import plotly
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
import cufflinks as cf
cf.set_config_file(offline=True)

Documentation for the two libraries is below:

Define your business questions
What business questions do you want to answer? We've laid out some example questions for our sample data:

1. What is the average customer satisfaction rating by store - and is this affected by the method of purchase, whether it's carried out over the phone, in store or online?

df.groupby(['storeLocation', 'purchaseMethod'], as_index=True)['satisfaction'].mean().unstack().iplot(kind='bar', mode='group', title='Average Customer Satisfaction by Store')

2. What about the number of purchase orders received, broken down by gender - are there any major differences?

df.groupby(['gender', 'purchaseMethod'], as_index=True)['_id'].count().unstack().iplot(kind='bar', mode='group', title='Purchase Orders by Gender')

3. What is the age distribution of all of our customers?

df['age'].iplot(kind='hist', color='rgb(12, 128, 128)', opacity=0.75, bargap=0.20, title="Age Distribution of our Customers", xTitle='Age', yTitle='Count')

Ok, so we've answered some basic questions about our business metrics. It's time to post this notebook to our team's Kyso workspace so everyone can learn from our insights and apply them to their respective roles. We can push to our organisation's Github repository from our local machine and sync with Kyso. Since we're already in Jupyterlab, we can install Kyso's publishing extension and post our notebook directly to Kyso from here:
https://docs.kyso.io/guides/graphing-data-from-mongodb
Official Release Date - November 26, 2019 Download - Build 4.30.00 New features V-Ray GPU - Added support for RT cores of NVIDIA RTX cards - Introduced light tree for improved sampling on scenes with thousands of lights - Support for Deep EXR output - Hash map based light cache - V-Ray GPU Distance texture support V-Ray - Hash map based light cache - Added support for creases in V-Ray subdivision - Support for Bercon Noise texture VFB - Improved Lens Effects with procedural dust and scratches VRayFastSSS2, VRayAlSurface - Added an option to consider all objects for SSS Modified features V-Ray - Improved alembic import times - Improved VRayScene importing times - Changed the default values of the Progressive Sampler - Added an option to save color corrections to raw .exr and .vrimg files - Added "vray gpuDeviceSelect" command - Added an API for custom c++ translators controlled by desc.txt - Removed the "Use legacy materials" option - The Hypershade Material Viewer will now use V-Ray GPU when GPU is the renderer - Now checking the version of the installed Chaos License Server during installation - Multi-threaded execution of OpenEXR compression and decompression to improve performance - Updated Embree to v3.2.0 - Updated OpenEXR to v2.3.0 - Optimized rendering of Multi/Sub-Object material - Set AlSurface SSS weight max value in the UI at 1 - Set the default dynamic memory limit to 0 - Transfer color VFB corrections for Chaos Cloud V-Ray GPU - Added the RTX value for the -rtEngine flag for V-Ray Standalone - Added support for efficiently instancing hair on GPU - The Gaussian AA filter now available when GPU is the renderer - "DR Bucket" render element for V-Ray GPU - Improved user-defined shaders (GLSL, MDL etc.) compilation - Optimized mesh transfers to multiple devices VFB - Moved the VFB test resolution functionality to the fly-out button instead of the right-click menu - Added "Save in image" option to OCIO color corrections, to save the corrected image - Added sliders for Lens effects' "Intensity" and "Threshold" parameters - Read the saved window position only for the initial render and use the last valid position afterwards - UI improvements for the VFB Lens Effects panel VRayVolumeGrid - Added support for explicit texture coordinates for the Texture sampler - Do not load caches while the timeline changes during sequence vrscene export or render in Maya - Added warning when importing VDB particle caches in Maya versions older than 2018 - Speed up loading of VDB caches by reading their min-max channel ranges from metadata instead of calculating them Chaos Cloud - Added support for smarter cloud client updates - Added Cloud client application to V-Ray for Maya installation V-Ray IPR - AI denoiser will be used for IPR by default VRayProxy - Improved errors logging Cryptomatte - Disabled cryptomatte in IPR; - Set the default Cryptomatte mode to "Node name with hierarchy" VRayRectLight - Improved sampling of directional lights VRayLightDome - Dome lights are now adaptive by default VRayALSurfaceMtl - Implemented bump shadowing VRayMtl - Enabled affect shadows by default VRayStereoscopic - Removed the Shade Map for Stereoscopy options VRayExtraTex Added an option to disable lossy DWAA/DWAB compression for a render element VRaySmplerInfo - The render element should always be saved with lossless compression Bug fixes V-Ray - Fixed compositing results don't match with matte reflections if "Consistent lighting elements" is enabled; - Fixed Matte reflections not rendering with consistent render 
elements - Fixed zDepth render element infinity color - Fixed deleting instances in IPR crashes Maya in some cases - Fixed Negatively scaled objects exported to proxy with flipped normals - Fixed wrong gamma of the colorCorrect lookdev kit node - Fixed Standalone VFB not auto resizing depending on resolution - V-Ray should not append "_tmp" to the filename when exporting for the cloud - Fixed V-Ray Standalone Error [createPathImpl] Empty string argument with -imgfile=<filename> - Fixed RawReflection RE for AlSurface rendering black if the Reflection RE is not present - Fixed random crash after rendering a sequence with V-Ray GPU in Maya - Fixed different motion blur between proxy and referenced scene - Fixed Material nodes defined in desc.txt wrongly exported as volume shaders - Fixed Test resolution disabling resumable rendering in batch mode - Fixed Maya fluid texture rendering non-deterministically with GI - Fixed overscan for rendered deep EXR images not exported corectly - Deleting displaced geometries crashes maya - Fixed EXR metadata camera name is <unknown> - Fixed float list in TexRemap '_value' parameters preventing GeomDisplacedMesh from functioning properly - Fixed missing metadata from multichannel EXR when rendering in batch mode - Fixed matte objects not present in the alpha channel when rendered through refractive objects V-Ray IPR - Fixed Isolate selecting a texture in IPR not respecting the placement - Fixed disconnecting image planes not detected in IPR - Fixed adding renderable curves to a spline does not refreshing while IPR is running - Fixed debug Shading's Isolate Selected mode not working correctly for objects with opacity VRayProxy - Fixed wrong UV interpolations in imported .abc hair/fur (via VRayProxy) when Tessellate hair is active - Fixed preview for instanced vrscenes - Fixed slow proxy export in Batch mode with specific scenes - Fixed Proxy visibility lists not loading in specific scenes - Fixed Proxy with -noMaterial option not rendered - Very slow loading of VRayMesh visibility lists V-Ray GPU - Fixed IPR on multi GPU stoping with CUDA_ERROR_INVALID_HANDLE - Fixed brighter rendering during adaptive light v2 gathering phase - Fixed artifacts when using Metalness with Glossy Fresnel - Fixed artifacts with VRayALSurface and Adaptive lights v2 - Fixed bounding box artifacts when rendering a VRayVolumeGrid - Fixed crash with hidden faces on subdivided geometry - Fixed crash during render with volumetrics - Fixed crash on stop during Light cache phase - Fixed crash when cancelling the render for scene with lights include/exclude lists - Fixed crash when using VRayClipper on an object with material containing VRayCurvature - Fixed defocusAmount denoise element not generated with standard cameras - Fixed Gaussian image filter not matching the CPU one - Fixed hidden edges of VRayEdgesTex always on with VRayProxy - Fixed hidden faces being rendered during Light cache preview, creating wrong lighting - Fixed Light cache not working with DOF and perspective camera - Fixed nested refractive volumes rendering wrong - Optimized distance texture for geometry heavy scenes - Fixed random crash with tiled bitmaps - Fixed unhandled exception when baking texture of a mesh with degenerate UVs - Fixed VRayVolumeGrid are not rendering correctly in normals render element V-Ray GPU/VRayAlSurface: Fixed VRayAlSurface with rounded edges rendering wrong on the GPU V-Ray GPU/VRayDomeLight: Fixed artifacts with Adaptive dome when objects are excluded from shadow casting in the light 
VRayDomeLight - Fixed artifacts when using Adaptive dome light and VRayFur with VRayMtl on it - Fixed artifacts with Adaptive dome and VRayToon - Fixed artifacts with Adaptive dome light with "affect reflections" disabled - Fixed using camera clipping planes makes the dome light invisible V-Ray GPU/VRayDomeLight: Fixed artifacts with Adaptive dome when objects are excluded from shadow casting in the light VFB - Fixed certain integer render elements are not displayed when loading EXR files - Fixed the scrollbar in the Color Corrections window hiding some of the text - Fixed UI not responsive with ICC color correction during IPR with V-Ray GPU Material importer - Fixed BRDFVRayMtl not imported from file as VRayMtl - Fixed SamplerInfo Extra V-Ray attributes not importing by the material importer - Material importer now remembers the correct path to open when browsing Python - Fixed fatal error with post translate print node settings - Fixed Python post translate not working for certain lists VRayHairNextMtl - Diffuse component should go in it's respective render elements - Fixed artifacts in raw render elements with Consistent lighting elements VRayToonMtl - Fixed incorrect toon mtl contribution in light cache VRayEnvironmentFog - Fixed Environment Fog not respecting sets in "Add to shape lights" light mode VRayCarPaintMtl - Fixed car paint material rendering non-deterministically VRayScene - Fixed crash when importing scene with many particles VRayLightRect - Fixed different specular reflections when rendering directional disc light VRayOSL - Crash in microfacet("ggx") when roughness is greater than 0.0 VRayOverrideMtl - Fixed vignetting along concave edges with Light cache and many lights ZDepthRE - Refractive objects are white regardless of the Affect channels value with V-Ray GPU VRayALSurfaceMtl - Fixed SSS not being computed for materials seen through glossy refraction VRayStochasticFlakesMtl Fixed stochastic flakes not visible through glossy refraction
https://docs.chaosgroup.com/display/VRAY4MAYA/V-Ray+Next%2C+Update+2
Integrating Ravada and OpenGnsys
OpenGnsys is an open source project for remote deployment. It is a project developed by many Spanish universities to provide a full tool to deploy, manage and clone remote computers. OpenGnsys allows distributing and installing many different operating systems. OpenGnsys is based on a PXE boot and a Linux graphical agent that allows managing the computer remotely from a centralized console.

Here, we will explain how to adapt our Ravada system to support booting from OpenGnsys. The final objective is to automate the creation of a virtual machine with the same image that we have created for our classrooms.

DHCP boot options
First of all, we have to provide the dhcp options next-server and filename to our dhcp server. Ravada is a KVM-based solution, so the dhcp server is the standard one integrated in KVM. The DHCP-KVM server allows some configuration. Edit the KVM network configuration and add these options to the dhcp section:

virsh# virsh net-edit default
<network>
<name>default</name>
<uuid>85909d3b-f219-4055-92a3-d36c0c57810c</uuid>
<forward mode='nat'/>
<bridge name='virbr0' stp='on' delay='0'/>
<mac address='52:54:00:1a:06:50'/>
<ip address='192.168.122.1' netmask='255.255.255.0'>
<tftp root='/'/>
<dhcp>
<range start='192.168.122.30' end='192.168.122.254'/>
<bootp file='grldr' server='<opengnsys-server-ip>'/>
</dhcp>
</ip>
</network>

grldr is the standard boot loader for OpenGnsys.

Create an empty virtual machine
Now, you have to create an empty virtual machine. An empty machine boots from the iPXE network boot firmware client integrated in KVM. This is a snapshot of a vm booting process:

NAT adaptation
Now, we have detected that TFTP doesn't work with the default KVM NAT configuration. You have to add support for it. This document explains it:

Create the virtual machine in the OpenGnsys console
We have to create the support configuration for this virtual PC in the OpenGnsys console. The virtual machine runs inside a NATed network, usually with a 192.168.122.0/24 IP address. These vms use the Ravada server as gateway. We have to create a new classroom with the NAT configuration to allow OpenGnsys to assign the network mask and the gateway correctly. This is the ravada-classroom configuration:
- gateway: 192.168.122.1 (KVM NAT default gateway)
- netmask: 255.255.255.0 (KVM NAT default netmask)
- IP multicast: your multicast group
- Menu: your page menu
- Repository: your image repository

Now, we have to create a computer inside your ravada classroom that is your virtual machine. Copy the MAC address of your empty machine:

virsh net-dhcp-leases default
Expiry Time MAC address Protocol IP address Hostname Client ID or DUID
-------------------------------------------------------------------------------------------------------------------
2018-11-27 09:11:39 52:54:00:a7:49:34 ipv4 192.168.122.178/24 - 01:52:54:00:a7:49:34

And now, re-generate the TFTPBOOT files:

In this example, we have assigned the new PC to the ogAdmin group. Now, you can boot the empty machine:

We have detected that the new machine boots, but it hangs just when the menu is about to appear. After debugging, we have detected that the virtual machine doesn't have access to the http server with the menus. This is a problem with routing.

We resolved this by creating a fake computer with the IP and MAC address of the KVM external NAT:

ifconfig
br0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.10.73.24 netmask 255.255.255.0 broadcast 10.10.73.255
inet6 fe80::20a:f7ff:feba:c980 prefixlen 64 scopeid 0x20<link>
ether 00:0a:f7:ba:c9:80 txqueuelen 1000 (Ethernet)
RX packets 11251336 bytes 196755808380 (196.7 GB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 11875794 bytes 4220061188 (4.2 GB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

- IP: external NAT address of your RAVADA system
- MAC: external MAC address of your RAVADA system

This is our standard menu:

Now, you can boot your standard images in a virtual environment of Ravada. You have to be sure that your images have support to run in a virtualized system. In Linux images, the kernel must have support for /dev/vda devices. In Windows systems, you have to add the virtio drivers.

Special script adaptation
Our images boot ok, but our OpenGnsys instance doesn't detect the virtual disk. The problem was in our system, which is very old (v1.0.5). To add support for detecting /dev/vda devices, we have patched the /opt/opengnsys/client/lib/engine/bin/Disk.lib library:

# Listar dispositivo para los discos duros (tipos: 3=hd, 8=sd 253=vda). inLab 2018
ALLDISKS=$(awk '($1==3 || $1==8 || $1==253) && $4!~/[0-9]/ {printf "/dev/%s ",$4}' /proc/partitions)
VOLGROUPS=$(vgs -a --noheadings 2>/dev/null | awk '{printf "/dev/%s ",$1}')
ALLDISKS="$ALLDISKS $VOLGROUPS"

This patch adds vda disk detection to the ogDiskToDev function (minor 253 -> vda devices). This problem was fixed in later versions.
https://ravada.readthedocs.io/en/latest/docs/OpenGnsys_iPXE_support.html
January 14th, 2009.

Table of Contents:

$("p").live("click", function(){
  $(this).after("<p>Another paragraph!</p>");
});

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "">
<html>
<head>
<script src=""></script>
<script>
  $(document).ready(function(){
    $("p").live("click", function(){
      $(this).after("<p>Another paragraph!</p>");
    });
  });
</script>
<style>
  p { background:yellow; font-weight:bold; cursor:pointer; padding:5px; }
  p.over { background: #ccc; }
  span { color:red; }
</style>
</head>
<body>
  <p>Click me!</p>
  <span></span>
</body>
</html>

More information about live events can be found in the .live and .die documentation.

The following are changes that were made that may have a remote possibility of causing backwards compatibility issues in your web pages.

The following properties have been deprecated (in favor of feature detection and jQuery.support, as discussed in the Overview).

The following browsers are no longer supported:

A number of new features, performance improvements, and method changes have come in this release - all outlined below (grouped by module).

New Features:
New Features:
Changes:
Changes:
.toggleClass( "className", state ) - toggle a class based upon a boolean value. More information: toggleClass

New Features:
.closest( selector ) - locate the nearest ancestor element that matches the specified selector. Extremely useful for event delegation. More information: closest (a combined usage example appears at the end of these notes)

Changes:
.is() can now handle complex selectors, for example: .is("div a"). More information: is

New Features:
Changes:
$("<script/>") is now equivalent to $(document.createElement("script")). More information: jQuery

New Features:
New Features:
.trigger() now bubbles events up the DOM tree by default. More information: trigger

Changes:
New Features:
Changes:
.toggle( boolean ) - A boolean switch can now be used to easily toggle the display of an element. More information: toggle

New Ajax settings:
dataFilter - allows users to sanitize untrusted responses and also permits alternate ways of parsing string responses like json or plain js. There's a manager plugin by Ariel Flesler that allows multiple filters to be held and used accordingly.
This allows scripts like flXHR to cleanly integrate with jQuery. Same as with dataFilter, there's a small plugin by Ariel Flesler that works as a manager so that multiple implementations can co-exist.
We updated the documentation for the Ajax settings.

Changes:
New Features:
Changes:
The jQuery test suite now has 1395 tests. We actively tested in Firefox 3, Firefox 3.5, Safari 3.2, Safari Nightly, Opera 9.6, IE 6, IE 7, and Chrome. A view of the final test run can be seen below:

We also did test runs in IE 8 (beta2) and Opera 10 (alpha) - there are a couple minor bugs in both that we're reporting back to the respective browser vendors. 1.3.2 now passes in IE 8 RC 1.
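To make two of the other additions concrete, here is a small combined example of .closest() and the boolean form of .toggleClass(); the markup (a ul.menu list of li items) is made up for illustration.

// Event delegation with .closest(), plus boolean .toggleClass().
$("ul.menu").click(function(event){
  // find the nearest <li> ancestor of whatever was actually clicked
  var item = $(event.target).closest("li");
  // switch the class on or off based on a boolean value
  item.toggleClass("selected", !item.hasClass("selected"));
});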
http://docs.jquery.com/Release:jQuery_1.3
2009-07-03T16:59:27
crawl-002
crawl-002-013
[]
docs.jquery.com
This is the core dependency of the jQuery UI effects. This file is needed by all of the other effects and can also be used stand-alone. Please note that ui.core.js is not a dependency for the effects to work. The core comes with the following exclusive functionality: tbd. Effects that can be used with Show/Hide/Toggle: Effects that can only be used stand-alone:
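To illustrate how the effects plug into show/hide/toggle and the stand-alone .effect() call, here is a small hedged sketch; it assumes the corresponding effect files (for example the blind, bounce and drop effects) are loaded alongside the effects core, and the element IDs are made up:

  // Hide a panel with the "blind" effect, then bounce a message when done.
  $("#panel").hide("blind", { direction: "vertical" }, 500, function(){
    $("#message").effect("bounce", { times: 3 }, 300);
  });

  // Effects can also be used when showing or toggling an element.
  $("#panel").show("drop", {}, 400);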
http://docs.jquery.com/UI/Effects
2009-07-03T17:01:02
crawl-002
crawl-002-013
[]
docs.jquery.com
works like Py_Initialize() if initsigs is 1. If initsigs is 0, it skips initialization registration of signal handlers, which might be useful when Python is embedded. New in version 2.4.). There is no return value; errors during finalization are ignored.() more than once._*() functions work). Simple things may work, but confusing behavior will always be near.() will destroy all sub-interpreters that haven’t been explicitly destroyed at that point. This function should be called before Py_Initialize() is called for the first time, if it is called at all. It tells the interpreter the value of the. Return the program name set with Py_SetProgramName(), or the default. The returned string points into static storage; the caller should not modify its value. value is available to Python code as the list sys.path, which may be modified to change the future search path for loaded modules. Return the version of this Python interpreter. This is a string that looks something like . Return a string representing the Subversion revision that this Python executable was built from. This number is a string because it may contain a trailing ‘M’ if Python was built from a mixed revision source tree. New in version 2.5.. Return the official copyright string for the current Python version, for example 'Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam' The returned string points into static storage; the caller should not modify its value. The value is available to Python code as sys.copyright.. Set sys.argv based. Before the addition of thread-local storage (TLS) the current thread state had to be manipulated explicitly. This is easy enough in most cases. Most code manipulating the global interpreter lock GIL GIL;. It is important to note that. Initialize and acquire the global interpreter lock. It should be called in the main thread before creating a second thread or engaging in any other thread operations such as PyEval_ReleaseLock() or. New in version 2.4. The following macros are normally used without a trailing semicolon; look for example usage in the Python source distribution. All of the following functions are only available when thread support is enabled at compile time, and must be called only when the global interpreter lock has been created.. Changed in version 2.3: Previously this could only be called when a current thread is active, and NULL meant that an exception was raised.. New in version 2.3. and Py_END_ALLOW_THREADS macros. New in version 2.3. The: Return a tuple of function call counts. There are constants defined for the positions within the tuple: PCALL_FAST_FUNCTION means no argument tuple needs to be created. PCALL_FASTER_FUNCTION means that the fast-path frame setup code is used. If there is a method call where the call can be optimized by changing the argument tuple and calling the function directly, it gets recorded twice. This function is only present if Python is compiled with CALL_PROFILE defined. These functions are only intended to be used by advanced debugging tools..
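The initialization and finalization calls described above are easiest to see in a short embedding program. The following is a minimal sketch (not taken from the original page) for a Python 2.x embedding host:

  #include <Python.h>

  int main(int argc, char *argv[])
  {
      /* Optional: tell the interpreter its program name before initializing. */
      Py_SetProgramName(argv[0]);

      /* Initialize the interpreter; this registers signal handlers
         (use Py_InitializeEx(0) to skip that, as described above). */
      Py_Initialize();

      /* Run a short script in the context of __main__. */
      PyRun_SimpleString("import sys\n"
                         "print 'Python', sys.version.split()[0]\n");

      /* Shut the interpreter down again; errors during finalization are ignored. */
      Py_Finalize();
      return 0;
  }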
http://docs.python.org/c-api/init.html
2009-07-03T17:02:44
crawl-002
crawl-002-013
[]
docs.python.org
Real live migration case

Here is how to upgrade a TurboGears 1.1 project to TurboGears 2.0.

Outline

These are the steps we'll follow:
- create a working environment of the tg1 project
- install tg2 on that environment
- upgrade the project structure
- upgrade the templates
- upgrade the model
- upgrade the controller
- run under tg2
- make it so that it will run on tg1 (maybe)

Project Structure
- spammcan [1] - simply a wrapper directory
- SpammCan [2] - an svn checkout of the code from svn://chrisarndt.de/projects/SpammCan/trunk
- tg1-2 [3] - a virtualenv as in
- SpammCan [4] - this will be different things, first a tg2 quickstart then a link to our original code in [1]

Setting the Environment

I actually cheated and used bazaar, but that is because I found this a great opportunity to test bzr-svn. So let's start with:

  $ cd spammcan
  $ bzr branch svn://chrisarndt.de/projects/SpammCan/trunk SpammCan
  $ cd tg1-2
  $ source bin/activate
  $ ln -s ../SpammCan/ .
  $ cd SpammCan
  $ python setup.py develop

Testing our installation

Now let's see if everything runs as expected:

  $ bootstrap-spammcan
  $ start-spammcan
  $ firefox

That should take you to the running app and everything should work, go ahead and paste some spamm!

Installing TurboGears 2

So back to work, shut down cherrypy and let's install TG2 into the virtualenv:

  $ easy_install -i tg.devtools

you may have to run:

  $ easy_install -U setuptools

Upgrading our code structure

The steps we are going to follow are:
- create a tg2 project by the same name
- move all the files that are new
- move all the files that have changed place or name
- do changes to the files (imports and such)
- do complex changes to the files

Now we will temporarily delete our SpammCan project from the virtualenv and create a tg2 project by the same name. The reason we are doing this is to preserve the package name:

  $ rm lib/python2.5/site-packages/SpammCan.egg-link
  $ vi lib/python2.5/site-packages/easy-install.pth

In the last file we need to delete the line that contains SpammCan; it's a little tricky, but unfortunately setuptools doesn't have a command for this. Anyway, if it did it would delete all our tg1 dependencies, which we don't want. Now on to the TG2 project:

  $ cd ..
  $ rm SpammCan
  $ paster quickstart
  Enter project name: SpammCan
  Enter package name [spammcan]:
  Do you need authentication and authorization in this project? [yes]

We'll reply yes to auth even if that isn't used fully in SpammCan, to provide a more complete example.
Upgrading the files

I'll use a very nice diff/merge tool for linux called meld, so you can diff the trees; on windows you could use winmerge. We'll diff so our original code is on the left and the tg2 stub is on the right:

  $ meld ../SpammCan/ SpammCan/

Files to ignore
- the .bzr, .svn directories
- the egg-info dir, as this is rebuilt by setuptools

Files to copy without changes (new files)
- ez_setup, this will provide a bootstrap of your tg project
- config/environment.py
- config/middleware.py
- i18n (new i18n package)
- lib (a common place to put generic code)
- tests/__init__.py (empty in tg1)
- websetup.py
- README.TXT
- anything still relevant to your app from the templates directory (debug.html is interesting)

Files that changed names or location
- tests/test_controllers.py -> tests/functional
- tests/test_model.py -> tests/test_models.py
- static -> public

You should rename your static dir to public and diff again so you can take advantage of the new icon set:

  $ cp -r ../SpammCan/spammcan/static/ ../SpammCan/spammcan/public
  $ meld ../SpammCan/spammcan/public/ SpammCan/spammcan/public/

.. note :: We need to upgrade the TG2 quickstart with the new CSS layout from 1.1 and then add a section here on why it's so great

Depending on your TG1 quickstart project (which is mostly everyone, as tgbig was a big mystery) you need to upgrade your model.py and controllers.py files into packages. Both packages are totally flexible, but this is the recommended setup:

  $ cd ../SpammCan/spammcan
  $ mkdir controllers model

model now contains 3 files and controllers contains 5 files; you should copy all those over:

  $ meld . ../../tg1-2/SpammCan/spammcan/

Files that changed completely
For now we'll just copy these over:
- config/app_cfg.py
- development.ini
- test.ini

Special attention
These two are tricky and you should do a merge of their content, which is outside the scope of this tutorial:
- setup.cfg (if you haven't changed this you can copy the TG2 file below the tg1 file)
- setup.py (copy the first 6 lines of tg2's if you want ez_setup to work). I suggest you copy the tg2 setup function to your setup.py; I have renamed it setup_tg2 with the following little hack:

  def setup_tg2(**kargs):
      pass

  setup_tg2(... )

We are now done with the files, so we can delete the TG2 quickstart and replace it with our original project:

  $ cd ../../tg1-2/
  $ rm -r SpammCan/
  $ ln -s ../SpammCan/ .

Upgrading the imports

And let's see if everything still runs:

  $ cd SpammCan
  $ python setup.py develop
  $ start-spammcan

Sadly this probably didn't work: you need to fix your model and controller imports, as the package takes precedence over the file. You can accomplish this by copying the file to the directory and adding the following imports:

  $ cp model.py model/tg1.py
  $ cp controllers.py controllers/tg1.py
  $ vi controllers/__init__.py
  from controllers import *
  $ vi model/__init__.py
  from model import *

Although you will probably want to give them better names. Now let's try again and our TG1.1 app should still work:

  $ start-spammcan

Upgrading the template

This step is trivial: if you are using genshi you just need to copy over the files, and you're done. The only change you should probably do is master.html, which is documented below.

Upgrading the model

We are going to split our model.py into two files, auth.py and spamm.py. If you haven't done any changes to the auth code you can skip the auth.py part, as that is 100% backwards compatible.
As for the rest of the code, if you are already using SQLAlchemy you should just change some imports. turbogears.database.* doesn't exist anymore, so you will have to import SQLAlchemy directly, except for the metadata defined in your model/__init__.py. Again we can use model.template as a base for our diff:
- turbogears.database.mapper -> sqlalchemy.orm.mapper
- turbogears.database.metadata -> p.model.metadata

TODO add a comment about SAClass.query

Upgrading the controller

We are going to try to keep things running for both TG1 and TG2, therefore all our code imports will happen in __init__.py and we can switch this to tg1 or tg2 controllers.

The process here will be:
- copy the controller to a tg2 file (ticket #2075 provides a good template)
- run paster serve, see what fails
- fix the imports
- run paster serve
- fix other code

List of imports that changed
- turbogears.controllers -> p.lib.base.BaseController
- turbogears.expose -> tg.expose
- turbogears.flash -> tg.flash
- turbogears.redirect -> tg.redirect
- turbogears.validate -> tg.validate
- turbogears.url -> tg.url
- turbogears.validators -> formencode.validators
- turbogears.config -> turbogears.configuration
- cherrypy.request -> tg.request
- p.model.session -> p.model.DBSession

List of components that changed
- @error_handler isn't a decorator anymore but a parameter to validate
- identity -> repoze.who/what *
- cherrypy.request -> tg.request
- cherrypy.NotFound -> abort(404)
- SAclass.query -> p.model.DBSession.query

* for the running part we could trick setuptools like this:

  def setup_tg2(**kargs):
      pass

  def setup_tg1(**kargs):
      pass

  #setup_tg2 = setup
  setup_tg1 = setup

  setup_tg2(...)
  setup_tg1(...)

SpammCan Specific

By this point any standard TG project should work. Below are several features used by SpammCan and their ports; every section below is therefore optional.

SQLite Database File

devdata.sqlite becomes devdata.db. This is simple, but could confuse you if you never changed the default dburi.

tg_css, tg_js, static_filter_dir

Have all been removed, use tg.url. static_filter_dir is now config.paths.static.

variable providers

In TG1 we used to have a trick where we would stick additional variables into all templates by providing a function and appending its results to the 'variable_providers', for example:

  from turbogears.view import variable_providers

  def add_global_tmpl_vars(vars):
      vars['motd'] = message_of_the_day()
      vars['abs_url'] = absolute_url
      vars['tg_version'] = tg_version

  variable_providers.append(add_global_tmpl_vars)

In TG2 this behavior has been standardized in a config call. You will need a function that returns a dict or a Bunch and then add a line to your app_cfg.py, for example in lib/helpers.py:
  def add_global_tmpl_vars():
      return dict(
          motd = message_of_the_day(),
          abs_url = absolute_url,
          tg_version = tg_version
      )

and in config/app_cfg.py:

  from spammcan.lib import app_globals, helpers  # note: this line is the default import
  base_config.variable_provider = helpers.add_global_tmpl_vars

Database Initialization Script

The bootstrap module is replaced with websetup.py, which should be called as:

  $ paster setup-app development.ini

util.py -> lib/helpers.py

In general helpers.py is a set of functions you can use anywhere in your code. You could optionally add another module to lib, but helpers has the advantage of being auto-populated to the templates as h.function().

redirect no longer supports a list (see #2080)

In TG1 you could do redirect(['/paste', guid]) and it would redirect to /paste/guid, where guid is the variable of course.

cherrypy.request -> webob.request

TODO see:
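To tie the import list together, here is a hypothetical minimal TG2 root controller in the new style; the model class, template name and method bodies are made up for illustration and are not part of the original SpammCan code:

  from tg import expose, flash, redirect, url
  from spammcan.lib.base import BaseController
  from spammcan import model
  from spammcan.model import DBSession

  class RootController(BaseController):

      @expose('spammcan.templates.index')
      def index(self):
          # DBSession.query replaces the old SAClass.query style.
          pastes = DBSession.query(model.Paste).all()
          return dict(pastes=pastes)

      @expose()
      def view(self, guid):
          flash('Loaded paste %s' % guid)
          # redirect no longer accepts a list; build the path yourself.
          redirect(url('/paste/%s' % guid))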
http://docs.turbogears.org/2.0/RoughDocs/1.1Migration
2009-07-04T07:00:43
crawl-002
crawl-002-013
[]
docs.turbogears.org
2008 GSoC Application This application was submitted here on March 12, around 13:00 UTC. About Your Organization What is your Organization's Name? TurboGears 2. What is your Organization's Homepage? 3. Describe your organization.. The main administrator contact is: [email protected] Main administrator: - Christopher Arndt: [email protected], Google shortname: strogon14 Backup admins: - Christopher Perkins: [email protected], Google shortname: percious17) - Mark Ramm: [email protected], Google shortname: mark dot ramm He is the treasurer of the TurboGears project. He will handle any financial issues concerning our GSoC participation. 4. Why is your organization applying to participate in GSoC 2008? What do you hope to gain by participating? TurboGears wants to be a major contender for the position of the leading web framework in the Python world, and we think it is widely recognized for its flexibility and ability to combine best-of-breed technologies to help the web developer create web content more efficiently. Competing web frameworks have a broader developer toolset because they make sacrifices in flexibility to add functionality. TurboGears is on the cusp of adding its own feature-rich toolset which could make things like dynamic web interaction a lot easier to do, while maintaining it's flexibility. TurboGears is also currently undergoing a transformation from TurboGears 1 to TurboGears 2, and we are adding industrial strength features that will allow TurboGears to become a great platform to get started quickly and easily, and a great platform for scalability. TurboGears uses widely used Python components, and it's been our goal to make everything we do as widely reusable as possible within the greater python community. TurboGears needs a few vibrant developers who are eager to get their hands dirty and make a difference. We are hoping to gain some fresh developers with new ideas about how to make web programming more fun. The projects TurboGears is putting forth require a significant amount of collaboration between them, something that reflects TurboGears's core values. It is our hope that with this collaborative effort we will be able to provide developers with a new toolset, which is cool and fun to use, while maintaining a well-documented, usable set of components which can be extended. We also hope to raise the visibility of TurboGears in the community and demonstrate that it's an actively developed project with a healthy developer and user base. 5. Did your organization participate in past GSoCs? TurboGears maintainers have sponsored projects like SQLAlchemy Migrations, which were not TurboGears specific in the past, using PSF as a GSoC mentoring organization. We have also submitted 4 tasks to GHOP through the PSF project, which were all completed. Those tasks were based on web framework advocacy and can be viewed here: 6. If your organization has not previously participated in GSoC, have you applied in the past? If so, for what year(s)? TurboGears applied in 2006 when it was still a very young project, and was not accepted, but TurboGears community members did sponsor work that year using the PSF as a mentoring organization (see also the answer to the previous question). 7. What license(s) does your project use? MIT License Component projects may use other (all open source licenses. For details see our Licensing page: 8. URL for your ideas page? 9. What is the main development mailing list for your organization? 10. Where is the main IRC channel for your organization? 
#turbogears @ freenode.net 11. Does your organization have an application template you would like to see students use? If so, please provide it now. Dear Student, Please fill out the application form below and send it to the TurboGears GSoC project administrator ([email protected]) by email. We will then contact you with information about the next steps. The TurboGears GSoC Team Applicant --------- Name Obvious, no? Contact information Your email address, organization (university etc.) Experience Education, Job experience, work on other open source projects. Background Tell us about yourself, include information about why you want to participate in the GSoC program and the TurboGears project in particular. What can you bring into this project so that both sides will benefit from the program? Project ------- Summary One-line summary of your proposed project Mentor Is there a specific mentor you would like to request? Description A longer description of your project, including goals, tasks, and requirements. Milestones Please provide a list of milestones with expected completion dates. Testing What methods do you plan to use to assure quality in your project? 12. Who will be your backup organization administrator? Please include Google Account information. [email protected], [email protected] About Your Mentors 1. What criteria did you use to select these individuals as mentors? Please be as specific as possible. All of our mentors are core developers of either the TurboGears project itself or founders of one of the component projects like Genshi, DBSprockets, registration etc. They have volunteered to be mentors and are known in the TurboGears community for a long time. Many of them have already acted as mentors in previous GSoC or GHOP installments. We also have a GSoC Mentor page here: Every project idea on the GSoC ideas page lists at least one possible Mentor, most have multiple mentors. 2. Who will your mentors be? Please include Google Account information. [list of mentors' email addresses] About The Program 1. What is your plan for dealing with disappearing students? It is the nature of distributed, collaborative development, that we can only interact with students through remote communication. To ensure that they stick with their projects, we encourage them to set realistic goals for themselves and require them to set specific milestone dates for the completion of their tasks. This will give us some measure of control and opportunity for frequent feedback and should also keep up the students motivation by giving him reassurance and opportunity for evaluation. We will ask mentors to plan an evaluation IRC meeting with their student(s) every two-weeks (apart from being available for questions the rest of the time, of course). This will also serve as an opportunity for assessment of progress and appreciation of the students work, but also as a measure to detect and counteract signs of an impending drop-out at an early stage. In addition, all of the projects use agile development techniques, so the loss of a person does not really mean the loss of a project or the results achieved so far. If a student drops out early enough, we would want to try and replace them, and if all is lost, well there is always garbage collection... 2. What is your plan for dealing with disappearing mentors? Each project will have mentor and a backup mentor. If the backup mentor is for some reason unavailable then Mark Ramm and other project leaders have promised to step up and take over. 
Of course, we doubt that will happen, and we're pretty sure that even in that case someone else can be coaxed into participating by one of the administrators... possibly with bribes. 3. What steps will you take to encourage students to interact with your project's community before, during and after the program? Before the program students are welcome to participate as any other developer would, through the wiki, with patches, and on our message boards. We put a specific call-out for Google Summer of Code participation, and made our application as well as our ideas pages freely editable wikis. Our project ideas have a collaborative element, which requires that students interact with other students, as well as within the community to solve their problems. Most of our mentors are only a few minutes away either by email or global message boards. TurboGears requests that all technical discussions are purposefully made public so that everyone has a say when there is a question of implementation. We would strongly encourage students to create a personal page on our wiki where they can present themselves and their project(s). When they produce the first results, which can be integrated into the TurboGears project, they will of course be listed on our contributors page. We will also encourage students to keep a blog with a diary of their project progress and can give them advice and help on setting this up. The blogs can then be added to the TurboGears planet blog aggregator (planet.turbogears.org), where they will reach many other TurboGears developers. 4. What will you do to ensure that your accepted students stick with the project after GSoC concludes? The projects we are proposing as ideas are important to our framework, and they will continue to be developed in the open environment they were born into when the student has finished her tenure. TurboGears has a sprint about once a month and developers from across the globe are encouraged to attend. It is our hope that these monthly sprints would encourage a new developer to keep up the good work even after their project has met it's expectations. While we cannot offer jobs or other monetary perks, having your name associated as a developer of TurboGears is a good way to help obtain employment opportunities. Lately the TG board has been swamped with requests for employees, and we are currently developing a TurboGears specific job board to manage that need. In addition, all of our mentors are committed to treating students as full members of the TurboGears development community, and helping them to integrate their work into the community development efforts as their individual projects progress. Our goal is to create ongoing relationships, encourage open source participation, and help students not only accomplish their technical tasks, but also learn how to work with others in open source projects.
http://docs.turbogears.org/GSoC/Application2008
2009-07-04T07:01:26
crawl-002
crawl-002-013
[array(['/wiki/turbogears/img/smile.png', ':-) :-)'], dtype=object)]
docs.turbogears.org
Downloading and installing a TurboGears beta version.

Quickstart instructions

To install the latest TurboGears beta version, do the following: Download tgsetup-betaversion.py from the TurboGears web site. Run the following in your command shell (use sudo only on Unix/Mac OS X if you are installing as a non-root user):

  [sudo] python tgsetup-betaversion.py

Requirements
- Windows, Mac OS X 10.3/4/5 or Linux/Unix
- Python >= 2.3.x

Details

On some systems you may need to first install some of the above mentioned requirements. Though we don't provide detailed instructions for this for the TurboGears beta version, you can refer to the matching section of the stable version installation guide for some ideas.

Database installation and configuration

Please refer to the stable version installation guide.

Where are the download files?

There is a list of all files hosted on turbogears.org available if you need to do something manually.

Specific version installation

Installing a specific version of TurboGears is always possible, since we keep all our files online, but you need to know that in this particular case you'll have to install SQLObject or SQLAlchemy manually at the minimum. Run a command line similar to this one and substitute the version number at the end with the desired one:

  [sudo] easy_install --script-dir=/usr/local/bin \
      -f \
      TurboGears==1.0.4b3

This requires setuptools to be present on your machine. If you need to install setuptools, do so by following the instructions below:

  wget
  python ez_setup.py

This will download and install setuptools on your machine. You can now issue commands like easy_install for installing SQLObject or SQLAlchemy:

  easy_install "SQLAlchemy>=0.4"

or:

  easy_install SQLObject
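To confirm which TurboGears version actually got installed into the environment (a quick sanity check, not part of the original instructions), you can ask setuptools directly:

  python -c "import pkg_resources; print pkg_resources.get_distribution('TurboGears').version"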
http://docs.turbogears.org/1.0/InstallBeta
2009-07-04T07:03:53
crawl-002
crawl-002-013
[]
docs.turbogears.org
Essay published in Leven als Dalit: ‘kastelozen’ in hedendaags India / Dalit Lives: ‘outcastes’ in Contemporary India by Paul van der Stap (photos) and Elisa Veini (Titojoe Documentaries 2005. ISBN 90-809375-1-7). Martin Macwan likes to tell of the amazement of the public when he called for attention to the problem of discrimination on the basis of caste in the UN conference against racism in Durban in 2001. The National Campaign on Dalit Human Rights (NCDHR), of which Martin was the chairman, had been recently set up to address caste discrimination as a violation of human rights. It became clear in Durban that even many human rights activists were not aware of the injurious aspects of the caste system. India enjoyed the general reputation of a peace-loving country that had supported the South African anti-apartheid movement, and it was unknown to many that India is today the largest country, in which there is caste discrimination. Even less well known was that people are systematically discriminated on the basis of their work and descent in Nepal, Pakistan, Sri Lanka, Bangladesh, Japan and certain African countries as well. Although the conference did not pass a resolution on caste discrimination, the silence was broken. The ‘hidden apartheid’ had entered the international arena. There are more than 170 million Dalits in India, known as ‘untouchables’ or ‘outcastes’, more than one-sixth of the Indian population. They experience violence, discrimination, and social exclusion on a daily basis. In spite of laws prohibiting caste discrimination, Dalits are socially and economically the weakest group of society. Already in the time of Gandhi, quotas were established for Dalits and other oppressed groups at universities and in government jobs, but these provisions have improved the situation of only a small group. The discrimination has remained intact. In many ways the situation of the Dalits has even worsened in the last few decades. Violence against Dalits has sharply increased. The Dalit Human Rights Monitor of the human rights organisation People’s Watch Tamil Nadu reported nearly two hundred atrocities against Dalits in the first three months of 2004 alone. The reported cases are only a small share of the atrocities that take place, as most cases are never reported to the police. According to Henri Tiphagne, the director of People’s Watch, the increase in atrocities is directly linked to growing awareness among Dalits; dominant groups in society accept with difficulty the Dalits’ demand for rights. The shelves in the documentation centre of People’s Watch are filled with extensive, well-documented accounts of murder, rape, torture by the police and private gangs, social exclusion, slavery and humiliation. In more than three-quarter of the cases the victims are Dalits. The murder of two Dalits from Sennakarampatti village in Tamil Nadu is painfully illustrative of the pitfalls obstructing a fair trial. The victims were wage workers who were murdered in 1992 by villagers of a higher caste after they had acquired land at a public sale that used to belong to a temple. It was unthinkable for the higher castes to have Dalits living on temple ground. Sennakarampatti is one of the innumerable villages where Dalits are not allowed to enter the temple; the main roads are forbidden to them as well. In the village the Dalits form a minority of not more than fifty families in contrast to the thousand families from the land- owning kallar caste. 
The judge knew that there were caste tensions in the village, but he refused to take them into account. According to the Scheduled Castes and Tribes Prevention of Atrocities Act (SC/ST Atrocities Act) which has existed since 1989, a judge can be dismissed if he consciously ignores the facts. The lawyers’ collective SOCO Trust, which advocated the interests of the victims’ relatives, appealed on the basis of this act and succeeded in bringing the case to the High Court. It is now twelve years and five murders later. Murugesan, a Dalit and the leader of the village council, was murdered together with four other Dalits because he intended to question the proceedings. After Murugesan’s death, some compensations were paid to the relatives, but the court has failed to pronounce a verdict. The murderers are still free and the frustrated Dalit community is strongly divided. Lajapathi Roy, advocate and human rights consultant in SOCO Trust, says that the courts regularly ignore the caste dimension of atrocities. The victims are seldom compensated for the damage done by persons from higher castes. The offenders are rarely sentenced, and it requires a great deal of effort to have a case re-opened. Paul Divakar, the present chair of NCDHR, has his doubts about the will of many judges to really implement the SC/ST Atrocities Act. NCDHR was founded in 1998, when various organisations started to monitor the failing compliance with the anti- discrimination laws. Today, NCDHR is the pivot and the uniting power of the Dalit organisations. Such an umbrella organisation, says Divakar, was unheard of as recently as fifteen years ago. The attention paid to their rights has enhanced the self-confidence of the Dalits as a community, and the international break-through in Durban has given new courage to the Indian supporters. The authorities are now forced to accept NCDHR as a serious partner. The change is tangible, although it will still take a long time before Dalits will count as full members of society. The caste system originates in the Vedas, the classical Hindu writings that divide society into four major castes, or varnas. The position of each caste depends on its descent from a certain part of the body of Brahma, the god of creation. The highest caste, brahmins, originate from the mouth of Brahma. The ksathriyas originate from the arms, the vaishyas from the belly and the shudras from the feet. Accordingly, different tasks are ascribed to each of the four castes. The brahmins are thinkers, philosophers and priests who lead society spiritually. The ksathriyas are rulers and soldiers who guard the state against enemies. The vaishyas are merchants and landowners, and the shurdras are workers. The place of the individual person depends on the social position of his caste and the merit, or karma, that he gains in order to prosper more in the next life. Because the shudras are not born twice like the three other castes, they must serve the others. The Manusmiriti, or The Law of Manu (700 A.D.) states that all lower castes belong to the shudras. Here, there is no mention of the ‘outcastes’ or ‘untouchables’, who would be ritually too impure to be able to belong to the shudras. The separation of the ‘outcastes’ dates probably from the later division in sub- castes, or jatis: social groups within which one marries. Over the course of time, the jatis became decisive for all social relations. 
Although India today is a secular, multi-confessional and multi-cultural state, Hinduism continues to dominate culturally and socially. In certain ways, modernisation has led to individualisation of society – for instance, one’s profession does not necessarily depend on the jati anymore – but none the less the old, hierarchical relations between the groups have remained intact. The ambivalence towards marriage is a good example; marriage is a market-place of seemingly individual choices, but the choice is still made along the lines of caste, not across them. One still marries within one’s own caste with an arranged partner who has been found either through family relations or the new opportunities offered by internet. Sociologist Dipankar Gupta finds a general link between increasing social intolerance and modernisation. According to him the membership of a caste or sub-caste is now negotiable. Ever more often people choose their caste strategically in order to belong to a certain political group, but at the same time they make an effort to legitimise their descent with complex genealogies.1 The hierarchy gains new weight, because the caste lines have become more elastic, but they are also more ambivalent. The hierarchy is so tightly woven into the social fabric of India that Christians, Muslims and other non-Hindus also employ the discriminatory categories. Dalit Christians will be segregated from other Christians, and there is mention of Dalit Muslims who are set apart by other Muslims. The vicious part of this system is that it is valid even within the Dalit community, and that the worst oppressors of Dalits are the groups that are just ‘above’ them in the hierarchy. In South India, these are the land- owning farmers for whom Dalits work. Not so long ago farmers would shout out their commands from across the field, so as not to have the air of the Dalits on their face. Mainly in rural areas it still happens that Dalits are not allowed to wear shoes, and that they must bow their heads when they are addressed by a person of a higher caste. Nor has the prohibition of entering the village temple or accessing the common well disappeared, and it is still common to find Dalits living outside the village. If it is not possible to banish Dalits, a wall may be built to keep them from the immediate neighbourhood. Time and again, justification for the systematic segregation of the ‘untouchables’ is found in the supposed idea of purity. People who labour physically carry all sorts of impure matter with them: dirt and earth, but also sweat. How relative the dirtiness of the ‘untouchables’ is becomes clear in the fervent pursuit of Dalit voters before elections. Sometimes Dalits even pass as members of the ultra-Hindu parties – provided they renounce their origins and community. On a September afternoon in 2003 Muthumari, a 38 year-old wage worker from Thirumangalam in Tamil Nadu let her cow out to pasture. In the field she came across her neighbour Raja. She had successfully ignored the advances of Raja for a couple of months, but this time the man would not take no for an answer. When Muthumari still resisted, he tore her sari and called her a ‘casteless dog’. The episode ended with Muthumari’s escape, but that evening, Raja, his wife and other land-owning kallars, who dominate in the village, paid her a visit. After a short but rough argument, Raja’s wife poured a bowl of human excrement on Muthumari. 
The local police first refused to do anything about the incident and Raja and his wife were only pursued after an NGO took up the case. The incident resulted, however, in the landowners’ refusal to employ any Dalits, knowing full well that the landless wage workers are completely dependent on them. They also require Dalits to get off their bicycles and take off their shoes before entering the village. Sexual assault and rape of Dalit women and girls occur on a daily basis in rural India. Women are generally seen as dependent appurtenances of men, and marriage is not only taken for granted, it is also seen as a girl’s duty. In all social strata, women have to face a difficult struggle if they want to lead an independent life. Equality to men is virtually non-existent. The submissive and subservient behaviour of a wife guarantees the honour of the man. Dalit women, it is said, are oppressed three times over: they are poor, they are women, and they are Dalits. Most Dalit women work in the field as wage workers, just like men. Especially men from higher castes see a working Dalit woman as open game. In some areas this is brought so far that if a Dalit man returns home and sees a strange pair of sandals at the doorstep, he knows immediately what is going on and waits outside until the visitor has left. The system of temple prostitution is the most extreme form of exploitation of Dalit women. The prostitutes, joginis or devadasis, are initiated by an older prostitute when they are young. Each village has one prostitute who takes care of the initiation; she bears the name of the goddess Yellamma. It is believed that the prostitutes personify the goddess, who calls them to join her through prostitution. During certain festivals, the prostitutes perform a public function; the women and girls are expected to go into a trance at given moments. They find these ritual performances thoroughly unpleasant. They particularly dislike dancing near-naked in front of the entire village; rural Indians consider this sort of self-exposure as obscene. The religious context alters nothing to the experience of public humiliation. Grace Nirmala, director of the NGO Jogini Vyavasta Vyathirekha Porata Sanghatana in Andhra Pradesh, estimates that temple prostitution has existed for more than two thousand years. Originally, Dalit girls were offered to a local god as gifts to ward off illness and other evils. Later the system developed into a powerful means of caste exploitation and control of Dalit women. The underlying idea is that Dalits do not have command of their own lives and bodies, rather must serve higher castes at all times and in all imaginable ways. The system is closely connected to the traditional Hindu values that have remained intact in rural areas. In cities, where commercial prostitution is widespread, only older joginis are found in the temples, but the traffic in young Dalit girls from rural areas to urban brothels is a booming business.2 Parents agree to the initiation mainly for economic reasons, although they would claim to have made the decision for the increase in status that is ascribed to the family of an initiated girl. What is more, she stays at home and takes care of her elderly parents: it is evident that a prostitute never marries. It is perhaps no accident that it is the youngest daughters and girls without any brothers who are most commonly initiated. A few joginis say that they were attracted by the many promises of the men, but the ornaments and other gifts are usually short- lived. 
Once a jogini gives birth to a so-called ‘fatherless’ baby, she is much less attractive. Children of a jogini share the plight of their mother; they are set apart and can only marry children of other joginis; moreover, many daughters end up as prostitutes as well. Ashamma, 28, from the village of Palla near Narayanpet in Andhra Pradesh was initiated under pressure of the village leaders, who told her parents that nobody would marry her because she was a girl with a history. As a young girl she had had an affair with a much older man, who left her as soon as she became pregnant. Ashamma is one of the few joginis who succeeded in escaping from prostitution, convinced by Grace Nirmala and her colleagues that a better alternative was possible. Ashamma had to overcome great fears, but finally she took the challenge, and now she runs a small shop that earns her just enough to make ends meet. But as an ex-prostitute with her own business she violates the narrow village moral. The men hanging around Ashamma’s shop seemingly nonchalantly make no effort to conceal their disapproval, for Ashamma has done more than just step out of the prostitution: she has rejected a system in which the individual’s destiny is determined at birth. Nearly thirty years ago in the early days of Dalit activism in the area, young Dalits in Bhal, Gujarat, had a serious conflict with the more conservative village elders about their right to protest against the exploitation of women. For the first time ever, three youngsters took up the issue against a son of a landowner who had violated a Dalit woman. The boys were proud of their action, but the village elders were less pleased and lost no time in offering their apologies to the landowner for the thoughtlessness of the younger generation. After a wave of threats, the incident ended up in the suicide of a Dalit youth, who feared some form of collective revenge from the landowners. Dalit activist Martin Macwan was working in Bhal at the time and followed the course of events from close by; the uncompromising submission of Dalits in the rural area was new for the urban activist. Although he had been engaged in the struggle for Dalit rights as a student, there was a real turning point in his life in 1986 when four Dalit activists were shot dead for demanding their rights. ‘For the first time I realised what caste meant,’ Martin says in the recently built training centre of Navsarjan Trust, the organisation that he founded in 1988. ‘I understood what it really means to question the caste hierarchy. I understood also that this was no incident between two individuals or groups. It is a system. You ask for minimum wage – and the response is violence. You are elected to the city council – and the response is violence. You demand your rights to land – and the response is violence. This is a history of violence. Why? Because others think that they will lose their privileges if Dalits can lead a life worthy of a human being…’ Over the course of the years Martin and his collegues have realised that it is not enough to question the behaviour of others. The change must start with the Dalits themselves. ‘In Navsarjan we have two non- negotiable principles. The first one is the principle of equality. Discrimination is widespread among the Dalit community. There will always be a group or a sub-group that can be seen as lower and be deliberately exploited. Within the organisation we all are equals. Another critical issue is the exploitation of women. 
Men who, as Dalits, know perfectly well what it means to be discriminated against, treat their womenfolk as inferiors. It is of elementary importance to understand that the powerlessness that many of us experience on a daily basis, originates in the conflicts in ourselves.’ In Martin’s view, the variety of ideas on the liberation of Dalits is an advantage rather than a disadvantage. He says that it does not matter so much whether one organisation chooses to put more emphasis on economic factors and another on culture. As long as the discussion continues and people are stimulated to think for themselves and take action, the Dalits and their movement will have a future. On the wall behind Martin’s desk hangs a photo of Ambedkar, no doubt the most influental Dalit activist from the era before independence. ‘Gandhiji, I have no homeland,’ Ambedkar is said to have answered to Gandhi, who praised him for his patriotism. ‘How can I belong to a country where we are treated worse than dogs and cats?’ continued Ambedkar to Gandhi, who was taken by surprise, because he had thought that the elegant advocate was a Brahmin.3 ‘Babasaheb’ Ambedkar was Minister of Justice for a short time in the first government of Nehru. He maintained an uncompromising position against Gandhi, whose welfare politics he found patronizing. Dalits had to be able to organise themselves; they were not an oppressed people, who needed to be helped by others, but strong people with their own talents. According to Ambedkar, Hinduism was the greatest obstacle to the liberation of Dalits. His resistance to the caste system led him to convert to Buddhism at the end of his life. Even today, thousands of Dalits follow his example every year. Ambedkar had many followers even in his own time. Chandubhai Maheriya, officer in the Ministry of Education of Gujarat, grew up in the working-class area of Rajpur Hirpur in Ahmedabad. He was the youngest son of a cotton factory worker, who was strongly influenced by the ideas of Ambedkar. His family was poor, but the father wanted his children to study, which was not easy in the densely populated area where there was animosity between the various Dalit groups. Chandubhai tells: ‘Originally, we were rohits, leather workers, but the house we lived in was dominated by vankars, weavers, who considered themselves higher and better. They pestered us until we had to leave. We moved to another block where there were no toilet facilities, so that we had to keep going back to the old house. The vankars made us pay for using the toilet, two rupees a month. It was illegal of course. At school most teachers were vankars; they struck me the hardest and most often – just because they knew that I was a rohit. The only support I received was from teachers who were not Dalits. I was good at school, but I wore torn clothes. A lady teacher gave me a new pair of trousers and a shirt. When I got back home, my elder brothers snatched them away from me, because, well, I was the youngest and the weakest and I would die soon anyway… That was also their attitude towards food. I have survived thanks to the leftovers from my father’s lunchbox. As soon as I saw him coming out of the factory, I would run to him. He had always saved something for me and I ate it up on the way, before my brothers could get a hold of it. – You’d be surprised to know how many children have survived in this way.’ Rajpur Hirpur is one of the many over-populated working-class areas in Ahmedabad that are nowadays rife with unemployment and disillusion. 
After the closing of nearly one hundred cotton mills in the 80s and 90s, the workers, who are mainly Dalits and Muslims, eke out a living by doing odd jobs. Whole families collect plastic bottles, or they prepare printing leftovers for recycling: badly paid piecework with a high risk. Someone sells tea and sweets from a wooden cart at a street corner. Behind the next corner yawn the ruins of Sarangpur Cotton Mill No. 2. The mill was closed in 1996 as one of the last, and destroyed shortly thereafter. Since then, the inhabitants are trying to take over the area, for the living situation in Rajpur Hirpur is all but explosive; the population has grown rapidly from 15,000 to 60,000. ‘Of course Dalits were discriminated against,’ says former labour union leader Bhudarbhai. ‘Most Dalits used to be weavers, but in the mills they were allowed only to spin, because the idea that the spit of a Dalit might get into the fabric was inconceivable to higher castes, and weavers customarily repair broken thread with spit. Later on, in some mills where Dalits were allowed to weave, the bosses arrived at a pragmatic solution: they sprinkled water on the fabric to undo the pollution.’ Dalits and Muslims were also forced to eat their lunch separately, because they were ‘meat eaters’. ‘In reality, higher castes also eat meat, but in public they always pretend that they don’t. The taboo dies hard,’ says Bhudarbhai. In spite of the discrimination, working in the mills had also its advantages. The mill workers were well organised. The labour unions that were set up in Gandhi’s time took care of the basic social rights of the workers. In the contemporary factories, the labour unions are only tokens. The workers have hardly any chance of negotiation; a Dalit who is employed has to be content with a daily wage of fifty rupees, less than the official minimum wage. Although working families like Chandubhai Maheriya’s had to struggle to make ends meet, they were aware of the importance of education for their children. Education was also encouraged by labour unions; workers wanted to see their children have a better future. Dalits from the present adult generation share memories of working during the day and studying at night. Baskaran, a Dalit activist from Madurai recounts: ‘Our house had no electricity and I studied outside by the street light. As of my early school days, I had no more than four or five hours sleep a night.’ Baskaran grew up in a sharply segregated cotton mill workers’ neighbourhood. Even today, the Mill Colony has a ‘Dalit Lane’, a street where mainly Dalits live. The tiny, dark houses are still inhabited by whole families. ‘We four brothers slept outside, our sisters slept in the room.’ The level of education of the younger generations has dropped dramatically. Many parents find it unnecessary to let their children study only to face unemployment later. In Rajpur Hirpur there are young people with an MA in English literature who repair old turpentine containers. Salary: one rupee for each container.. Dalits and other poor people face an uncertain future. Today, about 80% of the Dalits still live in rural areas, but the pull of the cities is strong. Like other rapidly industrializing countries in Asia, the ratio of urban to rural population in India – now 30% to 70% – is likely to change radically in the next few years. The rural areas are confronted with drastic changes, too. 
In the globalising economy, every opportunity is explored to grab land and natural resources in areas that were formerly considered to be remote. In 21 coastal villages in the Nagapattikam district in Tamil Nadu, the soil has been irrevocably polluted by commercial prawn farms. Prawns have become one of the major export products of the state during the last decades. Businessmen from Bangalore and Chennai, but also from as far away as Uttar Pradesh, are busy buying up uncultivated waste land, in exchange for which they promise not only money but also employment. Kanagasabai, a Dalit and the village leader of Kattur, shakes his head in disbelief. ‘Actually, they never employed more than four or five people in all of Kattur.’ Slowly the companies have pushed further inland to fertile fields. The villagers, who are mostly illiterate, were not aware of the damaging side-effects of the industry; they were too slow to understand that the brackish water used to water the ponds would effect the soil. Suddenly, their irrigation canals were drained by the prawn farms. In the meantime, whole villages have become uninhabitable. Kanagasabai tells laconically of the attempt of one of the farms to incorporate the Dalit graveyard as well. ‘It would seem they are not satisfied with just breaking us up; they won’t even let our ancestors rest in peace.’ In Kattur, as in other coastal places, there have been protests. A desperate fishing community smashed dozens of prawn ponds in a single night, and elsewhere prawn farms have been sabotaged by chemicals which have been thrown into the ponds. The reaction of the farms adds one more chapter to the long list of house burning, torture and rape, eventually in cooperation with the police and hired private guards. The very fact that the Dalits protest the loss of their livelihood and the destruction of their villages infuriates many landowners, government officers and politicians of the higher castes, as every form of Dalit protest is considered to be a potential danger to the continuance of the caste system. ‘That’s how it has to be,’ says a municipal officer in Vagoda in the district of Surendranagar in Gujarat, as explanation. ‘Castes are the basis of our society. If the caste system loosens up, the whole society will fall apart.’ Vagoda has three hundred families, out of which more than half are landowners. They depend on the labour of the landless Dalits, but they forbid them to enter the village temple. Why? ‘Because they are non- vegetarians. We, landowners and larger farm owners, are vegetarians. That’s the difference.’ The euphemistic appeal to ritual purity is a poor justification for the oppression of the Dalits. ‘Caste discrimination has been silently passed from generation to generation for three thousand years,’ says Martin Macwan. ‘That is not at all necessary. It does not require a PhD and a revolution to make an end to the prejudice. It is enough for parents to tell their children three times a day, like a dose of medicine, that they are not higher or lower than anybody else and that everyone is equal.’ One’s capacity to protest starts with the realisation that discrimination is a puff of hot air. It does not solve the real problems of poverty, unemployment and humiliation, but it gives the Dalits resistance. The one who can resist, understands that it is not self- evident to bow one’s head when being addressed by a landowner. The language can serve as a means of empowerment, too. 
‘We have given the term “Dalit” a new meaning: Dalit is not a caste, but a moral position of people who believe in equality. This position makes us progressive; backward are those who think that people are unequal. We have turned the positions around, you see.’ Leven als Dalit / Dalit Lives is available in the webshop of Slowdocs Publishers or through your regular bookshop.
https://www.titojoe-docs.nl/elisaveini/dalit-lives/
2022-05-16T16:01:55
CC-MAIN-2022-21
1652662510138.6
[]
www.titojoe-docs.nl
Create machine catalogs Note: This article describes how to create catalogs using the Full Configuration interface. If you’re using Quick Deploy to create Azure resources, follow the guidance in Create catalogs using Quick Deploy.. The Manage > Full Configuration interface guides you to create the first machine catalog. After you create the first catalog, you create the first delivery group. Later, you can change the catalog you created, and create more catalogs. OverviewOverview When you create a catalog of VMs, you specify how to provision those VMs. You can use Machine Creation Services (MCS). Or, you can use your own tools to provide machines. - If you use MCS to provision VMs, you provide an image (or snapshot) to create identical VMs in the catalog. Before you create the catalog, you first use hypervisor or cloud service tools to create and configure the image. This process includes installing a Virtual Delivery Agent (VDA) on the image. Then you create the machine catalog in the Manage > Full Configuration interface. You select that image (or a snapshot of an image), specify the number of VMs to create in the catalog, and configure additional information. - If your machines are already available (so you do not need the management interface’s performance degrades significantly. Access images from Azure Shared Image Gallery When selecting an image to use for creating a machine catalog, you can select images you created in the Azure Shared Image Gallery. These images appear in the list of images in the Master Image screen of the Machine Catalog Setup wizard. For these images to appear, you must: - Configure a Citrix Virtual Apps and Desktops site. - Connect to the Azure Resource Manager. - In the Azure portal, create a resource group. For details, see Create an Azure Shared Image Gallery using the portal. - In the resource group, create a Shared Image Gallery. - In the Shared Image Gallery, create an image definition. - In the image definition, create an image version. RDS license check Creation of a machine catalog containing Windows multi-session OS machines includes an automatic check for valid Microsoft RDS licenses. The catalog is searched for a powered-on and registered machine to perform the check on. - If a powered-on and registered machine cannot be found, a warning is displayed, explaining that the RDS licensing check cannot be performed. - If a machine is found and an error is detected, Manage > Full Configuration displays a warning message for the catalog containing the detected issue. To remove an RDS license warning from a catalog (so that it no longer appears in the display), select the catalog. Select Remove RDS license warning.. Troubleshooting information is provided in the catalog creation wizard, and after you add cannot be obtained about a machine (perhaps because it had never registered), an: - MCS Storage Optimization creates a write cache style disk for each VM. - MCS added the ability to use full clones as opposed to the Delta disk scenario described in the previous section. Hypervisor features might also enter into the equation. For example: - Citrix Hypervisor IntelliCache creates a Read Disk on local storage for each Citrix Hypervisor. This option saves on IOPS against the image which might be held on the shared storage location. Hypervisor overhead Different hypervisors use containing 20 GB for the virtual disk, 16 GB for the swap file, and 100 MB for log files consuming. 
- Updating the catalog The Machine Creation Services (MCS) storage optimization feature is also known. - Achieve diagnostic improvements during machine catalog creation. - The VM write cache disk is created and formatted automatically when booting a VM for the first time. Once the VM is up, the write cache file mcsdif.vhdxis written into the formatted volume MCSWCDisk. - The pagefile is redirected to this formatted volume, MCSWCDisk. As a result, this disk size considers the total amount of disk space. It includes the delta between the disk size and the generated workload plus the pagefile size. This is typically associated with VM RAM size. Enabling MCS storage optimization updates When creating a machine catalog, the administrator can configure the RAM and disk size as follows: The machine catalog setup user interface of the web-based console: To enable the MCS I/O storage optimization feature,. With the MCS storage optimization feature enabled, you can configure the following settings when creating a catalog. These settings apply to both Azure and GCP environments. Configure the size of the disk and RAM used for caching temporary data. Select the storage type for the write-back cache disk. - For Azure, the following options are available: Premium SSD, Standard SSD, and Standard HDD. For more information, see Microsoft Azure Resource Manager cloud environments. - For GCP, the following options are available: Standard persistent disk, Balanced persistent disk, and SSD persistent disk. For more information, see Google Cloud environments. Choose whether you want the write-back cache disk to persist for the provisioned VMs. Select Enable write-back cache to make the options available. By default, Use non-persistent write-back cache disk is selected. - Use Use non-persistent write-back cache disk to control whether the write-back cache disk must not persist for the provisioned VMs in Azure. The disk is deleted during power cycles and any data redirected to the disk is lost. Using this option, you can use Azure temporary disk as storage because the option is suitable for non-persistent write-back cache disk. This reduces your storage cost and improves I/O performance. You can also use PowerShell. For details, see Using PowerShell to create a catalog with non-persistent write-back cache disk. Use non-persistent write-back cache disk. To use this option: - Select the check box Enable write-back cache. - Enter a valid positive disk cache size in GB. The VM will not work properly if the size is too small. - Select the option Use non-persistent write-back cache disk. - Use Use persistent write-back cache disk to control whether the write-back cache disk persists for the provisioned VMs in Azure and Google Cloud Platform (GCP). By default, persistent write-back cache disk is disabled, causing the disk to be deleted during power cycles and any data redirected to the disk to be lost. Enabling this option increases your storage costs. You can also use PowerShell. For details, see Using PowerShell to create a catalog with persistent write-back cache disk. Use persistent write-back cache disk. To use this option: - Select the check box Enable write-back cache. - Enter a valid positive disk cache size in GB. The VM will not work properly if the size is too small. - Select the option Use persistent write-back cache disk. - Use Retain system disk during power cycles to control whether to retain system disks for VDAs during power cycles. This behavior applies to both Azure and GCP environments. 
- Retain system disk during power cycles. By default, the system disk is deleted on shutdown and recreated on startup. This ensures that the disk is always in a clean state but results in longer VM restart times. If system writes are redirected to the RAM cache and overflow to the cache disk, the system disk remains unchanged. Enabling this option increases your storage costs but reduces VM restart times. Select Enable write-back cache to make this option available. - Retain VMs across power cycles. Select this option to retain your VM customization and to enable the VMs to be started through the Azure or GCP portal. Enable Retain system disk during power cycles to make this option available. Note: Azure ephemeral OS disk and MCS I/O cannot be enabled at the same time. For more information, see Azure ephemeral disk and Machine Creation Services (MCS) storage optimization (MCS I/O). Conditions for Azure temporary disk to be eligible for write-back cache disk You can use the Azure temporary disk as write-back cache disk only if all the following conditions are satisfied: The write-back cache disk must non-persist as the Azure temporary disk is not appropriate for persistent data. The chosen Azure VM size must include a temporary disk. The ephemeral OS disk is not required to be enabled. Accept to place the write-back cache file on Azure temporary disk. The Azure temporary disk size must be greater than the total size of (write-back cache disk size + reserved space for paging file + 1 GB buffer space). Using PowerShell to create a catalog with non-persistent write-back cache disk To configure a catalog with non-persistent write-back cache disk, use the PowerShell parameter New-ProvScheme CustomProperties. The custom properties are: UseTempDiskForWBC. This property indicates whether you are accepting to use the Azure temporary storage to store the write-back cache file. This must be configured to true when running New-ProvSchemeif you want to use the temporary disk as write-back cache disk. If this property is not specified, the parameter is set to False by default. For example, using the CustomProperties parameter to set UseTempDiskForWBC to true: -CustomProperties '<CustomProperties xmlns=" xmlns: ` <Property xsi: ` <Property xsi: ` <Property xsi: ` <Property xsi: ` <Property xsi: ` <Property xsi: ` </CustomProperties>' <!--NeedCopy--> Note: After you commit the machine catalog to use Azure local temporary storage for write-back cache file, it cannot be changed to use VHD later. Non-persistent write-back cache disk scenarios The following table describes three different scenarios when temporary disk is used for write-back cache while creating machine catalog. Using PowerShell to create a catalog with persistent write-back cache disk To configure a catalog with persistent write-back cache disk, use the PowerShell parameter New-ProvScheme CustomProperties. Tip: Use the PowerShell parameter New-ProvScheme CustomPropertiesonly for cloud-based hosting connections. If you want to provision machines using a persistent write-back cache disk for an on-premises solution (for example, Citrix Hypervisor) PowerShell is not needed because the disk persists automatically. This parameter supports an extra property, PersistWBC, used to determine how the write-back cache disk persists for MCS provisioned machines. The PersistWBC property is only used when the UseWriteBackCache parameter is specified, and when the WriteBackCacheDiskSize parameter is set to indicate that a disk is created. 
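As a concrete illustration of the properties described above, a trimmed New-ProvScheme call that enables the write-back cache and sets PersistWBC to true might look like the following sketch. The scheme name, hosting unit, identity pool, image path, and cache sizes are placeholders only; substitute values for your own environment and verify the parameters against your SDK version.

# Illustrative sketch only -- names and sizes are placeholders
New-ProvScheme -ProvisioningSchemeName "Win10-MCSIO" `
    -HostingUnitName "MyHostingUnit" `
    -IdentityPoolName "Win10-MCSIO" `
    -MasterImageVM "XDHyp:\HostingUnits\MyHostingUnit\image.folder\demo.resourcegroup\Win10.snapshot" `
    -CleanOnBoot `
    -UseWriteBackCache -WriteBackCacheDiskSize 40 -WriteBackCacheMemorySize 256 `
    -CustomProperties '<CustomProperties xmlns="http://schemas.citrix.com/2014/xd/machinecreation" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
        <Property xsi:type="StringProperty" Name="PersistWBC" Value="true"/>
    </CustomProperties>'

With PersistWBC set to true, the write-back cache disk survives administrator-initiated shutdowns as described above; setting it to false (or omitting it) restores the default delete-on-shutdown behavior.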
Note: This behavior applies to both Azure and GCP where the default MCSIO write-back cache disk is deleted and re-created when power cycling. You can choose to persist the disk to avoid the deletion and recreation of MCSIO write-back cache disk.--> Note: This example only applies to Azure. The properties are different in GCP environment. When using these properties, consider that they contain default values if the properties are omitted from the CustomProperties parameter. The PersistWBC property has two possible values: true or false. Setting the PersistWBC property to true does not delete the write-back cache disk when the Citrix Virtual Apps and Desktops administrator shuts down the machine from the management interface. Setting the PersistWBC property to false deletes the write-back cache disk when the Citrix Virtual Apps and Desktops administrator shuts down the machine from the management interface. Note: If the PersistWBCproperty is omitted, the property defaults to false and the write-back cache is deleted when the machine is shut down from the management interface.. For example, set New-ProvSchemeto use the write-back cache while setting the PersistWBC property to true:--> Improve boot performance with MCSIO You can improve boot performance for Azure and GCP managed disks when MCSIO is enabled. Use the PowerShell PersistOSDisk custom property in the New-ProvScheme command to configure this feature. Options associated with New-ProvScheme include: <CustomProperties xmlns=" xmlns: <Property xsi: <Property xsi: <Property xsi: </CustomProperties> <!--NeedCopy--> To enable this feature, set the PersistOSDisk custom property to true. For example:OsDisk`"-->. - from Manage > Full Configuration., the management interface > DaaS. - Select Manage. - If this is the first catalog being created, you are guided to the correct selection (such as “Set up the machines and create machine catalogs to run apps and desktops.”). The catalog creation wizard opens and walks you through the items described below. If you already created a catalog and want to create another, from Manage > Full Configuration, select Machine Catalogs in the left pane. Then select Create Machine Catalog. The wizard walks you through the pages described below. The pages you see may differ, depending on the selections you make, and the connection (to a host) you use. Hosts / virtualization resources. - a Remote PC Access catalog. The Machine Management page indicates how machines are managed and which tool you use to deploy machines. Choose if machines in the catalog will be power managed through the Full Configuration interface. - Machines are power managed through the Full Configuration interface or provisioned through a cloud environment, for example, VMs or blade PCs. This option is available only if you already configured a connection to a hypervisor or cloud service. - Machines are not power managed through the Full Configuration interface, for example, physical machines. If you indicated that machines are power managed through the Full Configuration interface. Master imageMaster image This page appears only when you are using MCS to create VMs. Select the connection to the host hypervisor or cloud service, and then select the snapshot or VM created earlier. Note: - When you are using MCS, do not run Sysprep on master images. - If you specify a master image rather than a snapshot, the management interface creates a snapshot, but you cannot name it. Do not change the default minimum VDA version selection. 
To enable use of the latest product features, ensure that the master image has the latest VDA version installed. might contain extra pages specific to that host. For example, when using an Azure Resource Manager master image, the catalog creation wizard contains a Storage and License Types page. For host-specific information, follow the appropriate link listed in Start creating the catalog. virtual disk size in GB and the drive letter. - If your deployment uses more than one zone (resource location), other tools (but not MCS): An icon and tooltip for each machine added (or imported) help identify machines that might not be eligible to add to the catalog, or be unable to register with a Cloud Connector., consider the space needed for: - Temporary data files created by Windows itself, including the Windows page file. - User profile data. - ShareFile data that is synced to users’ sessions. - Data that might be created or copied by a session user or any applications users may install inside the session. If you enable the Disk cache size check box, temporary data is initially written to the memory cache. When the memory cache reaches its configured limit (the Memory allocated to cache value), the oldest data is moved to the temporary data cache disk.. You cannot change the cache values in a machine catalog after it is created. Using CSV files to bulk add machines If you use the Full Configuration management interface, you can bulk add machines by using CSV files. The feature is available to all catalogs except catalogs created through MCS. A general workflow to use CSV files to bulk add machines is as follows: On the Machines. You can also export machines from a catalog on the same Machines page. The exported CSV of machines can then be used as a template when adding machines in bulk. To export machines: On the Machines page, select Export to CSV file. A CSV file containing a list of the machines is downloaded. Open the CSV file to add or edit machines as needed. To add machines in bulk using the saved CSV file, see the previous section, Using CSV files to bulk add machines. Note: - This feature is not available for Remote PC Access catalogs. - Export and import of machines in CSV files is only supported between catalogs of the same type. NIC (NICs)NIC (NICs) This page does not appear when you are creating Remote PC Access catalogs.. Specify the Active Directory machine accounts or Organizational Units (OUs) to add that correspond to users or user groups. Do not use a forward slash (/) in an OU name. You can choose a previously configured power management connection or select not to use power management. If you want to use power management but a suitable connection has not been configured yet, you can create that connection later and then edit the machine catalog to update the power management settings. You can also bulk add machines by using CSV files. A general workflow to do that is as follows: On the Machine Accounts. Machine identitiesMachine identities This page appears only when using MCS to create VMs. Each machine in the catalog must have a unique identity. This page lets you configure identities for machines in the catalog. The machines are joined to the identity after they are provisioned. You cannot change the identity type after you create the catalog. A general workflow to configure settings on this page is as follows: - Select an identity from the list. - Indicate whether to create accounts or use existing ones, and the location (domain) for those accounts. 
You can select one of the following options: On-premises Active Directory. Machines owned by an organization and signed into with an Active Directory account that belongs to that organization. They exist on-premises. Azure Active Directory joined. Machines owned by an organization and signed into with an Azure Active Directory account that belongs to that organization. They exist only in the cloud. For information about the requirements, limitations, and considerations, see Azure Active Directory joined. Hybrid Azure Active Directory joined. Machines owned by an organization and signed into with an Active Directory Domain Services account that belongs to that organization. They exist in the cloud and on-premises. For information about the requirements, limitations, and considerations, see Hybrid Azure Active Directory joined. Note: Before you can use hybrid Azure Active Directory join, make sure that your Azure environment meets the prerequisites. See Non-domain-joined. Machines not joined to any domain. For information about the requirements and limitations, see Non-domain-joined. Important: - If you select On-premises Active Directory or Hybrid Azure Active Directory joined as the identity type, each machine in the catalog must have a corresponding Active Directory computer account. - The Non-domain-joined identity type requires version 1811 or later of the VDA as the minimum functional level for the catalog. To make it available, update the minimum functional level. - The Azure Active Directory joined and Hybrid Azure Active Directory joined identity types require version 2203 or later of the VDA as the minimum functional level for the catalog. To make them available, update the minimum functional level. If you create accounts, you must have permission to create computer accounts in the OU where the machines reside. Each machine in the catalog must have a unique name. Specify the account naming scheme for the machines you want to create. For more information, see Machine account naming scheme. Note: Make sure that OU names do not use forward slashes ( /). If you use existing accounts, browse to the accounts or click Import and specify a .csv file containing account names. The imported file content must use the format: - [ADComputerAccount] ADcomputeraccountname.domain Ensure that there are enough accounts for all the machines you are adding. The Full Configuration interface manages those accounts. Therefore, either allow that interface to reset the passwords for all the accounts or specify the account password, which must be the same for all accounts. For catalogs containing physical or existing machines, select or import existing accounts and assign each machine to both an Active Directory computer account and to a user account. Machine account naming scheme Each machine in a catalog must have a unique name. You must specify a machine account naming scheme when creating a catalog. Use wildcards (hash marks) as placeholders for sequential numbers or letters that appear in the name. When specifying a naming scheme, be aware of the following rules: - The naming scheme must contain at least one wildcard. You must put all wildcards together. - The entire name, including wildcards, must contain at least 2 but no more than 15 characters. It must include at least one non-numeric and one # (wildcard) character. - The name must not include spaces or any of the following characters: ,~!@'$%^&.()}{\/*?"<>|=+[];:_“.. - The name cannot end with a hyphen (-). 
Also, leave enough room for growth when specifying the naming scheme. Consider this example: If you create 1,000 machine accounts with the scheme “veryverylong#”, the last account name created (veryverylong1000) contains 16 characters. Therefore, the naming scheme will result in one or more machine names that exceed the maximum of 15 characters. You can indicate whether the sequential values are numbers (0-9) or letters (A-Z): - 0-9. If selected, the specified wildcards resolve to sequential numbers. - A-Z. If selected, the specified wildcards resolve to sequential letters. For example, a naming scheme of PC-Sales-## (with 0-9 selected) results in accounts named PC-Sales-01, PC-Sales-02, PC-Sales-03, and so on. Optionally, you can specify what the account names start with. - If you select 0-9, accounts are named sequentially, starting with the specified numbers. Enter one or more digits, depending on how many wildcards you use in the preceding field. For example, if you use two wildcards, enter two digits or more. - If you select A-Z, accounts are named sequentially, starting with the specified letters. Enter one or more letters, depending on how many wildcards you use in the preceding field. For example, if you use two wildcards, enter two letters or more. Domain credentialsDomain credentials Select Enter credentials and enter user credentials with sufficient permissions to create machine accounts in Active Directory. Note: If the identity type you selected in Machine Identities is Hybrid Azure Active Directory joined, the credentials you enter must have been granted the Write userCertificatepermission. Workspace Environment Management (optional)Workspace Environment Management (optional) This page appears only when you use the Advanced or Premium edition of Citrix DaaS. Select a Workspace Environment Management (WEM) configuration set to which you want to bind the catalog. A configuration set is a logical container used to organize a set of WEM configurations. Binding a catalog to a configuration set lets you use WEM to deliver the best possible workspace experience to your users. Important: - Before you can bind a catalog to a configuration set, you must set up your WEM service deployment. Sign in to Citrix Cloud and then launch the WEM service. For more information, see Get started with Workspace Environment Management service. - If you already use WEM, the machines in the catalog that you are about to provision might already be present in a configuration set, for example, through Active Directory. In that case, we recommend that you use Active Directory consistently to perform the configuration and skip this configuration. If the selected configuration set does not contain settings relating to the basic configuration of WEM, the following option appears: - Apply basic settings to configuration set. The option lets you quickly get started with WEM by applying basic settings to the configuration set. Basic settings include CPU spike protection, auto-preventing CPU spikes, and intelligent CPU optimization. To view the basic settings, click the here link. To modify them, use the WEM console. VDA upgrade (optional)VDA upgrade (optional) Important: - This feature is available as a preview. If you are interested in evaluating it, submit your request through this form. - This feature requires the Citrix VDA Upgrade Agent to work. Installing the agent is an option when you install VDA version 2109 or later, or VDA version 2203 LTSR or later. By default, the agent is not installed. 
For more information about the VDA Upgrade Agent, see Step 6. Install additional components. - This feature applies to machines that are not created using MCS (for example, physical machines). Select the VDA version to upgrade to. If specified, the VDAs in the catalog that have the VDA Upgrade Agent installed can upgrade to the selected version — immediately or at a scheduled time. Note: - This feature supports upgrading only to the latest VDA. The time at which you create a VDA upgrade schedule or upgrade a VDA determines the latest version of the VDA. - After you configure VDA upgrade settings, it might take up to 15 minutes for the VDA Upgrade field to reflect the latest status. To show the VDA Upgrade column, select Columns to Display in the action bar of Machine Catalogs, select Machine Catalog > VDA Upgrade, and click Save. Choose a VDA track that suits your deployment: Latest CR VDA. Current Releases (CRs) deliver the latest and most innovative app, desktop, and server virtualization features and functionality. Latest LTSR VDA. Long Term Service Releases (LTSRs) are recommended for large enterprise production environments that prefer to keep the same base version for an extended period. After catalog creation, you can upgrade VDAs as needed. For more information, see Upgrade VDAs. If you want to enable VDA upgrade later, you can return to this page by editing the catalog after catalog creation. For more information, see Configure VDA upgrade settings by editing a catalog. Summary, name, and descriptionSummary, name, and description On the Summary page, review the settings you specified. Enter a name and description for the catalog. This information appears in the Full Configuration management interface. When you’re done, select Finish to start the catalog creation. In Machine Catalogs, the new catalog appears with an inline progress bar. To view details of the creation progress: Hover the mouse over the machine catalog. In the tooltip that appears, click View details. A step-by-step progress graph appears where you can see the following: - History of steps - Progress and running time of the current step - Remaining steps Important consideration about setting custom propertiesImportant consideration about setting custom properties Custom properties must be set correctly at New-ProvScheme and Set-ProvScheme in GCP and Azure environments. If you specify non-existing custom property or properties, you get the following error message, and the commands fail to run. Invalid property found: <invalid property>. Ensure that the CustomProperties parameter supports the property. Important consideration about setting ProvScheme parametersImportant consideration about setting ProvScheme parameters When you use MCS to create a catalog, you get an error if you: - Set the following New-ProvSchemeparameters in unsupported hypervisors when you create a machine catalog: Update the following Set-ProvSchemeparameters after you create the machine catalog: CleanOnBoot UseWriteBackCache DedicatedTenancy TenancyType UseFullDiskCloneProvisioning More informationMore information Where to go nextWhere to go next If this is the first catalog created, you are guided to create a delivery group. To review the entire configuration process, see Plan and build a deployment. 
https://docs.citrix.com/en-us/citrix-daas/install-configure/machine-catalogs-create.html
2022-05-16T15:59:35
CC-MAIN-2022-21
1652662510138.6
[]
docs.citrix.com
- Log in to Bugzilla - Click the “Administration” link in the header or footer - Click the “Field Values” link - Click the “Status” field - Click the “Add” link at the bottom - A new page for adding a new value to the “Status” field will open. - Enter “Value” – required - Enter “Sortkey” – required - “Status” can be open or closed (a closed status requires a resolution) Note: The open/close attribute can only be set once, when you create the status. It cannot be edited later. - Click the “Add” link
https://docs.devzing.com/bugzilla-adding-field-values-status/
2022-05-16T14:48:04
CC-MAIN-2022-21
1652662510138.6
[]
docs.devzing.com
How do you delete an attachment on a Bugzilla entry? Good question! Attachments in Bugzilla are used to help document a bug. Whether the document you’ve attached was the wrong screenshot, was attached to the wrong bug record, was uploaded in error (a picture of you and friends out for some Thai food), or was something an employee maliciously put in to get back at you, you have the wrong document. Time to delete. What do you do? The answer is just a bit different for the administrator than for the non-administrator. Administrators Administrators may easily delete a bug attachment. First give yourself the ability to delete attachments. - From the Main Page… Administration -> Parameters. Select “Attachments” on the left-hand table. - Scroll down just a bit to “allow_attachment_deletion”. - Turn this feature “on”. - Find the bug with the wayward attachment and click the Details link on the right side of the attachment. - Below the Comment box you will see a link to Delete the attachment. - Click the link (you may enter a reason for deletion if you want) and click Yes, delete. Your attachment is gone. Non-Administrators As a non-administrator you can’t really delete an attachment; all you can do is mark it “Obsolete.” This really only hides the attachment unless you click the Show Obsolete button, but it does keep it from being front and center. - Open the bug. (Go to the “Bug List” page, find the “Summary” of the bug and click the link. This opens the bug.) Information about the attachment appears at the bottom of the bug page. - Click the details link next to the attachment. - Then click edit details to show the obsolete check box. - Select the “Obsolete” check box. - Click Submit to save your changes.
https://docs.devzing.com/bugzilla-deleting-an-attachment/
2022-05-16T14:53:55
CC-MAIN-2022-21
1652662510138.6
[]
docs.devzing.com
Security Genesys Co-browse is part of a solution deployment, and security should be considered at the solution level. For example, Genesys Co-browse takes measures to make sure hidden attacks in the DOM do not make it to agent desktops. Meanwhile, you must consider other areas, like only exposing Genesys Co-browse on HTTPS ports, hardening intermediate proxies so as to suppress or add certain HTTP headers, and so on. The Open Web Application Security Project provides excellent guidelines to help. Genesys Co-browse supports the following ways to protect data over the web: - Encryption of co-browsing data—Co-browsing related data passed between the user, the Co-browse Server, and the agent is encrypted through the HTTPS connection: - Configure Security Certificate. - Configuring Cipher Suites—To configure specific cipher suites to include or exclude, see the Disabling/Enabling Specific Cipher Suites section of the Jetty TLS documentation. - HTTPS connection for Jetty—A Co-browse Server application defined in Configuration Server can have both HTTP and HTTPS connections for Jetty supplied with Co-browse. Related documentation: - Add the secure port section in Creating the Co-browse Server Application Object in Genesys Administrator - Specify the url option in the cobrowse section in Configuring Workspace Desktop Edition to allow the Plug-in to work with co-browsing. - Security with External Cassandra—Starting from 8.5.1, Genesys Co-browse supports secure access interfaces through authentication and authorization and secure network traffic through TLS. Related documentation: Cassandra Security - A configuration section allows you to list all the domains whose resources are allowed to be proxied through the Co-browse server. Use this to prevent unauthorized parties from abusing the Co-browse server proxy. - The disableCaching option.
https://docs.genesys.com/Documentation/GCB/latest/Deployment/Co-browseSecurity
2022-05-16T16:05:23
CC-MAIN-2022-21
1652662510138.6
[]
docs.genesys.com
First StepsTime to read: 10 minutes: - Navigation of PepperShop Administration (main and sub menus) - Input field to browse navigation / button at the top: collapse left navigation - Shopping cart = change to customer page, floating ring = help for current page content Check administration password protection First of all you have to make sure that the shop administration is protected with a password. Therefore, first go to the shop administration. To do this, add the term shop/Admin/ to your existing shop URL (if there is an index.php extension, delete it first). Example: - please note that this is case-sensitive. Check if a reddish bar appears at the top with a warning that a password has not yet been set. If there is a corresponding message, please follow the displayed instructions and save the administration. Verification of system compatibility If the shop does not run on a hosting system of the PepperShop manufacturer Glarotech: You should now first switch to the main menu of the shop administration of your PepperShop. Go first to the point ‘Shop settings’ > ‘Shop configuration’. Here you should have a look at the diagnostic output. Red dots indicate problems. Further information on open questions about the displayed diagnostic outputs can be found in the PepperShop Forums or in the FAQ list. Initial configuration of the shop system Afterwards you should go through the following menus in the Shop product characteristics (variants) for parent products and sub products2. - Delivery countries and shipping - For each country group you can specify payment methods, activation, fees and shipping methods. For each type of shipment their costs depending on the type of billing. - currency settings - Activate other currencies here and enter the exchange rate. - Languages - The PepperShop can be operated in several languages. Further language sets are offered here: Layout settings Open the section Layout Settings in the main menu. You will find the following subcategories: - Themes / Layout Management - Here you choose a theme and adjust the design 3. Individual implementations are also possible. Ideas can be picked up at the live shops: - Upload shop buttons (buttons) and pictures (background & shop logo)4 - Most shop buttons are now created via CSS and dynamically rendered via layout management. However, some graphics are still used in the system. These can be replaced here with your own files – e.g. stock indicators. - Image upload - Background images and the shop logo are uploaded here. Entry of the assortment Category management First create the categories in the category management. This is where the products are later classified. The pre-installed demo categories can be easily deleted / ignored. The desired action can be selected by calling up the category. Products Now the products are entered or imported, e.g. via ‘Product’ > ‘Add new product’ or under ‘Import’ / ‘Export’ > ‘Import / Export Tool’. An example assortment is available after a new installation. Tip: If you are using the import/export tool, you should first create an export so that all column captions are known. Please pay attention to the date formats (US / DE)5. If you are working with variants per article, you should decide early whether you want to use the easy-to-administer standard variants of the shop, but with which no export, no multilingual variants and no stock management per variant are possible. 
Alternatively, you can use the parent and sub product to use the characteristics in the shop without the disadvantages mentioned. Details: Parent/Subproduct PDF Guide. The switch for the variants can be found in ‘Shop settings’ > ‘General shop settings’ > ‘Edit products’.. The PepperShop is Trusted Shops pre-certified. You are faster on the market, receive discounted conditions and benefit from special discounts. If you are located in Germany or supply to the German market, your e-commerce system must also comply with the legal provisions applicable in the target market (this provision does not apply only to Germany). We have prepared a special preconfiguration for Germany: Important settings for Germany (company headquarters or delivery address) General shop settings - shop configuration - Order overview Display → on the order completion page - Show article pictures in shopping cart - Edit product - Product price additional advertisement → ☑ Display VAT information → ☑ Show shipping info → ☑ Use basic price - Customer info/revocation/general terms and conditions - product names describe the essential characteristics of an article! Via Own Contents - Edit navigation – Create new entry – Page with content – Type = Page with content – Page = Shipping and payment insert the entry (e.g. with Infolinks). Search engine optimization (SEO) The PepperShop offers automatic search engine optimization for all shop data, especially categories and products, as well as static pages. Thus products are optimally prepared for search engines. But the most important thing is that you register your shop with all relevant search engines to generate more traffic. For Google, the login URL is as follows (you need to create a free Webmaster Tools account): After all products are registered, one should in Product > Bulk management on the button so that search engine optimization uses all products (all products that do not yet have a virtual file name are given a edited name here). If you like, you can also create a sitemap6 (product > bulk management > sitemap button) and link it e.g. at the Google Webmaster-Tools; this is highly recommended. Analyze, improve Check which products work well, which you still miss in the shop and where the visitors move in the shop (and leave it again). The PepperShop offers various options for this: - Show turnover (per article / over the whole shop / in a time window): Go to the shop administration and select ‘Customers/Orders’ > ‘Article Order Search’. - Search Analysis Module: This module shows you what your customers are looking for in the shop and how many results the shop has shown you. So here you can see where the trends are heading and what you are still missing from the range. - Google Analytics Module: With the Google Analytics Module, the PepperShop offers a very extensive integration of the shop with over 300 different information data for the Google Analytics analysis software provided by Google free of charge. With this tool you see the visitor flows in the shop and can realize extensive reports. It is very important to keep an eye on where visitors leave the shop the most. From this it can be concluded whether, for example, marketing campaigns address the wrong prospective customers (jump always already in the article catalogue) or whether relevant payment methods are missing (jump always in the cash register). 
Enterprise Version Interfaces If the Enterprise version is used, all current interface descriptions (processes, CSV data transfer, XML order export, …) can be downloaded from the following address: Your first order has arrived? We have written instructions on how to process incoming orders and what to pay attention to. Get the latest information The e-commerce trade takes place in a rapidly changing environment. New technologies are constantly needed or presupposed or there are new legal framework conditions. We recommend the following two steps: - Register for the PepperShop Newsletter - Read the PepperShop Blog or subscribe to the RSS-Feed of it - Exciting are also always contributions on the PepperShop Facebook or Google+ page or the news service Twitter messages. Your PepperShop grows with you As time goes by, you will make further demands on your e-commerce solution, the PepperShop grows with it. Expand your shop with an ERP connection or with modules: - Buying customers: - Glarotech hosting customers: - Cash solution: In contrast to other shop solutions, the PepperShop also offers a project business. Have your PepperShop reprogrammed according to your needs in order to optimally integrate processes or to work more productively with new interfaces thanks to automation. Just ask: [email protected]. Support - for questions about the PepperShop often a click on ‘Show help’ or ‘Help archive’ helps. It is also possible that a specific manual has been created for your request, see PepperShop manuals. - answers to frequently asked questions can be found in the FAQ (frequently asked questions), for example the system requirements. These can be accessed directly via ‘Help & News’ > ‘Help Archive’. - probably your question has already been answered in the extensive PepperShop Forums. - many instructions are available for download. - you may already find a solution described in the Shopsystem Feature List for a particular functionality you are looking for, or it may be available as an extension module in the PepperShop Sales System - If you need direct support via phone or e-mail, Support Packages are available. Note: Modules are not available for the basic version of the PepperShop. Until mid August 2016 PepperShop still called itself PhPepperShop. Except for the name, which is now a bit more catchy and the technology PHP no longer carries as an acronym in the name, everything remains the same. Also in the new logo a pepper appears schematically as a symbol for a sharp software. ↩︎ Realize variants as own articles, among other things with own stock: Manual Parent / Subarticle. ↩︎ See also the following design documents: Design possibilities ↩︎ The PepperShop automatically converts the image size. Settings, see Layout Management ↩︎ We recommend not to use the fields ‘Active from, Active to, Action from, Action to’ for new entries. The reason for this is that the PepperShop requires an American date format: (DDDD-MM-YY HH:MM: SS) - the programs Excel/OpenOffice.org adapt this however fully automatically to the formats used by the user and supply then invalid values. This can be avoided by selecting the ‘Text’ type as the number type in the respective column. ↩︎ Shop administration: “Articles” > “Mass Mutations” > “Sitemap” ↩︎
https://docs.peppershop.com/latest/en/first-steps/
2022-05-16T14:40:53
CC-MAIN-2022-21
1652662510138.6
[]
docs.peppershop.com
Welcome to PyUnity’s documentation! Version 0.8.4 (in development) PyUnity is a pure Python 3D Game Engine that was inspired by the structure of the Unity Game Engine. This does not mean that PyUnity is a set of bindings for the UnityEngine. However, this project has been made to be useful to any programmer, beginner or advanced, novice or veteran. Disclaimer As we have said above, this is not a set of bindings for the UnityEngine, but a pure Python library to aid in making 3D games in Python. Installing To install PyUnity for Linux distributions based on Ubuntu or Debian, use: > pip3 install pyunity To install PyUnity for other operating systems, use pip: > pip install pyunity Alternatively, you can clone the repository to build the package from source. The latest version is on the master branch and you can build it as follows: > git clone > git checkout master > python setup.py install The latest builds are on the develop branch, which is the default branch. These builds are sometimes broken, so use them at your own risk. > git clone > python setup.py install Its only dependencies are PyOpenGL, PySDL2, GLFW, Pillow and PyGLM. Microsoft Visual C++ Build Tools are required on Windows for building it yourself. Links For more information check out the API Documentation. If you would like to contribute, please first see the contributing guidelines, check out the latest issues and make a pull request.
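As a rough idea of what working with the engine looks like, the sketch below creates a scene containing a single cube and runs it. The API names are assumptions drawn from the project's example material and may differ between versions, so check them against the API Documentation before relying on this snippet.

# Hypothetical quick-start sketch -- verify names against the API docs for your version
from pyunity import SceneManager, GameObject, MeshRenderer, Mesh, Material, RGB

scene = SceneManager.AddScene("Scene")        # create an empty scene
cube = GameObject("Cube")                     # an empty object to hold components
renderer = cube.AddComponent(MeshRenderer)    # attach a renderer component
renderer.mesh = Mesh.cube(2)                  # a 2-unit cube mesh
renderer.mat = Material(RGB(255, 0, 0))       # a plain red material
scene.Add(cube)
SceneManager.LoadScene(scene)                 # open a window and run the scene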
https://docs.pyunity.x10.bz/en/latest/index.html
2022-05-16T16:08:37
CC-MAIN-2022-21
1652662510138.6
[]
docs.pyunity.x10.bz
Option Actions VNgen features a simple, yet robust options menu system which can be used for both in-game dialog choices and your main menus themselves. Taking action based on which choice is selected might seem complex at first, but the open-ended nature of VNgen options is precisely what makes them so powerful. Simply put, there is no predetermined outcome when a user selects a given option. Rather, that option's ID is stored in memory where it can be read and used in conditional statements like if and switch to execute code of your own based on the result. Check out the Getting Started guide for a full breakdown of this process. In this section we'll examine available options functions so you'll be off to creating interactive dialogs in no time!
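As a minimal sketch of that pattern, suppose your option code has stored the selected option ID in a variable (here called my_choice, a made-up name standing in for however you retrieve the result in your version of VNgen; the reference pages that follow cover the actual functions). Branching on it is then ordinary GML:

// 'my_choice' is a placeholder for the stored option ID retrieved from VNgen
switch (my_choice) {
    case "accept_quest":
        // run the dialog/actions for accepting
        break;
    case "decline_quest":
        // run the dialog/actions for declining
        break;
    default:
        // fall back to a neutral branch
        break;
}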
https://docs.xgasoft.com/vngen/reference-guide/actions/options/intro-options/
2022-05-16T15:16:26
CC-MAIN-2022-21
1652662510138.6
[]
docs.xgasoft.com
What is an ASV file? An ASV file is an Adobe Photoshop Selective Color settings file. It contains color settings for CMYK (Cyan, magenta, yellow, and black) values presets that can be applied to raster images such as PNG and BMP. ASV files can be exported and saved for sharing with other users over the internet. These can be loaded in Adobe Photoshop through the Selective Color dialog by using the Image->Adjustments->Selective Color option. Once loaded, these presets can be modified as well using Adobe Photoshop. ASV files can be loaded using relative or absolute method. ASV File Format - More Information ASV files are saved in binary file format and their contents are not in human readable form.
https://docs.fileformat.com/system/asv/
2022-05-16T14:48:05
CC-MAIN-2022-21
1652662510138.6
[]
docs.fileformat.com
The Infoblox 2205 Series are 2-U appliances that you can efficiently mount in a standard equipment rack. For rack mounting information, see Rack Mounting Procedures. Front Panel Infoblox 2205 Series front panel components include the LCD (liquid crystal display) panel and navigation buttons, communication ports, and hard disk drives, as shown in Figure 1 and described in Table 1. The hard disk drives are concealed under a removable drive bay door. You must remove the door to access the hard disk drives, as shown in Figure 1. For explanations of Ethernet port LEDs, and console and Ethernet port connector pin assignments, see Ethernet Port LEDs and Interface Connector Pin Assignments. Table 1 describes the Infoblox 2205 Series front panel components. Figure 1 Infoblox 2205 Series, Front View without the Drive Bay Door Table 1 Front Panel Components CC and FIPS for TE-2225 and TE-V2225 The Trinzic TE-2225 and TE-V2225 appliances can be made compliant with CC and FIPS 140-2 security standards. Both CC and FIPS give assurance that the product satisfies a set of internationally recognized security measures. CC is a set of rules and specifications to evaluate the security of Information Technology (IT) products. FIPS is a U.S government computer security standard that is designed to validate product modules that use cryptography. This is necessary to maintain the integrity and confidentiality of the end-user information that is stored, processed, and transferred by the product module. To ensure that your appliance is CC and FIPS compliant, make sure that your hardware and software settings match the evaluated configuration that was certified for both CC and FIPS. For information about how to configure CC and FIPS, refer to the Infoblox NIOS Administrator Guide. Infoblox provides tamper evident FIPS labels that you must affix on the HDD cover, all PSU and fan canisters, over the IPMI port of the appliance to make it FIPS compliant. You must install the FIPS tamper evident labels correctly onto the device for compliance with FIPS. This label is valid for Trinzic TE-2225 appliances only. Note that these labels are not required for CC. Clean the chassis before affixing tamper evident FIPS labels. Apply these labels as shown in the figures below: FIPS label Install a sticker on the drive bay cover as shown in the picture Install a sticker on both of the back corners of the top cover as shown in this picture Install stickers for each of the fans into the chassis as shown in this picture Install a sticker for each power supply module as shown in this picture Install a sticker covering the IPMI port as shown in this picture Ethernet Port LEDs To see the link activity and connection speed of an Ethernet port, you can look at its Activity and Link LEDs. Figure 2 shows the status the LEDs convey through their color and illumination (steady glow or blinking). Figure 2 Ethernet Port LEDs (inc. SFP+ interfaces where noted) SFP/SFP+ Interface Support All models in the Infoblox 2205 Series support optional interfaces to accept SFP transceiver modules, for 1GbE optical connectivity. Table 2 summarizes SFP and SFP+ support for appliance models in the Infoblox 2205 Series. Note: You cannot add SFP/SFP+ support after you have purchased an appliance model that does not have the SFP/SFP+ interfaces pre-installed. Contact your Infoblox representatives if you are interested in purchasing appliances that support SFP/SFP+ interfaces. 
To support connectivity to 10 Gigabit networking infrastructure, Infoblox also offers versions of the Trinzic TE-2215, TE-2225, Network Insight ND-2205 and Trinzic Reporting TR-2205 that provide 10-Gigabit Ethernet (10GbE) interfaces accepting SFP+ transceiver modules, for 10GbE RJ-45 copper or optical connectivity. The Trinzic TE-2215 and Trinzic TE-2225 appliances support four active 10GbE interfaces in the optional 1GbE SFP and 10GbE SFP+ configurations. Other appliances in the Infoblox 2205 Series, comprising the ND-2205 and TR-2205, support three active interfaces in the optional 1GbE SFP and 10GbE SFP+ configurations. The port designated HA for these three models is inactive for these appliances. Order of ports from left to right is otherwise the same. The Advanced Appliance PT-2205 supports accelerated 10GbE connectivity in a factory-only configuration, and supports HA. In optional configurations for the Infoblox 2205 Series (any appliance that does not use the internal Ethernet ports), the Infoblox 1GbE SFP or 10GbE SFP+ ports replace the functionality in the original system MGMT, LAN1, HA and LAN2 ports, thereby disabling the built-in MGMT, LAN1, HA and LAN2 ports. 10GbE support accepts Infoblox-provided SFP+ 10GbE Short Range and Long Range transceivers, Cisco SFP+ Direct Attach 10GSFP+Cu, or HP HPJ9283B SFP+ Direct Attach 10GSFP+Cu transceivers. You may mix media types in the set of ports (e.g., one copper SFP in the MGMT port and two or three fiber SFPs). SFP and SFP+ transceivers also may be used in a mixed configuration in a 4-Port 10GbE system. One example involves installing 10GbE SR SFP+ transceivers in the LAN1 and LAN2 ports for the Trinzic TE-2215 or TE-2225 appliance, and installing 1GbE SFP copper transceivers in the MGMT and HA interfaces. Note: For ND-2205 and TR-2215 models configured with 1GbE SFP or 10GbE SFP+ interfaces, the HA port is reserved for future use and cannot be used for network applications. Order of ports from left to right is otherwise the same. Table 2 SFP/SFP+ Interfaces Support Summary 1 – With optional 1GbE or 10GbE line card. Disables internal RJ-45 ports. 2 – Uses 1GbE or 10GbE hardware acceleration for DNS security threats targeting DNS caching and authoritative applications. 3 – Only in appliance configurations with optional SFP/SFP+ ports. See the section Field Replaceable Units for specific information on part numbers, availability, and device compatibility. Interface Connector Pin Assignments An Infoblox Infoblox 2205 Series appliance has three types of ports on its front panel: - USB port (reserved for future use) - Male DB-9 console port - RJ-45 10Base-T/100Base-T/1000Base-T auto-sensing gigabit Ethernet ports Figure 3 describes DB-9 and RJ-45 connector pin assignments. The DB-9 pin assignments follow the EIA232 standard. To make a serial connection from your management system to the console port, you can use an RJ-45 rollover cable and two female RJ-45-to-female DB-9 adapters, or a female DB-9-to-female DB-9 null modem cable. The RJ-45 pin assignments follow IEEE 802.3 specifications. All Infoblox Ethernet ports are auto-sensing and automatically adjust to standard straight-through and cross-over Ethernet cables. Figure 3 DB-9 Console Port and RJ-45 Port Pinouts Appliance Rear Panel The Infoblox 2205 Series appliances ship with dual AC power supplies and six fan modules. The power supplies and fan modules are field replaceable. 
The power supplies are also hot-swappable so you can replace any one of them at a time without disrupting the operations of the appliance. Figure 4 Infoblox 2205 Series, Rear View Table 3 Rear Panel Components
https://docs.infoblox.com/pages/viewpage.action?pageId=67084476&navigatingVersions=true
2022-05-16T16:14:02
CC-MAIN-2022-21
1652662510138.6
[]
docs.infoblox.com
Scripts settings are located under My Calendar > Design > Scripts. Disabling scripts will break calendar interactions. This feature is intended for advanced users who wish to provide their own custom scripting. Script Manager Insert scripts on these pages (comma separated post IDs) Find the ID of the WordPress page the calendar is on and add it into the field. This restricts scripts from loading on other pages. Disable Grid JS Disables the event popup behavior in the grid calendar; event details are shown automatically and you cannot click the red circle with an x in it to close them. Disable List JS Shows the list, but the dates in the list are not clickable. (The + sign is not shown.) Disable Mini JS Disables popup interactions in the mini calendar view. Disable AJAX Navigating the calendar (e.g., previous/next links, selecting a month) requires a new page load rather than dynamically navigating within the same page.
https://docs.joedolson.com/my-calendar/category/design/scripts/
2022-05-16T14:17:33
CC-MAIN-2022-21
1652662510138.6
[]
docs.joedolson.com
The post-upgrade activities include the post-upgrade testing cycle and cleanup. To finalize the upgrade: Perform the full verification cycle of your MCP OpenStack deployment. Verify that the following variables are set in the classes/cluster/<cluster_name>/infra/init.yml file:

parameters:
  _param:
    openstack_upgrade_enabled: false
    openstack_version: pike
    openstack_old_version: ocata

Refresh pillars: salt '*' saltutil.refresh_pillar Remove the test workloads/monitoring. Remove the upgrade leftovers that were created by applying the <app>.upgrade.post state: Log in to the Salt Master node. Get the list of all upgradable OpenStack components. For example: salt 'cfg01*' config.get orchestration:upgrade:applications --out=json Compare the list of upgradable OpenStack components with the list of installed applications for each target node. Apply the following states to each target node for each installed application. Before running the script, verify that you define the $cluster_domain and $salt_master_hostname variables.

#!/bin/bash
#List of formulas that implements upgrade API sorted by priority
all_formulas=$(salt cfg01* config.get orchestration:upgrade:applications --out=json | \
  jq '.[] | . as $in | keys_unsorted | map ({"key": ., "priority": $in[.].priority}) | sort_by(.priority) | map(.key | [(.)]) | add' | \
  sed -e 's/"//g' -e 's/,//g' -e 's/\[//g' -e 's/\]//g')
#List of nodes in cloud
list_nodes=`salt-key | grep $cluster_domain | grep -v $salt_master_hostname | tr '\n' ' '`
# The tail of this script was truncated in the source; the loop below is a
# reconstruction that applies the <app>.upgrade.post state for every application
# installed on each node.
for node in $list_nodes; do
  #List of applications installed on the given node
  node_applications=$(salt $node pillar.items __reclass__:applications --out=json | \
    jq 'values |.[] | values |.[] | .[]' | tr -d '"' | tr '\n' ' ')
  for component in $all_formulas; do
    if [[ " $node_applications " == *" $component "* ]]; then
      salt $node state.apply $component.upgrade.post
    fi
  done
done

Set the following variables in classes/cluster/<cluster_name>/infra/init.yml:

parameters:
  _param:
    openstack_upgrade_enabled: false
    openstack_version: pike
    openstack_old_version: pike

Refresh pillars: salt '*' saltutil.refresh_pillar
https://docs.mirantis.com/mcp/q4-18/mcp-operations-guide/update-upgrade/major-upgrade/upgrade-openstack/os-ocata-pike-upgrade-detailed/post-upgrade-o-p.html
2022-05-16T15:31:02
CC-MAIN-2022-21
1652662510138.6
[]
docs.mirantis.com
dimod.generators.ran_r¶ - ran_r(r: int, graph: Union[int, Tuple[Collection[Hashable], Collection[Tuple[Hashable, Hashable]]], Collection[Tuple[Hashable, Hashable]], networkx.classes.graph.Graph], cls: None = None, seed: Optional[int] = None) → dimod.binary.binary_quadratic_model.BinaryQuadraticModel [source]¶ Generate an Ising model for a RANr problem. In RANr problems all linear biases are zero and quadratic values are uniformly selected integers between -r to r, excluding zero. This class of problems is relevant for binary quadratic models (BQM) with spin variables (Ising models). This generator of RANr problems follows the definition in [Kin2015]. - Parameters r – Order of the RANr problem. graph – Graph to build the BQM on. Either an integer, n, interpreted as a complete graph of size n, a nodes/edges pair, a list of edges or a NetworkX graph. cls – Deprecated. Does nothing. seed – Random seed. - Returns A binary quadratic model. Examples:
>>> import networkx as nx
>>> K_7 = nx.complete_graph(7)
>>> bqm = dimod.generators.random.ran_r(1, K_7)
>>> max(bqm.quadratic.values()) == -min(bqm.quadratic.values())
True
- Kin2015 James King, Sheir Yarkoni, Mayssam M. Nevisi, Jeremy P. Hilton, Catherine C. McGeoch. Benchmarking a quantum annealing processor with the time-to-target metric. Deprecated since version 0.10.13: The cls keyword argument will be removed in 0.12.0. It currently does nothing.
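The graph argument can also be given as a plain integer, which is interpreted as a complete graph of that size (the sizes below are arbitrary and only illustrate the call):

>>> bqm = dimod.generators.ran_r(3, 10, seed=42)
>>> bqm.num_variables
10
>>> set(bqm.linear.values())   # every linear bias is zero in a RANr problem
{0.0}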
https://docs.ocean.dwavesys.com/en/latest/docs_dimod/reference/generated/dimod.generators.ran_r.html
2022-05-16T15:36:39
CC-MAIN-2022-21
1652662510138.6
[]
docs.ocean.dwavesys.com
Step inside the all powerful M1A2 Abrams, Merkava Mk. 4, the Leopard 2A7 or the T-90 Battle Tank! Replicated, with sounds and particles! 🎡 More tanks 🎡 Playable Demo - M1A2 Playable Demo - T90 Playable Demo - Leopard Playable Demo - Merkava 🎥 Overview Video 🎥 📚GIF Examples - M1A2 Abrams📚 📚GIF Examples - T90-A MBT📚 📚GIF Examples - Leopard 2A7📚 📚GIF Examples - Merkava Mk. 4📚 This asset features a fully drivable tank controller, with rigged tracks, wheels, hydraulics, and more! It is possible to aim and target the cannon and machine-gun separately, with sounds and visual effects included! Four texture variations are included. These tanks are sure to be combat-ready, no matter the environment, with desert, forest, and snow camo variants! These features are all multiplayer-ready. All inputs are configured, simply launch the project and press play, to be in control of your own tank. It is easy to modify the configuration of the blueprint, for more arcadey or more realistic driving, the choice is yours! In collaboration with Gerhald3D. Plugin Controller Documentation: Enhanced Vehicle Plugin Integration Discord - Need assistance? Send me a message! v1.0 - released v1.01 - 10/01/2022 v1.02 - 16/01/2022 v1.03 - 19/01/2022 v1.04 - 02/02/2022 v1.05 – 07/03/2022 v1.06 - 28/03/2022 An advanced tank controller is included, that includes: Additional Features Support for the Offworld Defense Simulations tank plugin is also included. Just enable the plugin, install the bonus files, and enjoy the high-quality track, wheel, and hydraulic suspension! (You do not need this plugin to use this tank, this is included as an alternative vehicle controller or for those already using this system) Technical Features: Technical Details: LODs: Yes Number of Blueprints: 30 Network Replicated: (Yes)
https://docs.unrealengine.com/marketplace/zh-CN/product/modern-tanks-advanced-tank-blueprint-four-pack
2022-05-16T14:56:01
CC-MAIN-2022-21
1652662510138.6
[]
docs.unrealengine.com
Find Samples on GitHub¶ Here’s a list of what you’ll find on GitHub. Interactive Data Analysis Samples and Tools¶ - getting-started-bigquery - Example queries to show how to get started with genomic data in BigQuery. - codelabs - Codelabs demonstrating usage of several tools and systems on genomic data. - bigquery-examples - Advanced BigQuery examples on genomic data. - datalab-examples - Example Google Cloud Datalab iPython Notebooks for genomics use cases. - bioconductor-workshop-r - R package containing instructional materials for using GoogleGenomics Bioconductor and bigrquery packages. - api-client-r - An R package for Google Genomics API queries. - gatk-tools-java - Tools for using Picard and GATK with the Google Genomics API. - beacon-go - AppEngine implementation of the Beacon API from the Global Alliance for Genomics and Health written in Go. Cluster Computing Data Analysis Samples¶ - dataflow-java - Google Cloud Dataflow pipelines such as Identity-By-State as well as useful utility classes. - spark-examples - Apache Spark jobs such as Principal Coordinate Analysis. - grid-computing-tools - Scripts and samples for using Grid Engine on Google Cloud Platform. Working with the Google Genomics API¶ - getting-started-with-the-api - Examples of how to get started with the Google Genomics API in many languages. - utils-java - Common Java files for Google Genomics integrations. Data Visualization Application Samples¶ - api-client-javascript - A simple web application that demonstrates how javascript can be used to fetch data from the Google Genomics APIs. - api-client-android - A sample Android app that calls the Google Genomics API. - api-client-python - Google AppEngine implementation of a simple genome browser that pulls data from the Google Genomics API. - api-client-r - An R package for Google Genomics API queries. - See the Shiny example. - See the ggbio example. Data Analysis Application Samples¶ - denovo-variant-caller-java - A de novo variant caller which uses information from a mother, father and child trio with a Bayesian inference method. - linkage-disequilibrium - A suite of Java tools to calculate linkage disequilibrium between variants and load the results into BigQuery and BigTable. Miscellaneous¶ - gce-images - Scripts that create Google Compute Engine images and Docker containers with popular genomics software installed.
https://googlegenomics.readthedocs.io/en/staging-2/github_index.html
2022-05-16T15:38:46
CC-MAIN-2022-21
1652662510138.6
[]
googlegenomics.readthedocs.io
ACID ingest patterns
Understanding Hive ACID ingest patterns helps you adopt one that fits best. You gain an understanding of how to build a pipeline that keeps the original data and builds or updates a more efficient table for recurring READ operations.
Although Hive handles compactions, micro-batching appends, and Hive streaming writes, you still have to avoid inserting small records into ACID tables. Using ACID does not correct a bad ingest design. If you perform micro-inserts and create many delta directories, at some point the compaction system, and other components, such as NameNodes, have to reconcile the delta files. Eventually, compaction consolidates files, but if you have hundreds of these delta files before compaction even starts, Hive needs to work hard in the background. Heavy compute resources and metastore resources are needed. The data you ingest into ACID tables using the following pattern must be of a reasonable size.
ACID pattern 1
ACID pattern 1 characteristics are:
- Handles compactions in the background, but you still have to understand the impact of deltas.
- Performs well for micro batch appends and Hive streaming.
- Works best when you partition on business need, not ingestion.
- Frequently partitioned to optimize file size and access (pruning).
- Supports adding a batch-id field to record ingest events.
- Not designed for online transaction processing (OLTP).
Consider how quickly you need to access the data. If you need immediate access to data, look at how many queries your organization actually issues to access data received within the last 5 minutes, for example. You might realize that you rarely need to access your data so quickly, but if you do, consider other technologies, such as Impala with Kudu or HBase. If you need to track batch operations, for example by associating a batch ID with every record, add a second-level partition element. Add a batch field to the table with the batch ID. You can perform delete operations against ACID tables to remove a batch and replay it. ORC and CRUD functionality repair that table based on a replay or removal of an insert. Hive does not satisfy OLTP requirements.
ACID pattern 2
ACID pattern 2 has the following characteristics:
- Designed to be business-, not ingest-centric.
- Supports highly granular partitions, for example YY-MM-DD-HH vs YY-MM.
- Achieves efficient content size per partition to reduce file counts.
Beware of dynamic partitions and avoid cross-partition distribution of data. If your design requires Hive to cross partitions unnecessarily when you insert data into a table, collapse year-month-day-hour partitions down to year-month if possible.
ACID pattern 3
To build a pipeline that keeps the original data and builds or updates a more efficient table for recurring READ operations, use ACID pattern 3.
ACID pattern 3 represents the sweep process. This pattern keeps track of historical changes. You can lose information that has business value if you do not keep track of the original transactional elements. Consider having an ingest table with original values and also a change data capture (CDC) table. For example, if you have 2 million customers making thousands of changes a day to an ACID table target, you lose all the thrashing that might have happened if you do not capture changes. Using a sweep process not only consolidates files to alleviate ACID performance problems, but also supports data change analytics. If a consumer changes their address frequently, say 50 times a day, perhaps fraud is indicated.
The cost of space you need for historical data is often worth the expense. A portion of the sweep pattern, shown below, looks similar to the classic ingest pattern. You use a non-ACID table with partitions. Instead of inserting data into a non-ACID table every 15 minutes, for example, you instead sweep data from the ingest table into the ACID table every hour or two. You use the ACID table as your consumer table, which has collapsed partitions. Hive performs compaction on the ACID table. Another approach is to use an ACID table as the staging place for ingesting data or other data pipelines for writing and aggregating data. Turn off auto-compaction, or raise thresholds, to enjoy transaction isolation of your streaming with no overhead.
In summary, use ACID pattern 3 as follows:
- Use a non-ACID table to stage micro batches to a final ACID table.
- Use a more aggregate partition strategy (if required) on the final table (YYMM vs. YYMMDD_HH).
- Allow ACID to undergo compaction.
- Direct consumers to the final ACID table for better performance and less system resource overhead.
- Include a batch ID in the schema to support rollback.
ACID CDC pattern
The ACID CDC pattern has the following characteristics:
- Used for the initial seed of new records.
- Used for follow-up updates and inserts.
- Used for follow-up deletes.
- ACID deltas track updates and deletes, rolled up in SELECT.
- Relies on major compaction to reconcile updates and deletes.
For changing dimension tables, the ACID CDC pattern supports a large partition, late-arriving data, or perhaps no partition at all. Using the ACID CDC pattern, you might have just a consumer table. Inserts occur, and then updates to the inserts. All changes are captured as delta records. A major compaction reconciles inserts and updates to give you a new base. The next update creates another delta in this case.
When you insert into an ACID table, you must deduplicate the records before making insertions, or deduplicate the records before updates. You cannot skip the deduplication process because there is no enforceable primary key constraint that takes care of eliminating duplicate copies of data. If you put a record in a transactional table three times before it is pushed into an ACID table, the record comes into Hive three times: first as an insert, next as an update, and finally as another update before you push your changes into the ACID table through the merge process. You must reconcile inserts and updates before the merge to make sure you get only one operation; otherwise, you get duplicate records.
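A sketch of that sweep-and-merge step follows. Every table, column, and partition name here is hypothetical; it only illustrates the dedup-then-MERGE idea described above and would need to be adapted to your schema:

  -- Hypothetical schemas: ingest_tx(customer_id, address, batch_id, event_ts, ingest_date)
  -- is the non-ACID staging table; customer_acid(customer_id, address, batch_id, event_ts)
  -- is the transactional (ACID) consumer table.
  MERGE INTO customer_acid AS t
  USING (
    SELECT customer_id, address, batch_id, event_ts
    FROM (
      SELECT customer_id, address, batch_id, event_ts,
             ROW_NUMBER() OVER (PARTITION BY customer_id
                                ORDER BY event_ts DESC) AS rn
      FROM ingest_tx
      WHERE ingest_date = '2022-05-16'          -- the staging partition being swept
    ) latest
    WHERE rn = 1                                -- deduplicate before merging
  ) AS s
  ON t.customer_id = s.customer_id
  WHEN MATCHED THEN UPDATE SET
    address = s.address, batch_id = s.batch_id, event_ts = s.event_ts
  WHEN NOT MATCHED THEN INSERT
    VALUES (s.customer_id, s.address, s.batch_id, s.event_ts);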
https://docs.cloudera.com/cdp-private-cloud-upgrade/latest/migrate-hive-workloads/topics/hive-acid-patterns.html
2022-05-16T14:50:17
CC-MAIN-2022-21
1652662510138.6
[]
docs.cloudera.com
+ Add Copy is located under My Calendar > Add Event > + Add Copy. In the following screenshot we can see the original event and two copies. When clicking + Add Copy, a new copy of the event will automatically be added containing the same Start and End time as in the original event. Make the needed adjustments. Click the bottom + Add Copy to add additional copies of the event. Above we can see that I have added two copies (duplicates), using the same time but changing the dates. Clicking to Cancel an Event Copy will give the option to "Restore" (keeping the event copy) or "Remove" (deleting the copy). "This is a multi-day event" will show multiple events as one event overview on the frontend if used in relation with the Events widget/shortcode or Today's Events widget/shortcode. When the rest of the event information has been filled out, you decide to Save Draft or Publish. Here is an example of saving the above original event and the additional two copies. A total of three events are now published. Here we can choose to View Event (frontend) or Edit Event, or we might click the Events link seen in the left WordPress menu. The following documentation shows an example of using + Add Copy and Repetition Pattern.
https://docs.joedolson.com/my-calendar/category/events/add-copy/
2022-05-16T14:32:51
CC-MAIN-2022-21
1652662510138.6
[]
docs.joedolson.com
Links may not function; however, this content may be relevant to outdated versions of the product.
Modifying unsupported browser lists and error messages
Pega 7.1.9 and later versions support Google Chrome, Apple Safari, Mozilla Firefox, and Internet Explorer versions 9 and later. Application users not using a supported browser need to upgrade to a supported browser. If you use applications that are rendered in quirks mode, which enables the Pega 7 Platform to correctly display and render non-HTML5 standard user interfaces, you must update them to standards-based HTML5 user interfaces when you upgrade to Pega 7.1.9.
Additional checks have also been introduced in Pega 7.1.9 to determine whether application users are using a supported browser. Checks are performed on every harness rendering and use a when rule to control which browsers are supported. To find the list of supported browsers, see the Platform Support Guide. You can identify which components of your application user interface are not HTML5 standards-based by clicking.
Modifying the list of supported browsers
You can modify the restriction on unsupported browsers by editing the pyUnsupportedBrowsers rule in your application ruleset.
- Open the pyUnsupportedBrowsers rule. The default expressions display the unsupported browser message for Internet Explorer versions 5 through 8.
- Edit the Logic string to select which browsers receive the unsupported browser message.
- Save the rule.
The pyUnsupportedBrowsers Advanced tab
Editing the unsupported browsers message
When users log in to a Pega 7 Platform application from an unsupported browser, they see the following message:
The unsupported browser message
You can edit the content of this message to better reflect your application's browser support.
- Open the pyUnsupportedBrowserLoginMessage rule.
- Locate the message text within the HTML code.
- Edit the text to reflect your application's browser requirements.
- Save the rule.
Disabling Compatibility View settings
When enabled in Internet Explorer versions 9, 10, and 11, the Compatibility View settings effectively change the user agent of the browser to Internet Explorer 7. As a result, the Pega 7 Platform application detects that the browser is unsupported. To avoid this issue, disable the Compatibility View settings by clicking in Internet Explorer.
https://docs.pega.com/modifying-unsupported-browser-lists-and-error-messages
2022-05-16T16:35:22
CC-MAIN-2022-21
1652662510138.6
[]
docs.pega.com
How to secure and harden your Splunk platform instance
Use this checklist as a roadmap to help you secure your Splunk platform installation and protect your data.
Set up authenticated users and manage user access on the Splunk platform
You can harden a Splunk platform deployment by carefully managing who can access the deployment at a given time.
- Set up users and configure roles and capabilities to control user access. See About configuring role-based user access.
- Configure user authentication with one of the following methods:
  - The native Splunk authentication scheme. See Set up user authentication with Splunk's built-in system.
  - Splunk platform authentication tokens, which are based on the native authentication scheme. Tokens let you provide access to the instance through web requests to Representational State Transfer (REST) endpoints. See Set up authentication with tokens.
  - The Lightweight Directory Access Protocol (LDAP) authentication scheme. See Set up user authentication with LDAP.
  - Single Sign-on with Security Assertion Markup Language (SAML).
Additional hardening options for Splunk Enterprise only
- Administrator credentials provide unrestricted access to a Splunk platform instance and should be the first thing you change and secure. See Secure your Admin password.
- Access control lists prevent unauthorized user access to your Splunk platform instance. See Use Access Control Lists.
- Splunk Enterprise has the following additional authentication options:
  - Single sign-on with multi-factor authentication (MFA)
  - Proxy Single Sign-on
  - Reverse-proxy Single Sign-on with Apache
  - A scripted authentication API for use with an external authentication system, such as Pluggable Authentication Modules (PAM) or Remote Authentication Dial-In User Service (RADIUS). See Set up user authentication with external systems.
See Charts in the Splunk Analytics Workspace in the Using the Splunk Analytics Workspace manual.
Audit your Splunk Enterprise instance regularly
Audit events provide information about what has changed in your Splunk platform instance configuration. They give you the where and when, as well as the identity of who implemented the change.
- Audit your system regularly to monitor user and administrator access, as well as other activities that could tip you off to unsafe practices or security breaches.
- Keep an eye on activities within your Splunk platform deployment.
https://docs.splunk.com/Documentation/Splunk/8.0.8/Security/Hardeningstandards
2022-05-16T15:24:55
CC-MAIN-2022-21
1652662510138.6
[]
docs.splunk.com
Performance metrics
Performance metrics are measurements that quantitatively calculate your model's performance. The metrics compare what the model predicted (prediction) against what actually happened (label). The metrics supported by Superwise include: RMSE, MSE, MAE, MAPE, Accuracy, Recall, Precision, F1, Log Loss and ROC AUC. These scores are calculated from the time the metric was created, and not historically.
Pro tip: Configure your model's performance metrics as soon as you connect it to Superwise.
You can use these metrics, depending on the type of your label and prediction, as follows:
To add performance metrics, go to the Trends screen and press Add metric, then choose Set performance metric. The first things you need to do are to: (1) Define the groups of data you are measuring (2) Set the performance group name (3) Select the prediction and label entities (4) Choose your preferred metrics using the drop-down menu.
When the prediction or label type is categorical, you will have to define the positive class. A positive class will be the category you wish to present as true. All the other categories will be presented as false. For example, if the prediction categories are [Dog, Cat, Snake] and the label categories are [four legs, no legs], then the prediction's positive value will be 'Snake' and the label's positive value will be 'no legs'. If you want your positive value to be 'Dog and Cat' and the label's positive value to be 'four legs', you will need to create two different Performance groups: one for the cat and one for the dog.
After you finish setting the metric, you will be able to use it in your monitoring policies. Keep in mind that you can also add the metric when you create a new policy.
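Superwise calculates these metrics for you once the positive classes are chosen; purely as an illustration of what the positive-class mapping means, the same idea expressed with scikit-learn (reusing the Dog/Cat/Snake example above) looks like this:

  from sklearn.metrics import precision_score, recall_score

  preds  = ["Dog", "Snake", "Cat", "Snake"]                     # model predictions
  labels = ["four legs", "no legs", "four legs", "four legs"]   # ground-truth labels
  y_pred = [p == "Snake" for p in preds]                        # prediction positive class: 'Snake'
  y_true = [l == "no legs" for l in labels]                     # label positive class: 'no legs'
  print(precision_score(y_true, y_pred), recall_score(y_true, y_pred))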
https://docs.superwise.ai/docs/performance-metrics-1
2022-05-16T15:22:19
CC-MAIN-2022-21
1652662510138.6
[]
docs.superwise.ai
Operation could destabilize the runtime exception occurs when running on Windows Azure
Environment
Error Message: Internal Server Error: Operation could destabilize the runtime.
Cause: This problem occurs when you have published the application with IntelliTrace enabled.
Description: After publishing to Windows Azure, instead of returning the rendered document, the Telerik Reporting WebAPI REST service throws the described error.
Solution: In order for the Telerik Reporting WebAPI REST service to function correctly, you have to redeploy with the IntelliTrace feature turned off.
See Also: Debugging a Published Cloud Service with IntelliTrace and Visual Studio.
https://docs.telerik.com/reporting/knowledge-base/operation-could-destabilize-the-runtime-exception-occurs-when-running-on-windows-azure
2022-05-16T16:36:55
CC-MAIN-2022-21
1652662510138.6
[]
docs.telerik.com
Auto Login Credentials API overview
Use the Auto Login Credentials API to automate the login process in order to remotely run bots on locked devices. Users with an AAE_Admin role can create, update, or delete the login credentials.
Overview
When a bot is deployed from the Control Room to the Bot Runner, the bot logs in to the Bot Runner using the credentials stored in the Credential Vault. These credentials are set by the user in the Enterprise Client. Each time a user's Windows password is modified, the user must update the new password in the Enterprise Client. To automate this process, use the following URLs to create, update, or delete the login credentials that are stored in the Credential Vault.
Create auto login credential values
  {
    "Username": "<domain\\username>",
    "Windows_Username": "<domain\\your username>",
    "Windows_Password": "<your password>"
  }
Example: Create auto login credentials
Update auto login credential values
PUT
Example: Update auto login credentials
Delete auto login credential values
DELETE
Example: Delete auto login credentials
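The exact request URLs are not shown above; as a rough, hypothetical sketch of how such a call could be made from the command line (the host, endpoint path, and authentication header are placeholders, not the documented route), it might look like:

  # Hypothetical sketch only -- substitute the documented Control Room endpoint and auth token.
  curl -X POST "https://<control-room-host>/<auto-login-credentials-endpoint>" \
    -H "Content-Type: application/json" \
    -H "X-Authorization: <authentication-token>" \
    -d '{"Username":"<domain\\username>","Windows_Username":"<domain\\your username>","Windows_Password":"<your password>"}'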
https://docs.automationanywhere.com/ko-KR/bundle/enterprise-v11.3/page/enterprise/topics/control-room/control-room-api/bot-login-api-overview.html
2022-05-16T15:12:32
CC-MAIN-2022-21
1652662510138.6
[]
docs.automationanywhere.com
My Calendar Events in October 2021
October 31, 2021 (1 event)
Test Event
Category: General
October 31, 2021
This is the description for the event. This event is actually at the root level of this multisite network, but is viewable from the subsite.
https://docs.joedolson.com/my-calendar-2/?yr=2021&month=10&dy&cid=my-calendar&format=list
2022-05-16T14:32:03
CC-MAIN-2022-21
1652662510138.6
[]
docs.joedolson.com
module

Advanced Secure Mailer
Introduction: With this add-on module, you can extend the PepperShop with the Swift e-mail subsystem. With this system, S/MIME-signed e-mails can be sent with the main address of the shop. For this purpose, a corresponding e-mail SSL certificate is uploaded and configured (passphrase / intermediate certificates). The customer can now verify the e-mails as they have a valid signature.
Configuration: Install the module in the shop administration under Modules -> Module Administration. »

Cash balance. »

Consent Manager
Introduction: The General Data Protection Regulation (GDPR) came into force on 25 May 2018. Since the end of 2019, consent managers have also been a major topic for online shops. Website visitors should be able to configure in a differentiated way, for all services used, whether the cookie may be set or not, especially for third-party or tracking cookies. On-page cookies, which are required immediately to deliver the service for which the page was created, are always permitted. »

Content Slider. »

Customer Dependent Pricing / Line Discounts. »

DHL
Introduction: This module allows shop operators to send products with the package service provider DHL. This module is optionally available for the PepperShop and must be purchased separately. The module allows you to generate a setup certificate with the place and time of delivery of the package directly from the web shop.
Installation: To install the module in the PepperShop, go to the shop administration of your own PepperShop and select the menu item 'Modules' > 'Module management'. »

digitec Galaxus AG
Introduction: The Galaxus module enables you to offer your products on digitec Galaxus AG too. Manage the articles in one place, in the PepperShop. Sell on two platforms, PepperShop and digitec Galaxus. The Galaxus module provides you with the interface for this, in which you can carry out product master synchronisation, among other things, import orders from the marketplace and, if required, also forward them to a connected ERP for processing and use the digitec Galaxus AG supplier interface. »

DPD
Introduction: This module allows shop operators to send products with the package service provider DPD. This module is optionally available for the PepperShop and must be purchased separately. The module allows you to generate a shipping label with the place and time of delivery of the package directly from the web shop.
Installation: To install the module in the PepperShop, go to the shop administration of your own PepperShop and select the menu item 'Module' -> 'Module management'. »

Facebook Shopping
Introduction: For a long time now, social media has meant more than just posting, liking and sharing. The Facebook shop has been available in Switzerland since 2018. The aim is for companies to offer and sell their products directly via Facebook without the buyers having to switch to an external online shop. How do you get your products into the Facebook shop? With the Facebook Shopping module, you can easily link the products from your online shop to your Facebook and Instagram account - conveniently and quickly. »

Google Shopping
With this external PepperShop module, you can deliver items to Google Shopping (formerly Google Products, Google Product Search or Froogle).
Module installation: After copying the files you can go to the shop administration and change there to the menu 'Module'. On the left side, with the not yet installed modules, one now sees the Google Shopping module listed.
Now you have to select the Google Shopping module and click on the 'Install' button. »
https://docs.peppershop.com/latest/en/tags/module/
2022-05-16T14:30:52
CC-MAIN-2022-21
1652662510138.6
[]
docs.peppershop.com
First, you need to configure the master file. This is because all module functions require either a configured api_key (for Cloud) or a tpp_user with a tpp_password and a base_url (for Trust Platform).
For Venafi Cloud:

  venafi:
    api_key: abcdef01-2345-6789-abcd-ef0123456789
    base_url: ""    (optional)

If you don't have a Venafi Cloud account, you can sign up for one on the enrollment page.
For Venafi Platform:

  venafi:
    base_url: ""
    tpp_user: admin
    tpp_password: "Str0ngPa$$w0rd"
    trust_bundle: "/opt/venafi/bundle.pem"

It is not common for the Venafi Platform's REST API (WebSDK) to be secured using a certificate issued by a publicly trusted CA, therefore establishing trust for that server certificate is a critical part of your configuration. Ideally this is done by obtaining the root CA certificate in the issuing chain in PEM format and copying that file to your Salt Master (e.g. /opt/venafi/bundle.pem). You then reference that file using the 'trust_bundle' parameter as shown above.
For the Venafi module to create keys and certificates it is necessary to enable external pillars. This is done by adding the following to the /etc/salt/master file:

  ext_pillar:
    - venafi: True

venafi.request
This command is used to enroll a certificate from Venafi Cloud or Venafi Platform.
minion_id: ID of the minion for which the certificate is being issued. Required.
dns_name: DNS subject name for the certificate. Required if csr_path is not specified.
csr_path: Full path name of certificate signing request file to enroll. Required if dns_name is not specified.
zone: Venafi Cloud zone ID or Venafi Platform folder that specifies key and certificate policy. Defaults to "Default". For Venafi Cloud, the Zone ID can be found in the Zone page for your Venafi Cloud project.
org_unit: Business Unit, Department, etc. Do not specify if it does not apply.
org: Exact legal name of your organization. Do not abbreviate.
loc: City/locality where your organization is legally located.
state: State or province where your organization is legally located. Must not be abbreviated.
country: Country where your organization is legally located; two-letter ISO code.
key_password: Password for encrypting the private key.
The syntax for requesting a new certificate with private key generation looks like this:

  salt-run venafi.request minion.example.com dns_name= \
    country=US state=California loc=Sacramento org="Company Name" org_unit=DevOps \
    zone=Internet key_password=SecretSauce

And the syntax for requesting a new certificate using a previously generated CSR looks like this:

  salt-run venafi.request minion.example.com csr_path=/tmp/minion.req zone=Internet

venafi.show_cert
This command is used to show the last issued certificate for a domain.
dns_name: DNS subject name of the certificate to look up.

  salt-run venafi.show_cert

venafi.list_domain_cache
This command lists domains that have been cached on this Salt Master.

  salt-run venafi.list_domain_cache

To transfer a cached certificate to a minion, you can use the Venafi pillar. Example state (SLS) file:

  /etc/ssl/cert/:
    file.managed:
      - contents_pillar: venafi:
      - replace: True

  /etc/ssl/cert/:
    file.managed:
      - contents_pillar: venafi:
      - replace: True

  /etc/ssl/cert/:
    file.managed:
      - contents_pillar: venafi:
      - replace: True
https://docs.saltproject.io/en/3001/topics/venafi/index.html
2022-05-16T16:11:41
CC-MAIN-2022-21
1652662510138.6
[]
docs.saltproject.io
If Airlock Secure Session Transfer (SST) is configured, this chapter can be skipped.
Procedure-related prerequisites
- None.
Instruction
1. Go to: Application Firewall >> Reverse Proxy.
2. Edit the sharepoint.ext.virtinc.com virtual host.
3. Change to the Advanced tab.
4. Append an expiry date in the Session cookie path field.
   - Session cookie path (current): /
   - Session cookie path (new): /; expires=Fri, 31-Jan-2050 12:00:00 GMT
5. Click on the Activate button.
   - The configuration has been updated successfully.
https://docs.airlock.com/gateway/7.8/data/configurethe_3.html
2022-05-16T16:13:17
CC-MAIN-2022-21
1652662510138.6
[]
docs.airlock.com
Secure attribute
The Secure attribute is automatically set on any cookie with a "-S" suffix.
HttpOnly attribute
To prevent scripting access and mitigate cookie stealing, Airlock session cookies have the HttpOnly flag set and are not accessible to active client-side components like JavaScript, Flash, ActiveX, Java Applets, etc. The flag is also set for load balancing cookies, but not for CSRF token cookies, because the CSRF protection feature cannot work if the flag is set. The flag can be disabled for the session and load balancing cookie with Security Gate Expert Settings.
SameSite attribute
The SameSite attribute is set to the enforcement mode Lax for the Airlock session cookies to prevent CSRF attacks. The SameSite mode for CSRF token cookies is set to Strict. The attribute is not set for the load balancing cookie, because this cookie is not security-critical. The SameSite enforcement mode can be configured for all Airlock Gateway session cookies with Security Gate Expert Settings.
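As a generic illustration of those attributes in standard HTTP response headers (the cookie names are placeholders, not Airlock's actual cookie names):

  Set-Cookie: SESSION_ID=abc123; Path=/; Secure; HttpOnly; SameSite=Lax
  Set-Cookie: CSRF_TOKEN=def456; Path=/; Secure; SameSite=Strict
  Set-Cookie: LB_ROUTE=node-2; Path=/; Secure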
https://docs.airlock.com/gateway/7.8/data/cookiesecuri.html
2022-05-16T14:24:06
CC-MAIN-2022-21
1652662510138.6
[]
docs.airlock.com
You can load themes in a InfoConnect Terminal User Control (TUC) at runtime. This can be useful when you want to indicate which TUC has focus on forms that have more than one TUC. This example uses the InfoConnect .NET API Theme object to dynamically load themes in IBM terminal user controls when the focus changes. In this example, the TUC on the left has focus, as indicated by the theme with the black background. Note: The InfoConnect TUC support for Windows user control events is limited to the Focus events. To handle mouse, keyboard, or rendering events on the TUC, use the InfoConnect .NET API events.
https://docs.attachmate.com/Luminet/INFOConnect/16-1/airlines-net-prog-guide/loading-themes-in-tuc.html
2022-05-16T14:47:18
CC-MAIN-2022-21
1652662510138.6
[]
docs.attachmate.com
Include variable-width fonts: Select to increase your font choices.
Unicode zero character setting: Select a different style of zero, if supported.
Blink rate: Specify the speed at which the cursor blinks.
Graphics Cursor: Specify the shape for the graphics cursor.
Crosshair color: Specify the color of the graphics cursor crosshair.
Rule Line
Show rule lines: Select to display rule lines, which provide a visual cue to your location on the screen.
Appearance: Specify the type of rule lines: a vertical line, a horizontal line, or crosshair lines.
Terminal color: Click Change to specify the foreground (text) and background colors for all terminal session screens.
Terminal item: Click Change to specify foreground and background colors for terminal items. You can specify different colors for different types of fields: protected (read-only) and unprotected, highlighted and normal, and alpha and numeric.
Terminal graphics color: Click Change to specify the foreground (text) colors for all terminal graphics screens.
Background: Click Change to specify the background color for terminal graphics screens.
Related Topics
Manage Themes Dialog Box
Specify Trusted Locations Dialog Box
https://docs.attachmate.com/Reflection/2011/R2/TP1-help-english/settings_theme_3270_cs.htm
2022-05-16T16:11:14
CC-MAIN-2022-21
1652662510138.6
[]
docs.attachmate.com
A script file is an ASCII text file that contains a sequence of FTP (or SFTP) commands. You can use comments to explain one or more lines of code. For example:

  ;The following lines connect to the server and change
  ;the working directories to PREPRESS (client) and
  ;PRESS (server).
  open forum thomasp XOYRCNEL973L9L96O376ONMO770L35L7NMO87PM79
  lcd c:\prepress
  cd /press

You can also add a comment at the end of a command. For example:

  set transfer-disposition unique    ;do not overwrite files
  mput s*.doc                        ;copy the .DOC files

Note: Semicolons are not supported for comments in scripts supplied to the sftp command line using the -B option. Use the number sign (#) to mark comments in these batch files.
Related Topics
Edit a Script
Password Security within Scripts
Commands for Error Handling
FTP Client Scripting
https://docs.attachmate.com/Reflection/2011/R2/help/en/user-html/7472.htm
2022-05-16T15:21:18
CC-MAIN-2022-21
1652662510138.6
[]
docs.attachmate.com
By default, pools are started when the Runtime Service is started. You can use the Pool Scheduler to set up a schedule for starting and stopping pools, or require a pool to be started manually by clearing the Automatically Start Pool option on the General Configuration page. For more information about creating a pool schedule, see Creating Pool Schedules. For information about editing selections in the General Configuration page, see Modifying Settings for an Existing Pool.
https://docs.attachmate.com/Synapta/ServicesBuilder/3.0/documentation/guide_sc/content/screens_runtime_sessionpools_startstop_pr.htm
2022-05-16T15:29:47
CC-MAIN-2022-21
1652662510138.6
[]
docs.attachmate.com
EJBs are created and packaged with a resource adapter, to be deployed as an EJB JAR file and a resource adapter RAR file to your selected application server, on a per-task-file basis. Task files can contain multiple tasks. Each packaged EJB is configured for a specific instance of an application server, and each application server has a different set of configuration settings. The J2EE Session EJB wizard walks you through the configuration and deployment process for each type of application server.
https://docs.attachmate.com/Synapta/ServicesBuilder_CICS/3.0/documentation/guide_cics/content/apps_j2ee_package_pr.htm
2022-05-16T16:23:09
CC-MAIN-2022-21
1652662510138.6
[]
docs.attachmate.com
Use this page to define the authentication settings needed to connect to the host. These parameters provide access control by identifying users to the system, verifying users of the system, and authorizing access to protected resources.
Use the Following as Defaults to Perform Security Check: Select this option to specify default authentication settings. For example, with this option selected, if the incoming task does not contain a username and password, then the values specified here will be used.
RACF Authentication Parameters
Group: Enter the group name to be used if the incoming task doesn't include one.
User ID: Enter the user ID to be used if the incoming task doesn't include one.
Password: Enter the password to be used if the incoming task doesn't include one.
https://docs.attachmate.com/Synapta/ServicesBuilder_IMS/3.0/documentation/connector_ims/mcs_ims/connection_authentication_pp.htm
2022-05-16T16:20:47
CC-MAIN-2022-21
1652662510138.6
[]
docs.attachmate.com
Call Parking for Grandstream GXP-21XX Phones
Configure a programmable key on a Grandstream GXP-21XX phone under Settings > Programmable Keys with the following values:
- MPK Mode: Monitored Call Park
- Description: Your description
- Value: Parking lot extension
Transferring and Retrieving Calls
Once the queue has been created and a Call Park key has been configured as above, you may park any active call to the queue by pressing the key labeled Call Park. Once placed into the queue, the LED of Line Key 1 will turn red, denoting that a call has been parked. To retrieve the call, any user with that Call Park key can touch the key to pick up the call.
https://docs.skyswitch.com/en/articles/523-call-parking-for-grandstream-gxp-21xx-phones
2019-08-17T17:01:27
CC-MAIN-2019-35
1566027313436.2
[array(['https://cdn.elev.io/file/uploads/0tJoQ5wAjBScWN2SZmhuBkcSFX9jRDbGB-U4x2fIfSE/gWy0YePIMembANJx6Y2550qRdHa0zNtMMLy6X26dpxA/e7496b1d-1a5f-4262-a7cf-b9b8ea35ca07-Rwg.png', 'https://cdn.elev.io/file/uploads/0tJoQ5wAjBScWN2SZmhuBkcSFX9jRDbGB-U4x2fIfSE/gWy0YePIMembANJx6Y2550qRdHa0zNtMMLy6X26dpxA/e7496b1d-1a5f-4262-a7cf-b9b8ea35ca07-Rwg.png https://cdn.elev.io/file/uploads/0tJoQ5wAjBScWN2SZmhuBkcSFX9jRDbGB-U4x2fIfSE/gWy0YePIMembANJx6Y2550qRdHa0zNtMMLy6X26dpxA/e7496b1d-1a5f-4262-a7cf-b9b8ea35ca07-Rwg.png'], dtype=object) array(['https://cdn.elev.io/file/uploads/0tJoQ5wAjBScWN2SZmhuBkcSFX9jRDbGB-U4x2fIfSE/3BTC67dVnipkWKrJFpgG_YJcxKDV2U4w11p78jG7snA/Screen Shot 2015-07-22 at 10.00.18 PM-LLg.png', 'https://cdn.elev.io/file/uploads/0tJoQ5wAjBScWN2SZmhuBkcSFX9jRDbGB-U4x2fIfSE/3BTC67dVnipkWKrJFpgG_YJcxKDV2U4w11p78jG7snA/Screen Shot 2015-07-22 at 10.00.18 PM-LLg.png https://cdn.elev.io/file/uploads/0tJoQ5wAjBScWN2SZmhuBkcSFX9jRDbGB-U4x2fIfSE/3BTC67dVnipkWKrJFpgG_YJcxKDV2U4w11p78jG7snA/Screen Shot 2015-07-22 at 10.00.18 PM-LLg.png'], dtype=object) array(['https://cdn.elev.io/file/uploads/0tJoQ5wAjBScWN2SZmhuBkcSFX9jRDbGB-U4x2fIfSE/j3pxj6YFsa640q-gu__TnUVOfvnYL1vOvZy791RC4JQ/Call Park Queue Listing-6z8.jpg', 'https://cdn.elev.io/file/uploads/0tJoQ5wAjBScWN2SZmhuBkcSFX9jRDbGB-U4x2fIfSE/j3pxj6YFsa640q-gu__TnUVOfvnYL1vOvZy791RC4JQ/Call Park Queue Listing-6z8.jpg https://cdn.elev.io/file/uploads/0tJoQ5wAjBScWN2SZmhuBkcSFX9jRDbGB-U4x2fIfSE/j3pxj6YFsa640q-gu__TnUVOfvnYL1vOvZy791RC4JQ/Call Park Queue Listing-6z8.jpg'], dtype=object) array(['https://cdn.elev.io/file/uploads/0tJoQ5wAjBScWN2SZmhuBkcSFX9jRDbGB-U4x2fIfSE/j_koWyVJYqR3Xj8XP5L_LJvh4W2fSu72BmkX8aMLr6Q/User 712-A-s.png', 'https://cdn.elev.io/file/uploads/0tJoQ5wAjBScWN2SZmhuBkcSFX9jRDbGB-U4x2fIfSE/j_koWyVJYqR3Xj8XP5L_LJvh4W2fSu72BmkX8aMLr6Q/User 712-A-s.png https://cdn.elev.io/file/uploads/0tJoQ5wAjBScWN2SZmhuBkcSFX9jRDbGB-U4x2fIfSE/j_koWyVJYqR3Xj8XP5L_LJvh4W2fSu72BmkX8aMLr6Q/User 712-A-s.png'], dtype=object) array(['https://cdn.elev.io/file/uploads/0tJoQ5wAjBScWN2SZmhuBkcSFX9jRDbGB-U4x2fIfSE/IFbekoSQxTmJoCZaPqXqK3SywEQ_92DkxJXRpLo9U0U/unnamed (1', 'https://cdn.elev.io/file/uploads/0tJoQ5wAjBScWN2SZmhuBkcSFX9jRDbGB-U4x2fIfSE/IFbekoSQxTmJoCZaPqXqK3SywEQ_92DkxJXRpLo9U0U/unnamed (1)-2i4.png https://cdn.elev.io/file/uploads/0tJoQ5wAjBScWN2SZmhuBkcSFX9jRDbGB-U4x2fIfSE/IFbekoSQxTmJoCZaPqXqK3SywEQ_92DkxJXRpLo9U0U/unnamed (1)-2i4.png'], dtype=object) ]
docs.skyswitch.com
Export Bitmap Dialog Box
The Export Bitmap window lets you export a storyboard project to bitmap files in .jpg, .tga, .psd or .png format. The exported data includes a separate bitmap file for each panel in the storyboard.
NOTE: For .psd files, the transform and transition animations are not exported. However, camera moves are rendered into an independent layer.
NOTE: For tasks related to this window, see Exporting Bitmap Images.
How to access the Export Bitmap window
Select File > Export Bitmap.
Setup
Bitmap Export Parameters
Scene Names and Panel Numbers: Prints the scene names and panel numbers as an overlay on your video.
Lets you view the location and contents of the exported folder when it is ready.
https://docs.toonboom.com/help/storyboard-pro-6/storyboard/reference/dialogs/export-bitmap-window.html
2019-08-17T17:43:39
CC-MAIN-2019-35
1566027313436.2
[]
docs.toonboom.com
Macro futures::join
Polls multiple futures simultaneously, returning a tuple of all results once complete.
While join!(a, b) is similar to (a.await, b.await), join! polls both futures concurrently and therefore is more efficient.
This macro is only usable inside of async functions, closures, and blocks. It is also gated behind the async-await feature of this library, which is not activated by default.
Examples:
#![feature(async_await)]
use futures::{join, future};
let a = future::ready(1);
let b = future::ready(2);
assert_eq!(join!(a, b), (1, 2));
https://docs.rs/futures-preview/0.3.0-alpha.17/futures/macro.join.html
2019-08-17T18:05:40
CC-MAIN-2019-35
1566027313436.2
[]
docs.rs
Antialiasing OpenGL Lines T-SBADV-005-004
Everything you draw in Storyboard Pro is vector-based. When you draw in the Drawing or Camera view, the lines are displayed using OpenGL; you can enable antialiasing to smooth how they are rendered. You must restart Storyboard Pro after you change the parameters.
- Do one of the following:
  - Select Edit > Preferences (Windows) or Storyboard Pro > Preferences (macOS).
  - Press Ctrl + U (Windows/Linux) or ⌘ + U (macOS).
  The Preferences dialog box opens.
- In the Advanced tab, select the Enable option, then restart Storyboard Pro.
https://docs.toonboom.com/help/storyboard-pro-6/storyboard/drawing/antialias-opengl-line.html
2019-08-17T17:06:35
CC-MAIN-2019-35
1566027313436.2
[array(['../../Resources/Images/HAR/Stage/Drawing/ANI_noantialiasing_001.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/HAR/Stage/Paperless/HAR11_fullscene_antialiasing.png', None], dtype=object) ]
docs.toonboom.com
Animating Layers T-SBFND-009-008
By default, layers in Storyboard Pro are static, but they can be animated. A layer is animated by setting it in different positions, angles or sizes at two different frames in the timeline, then letting Storyboard Pro calculate the position, angle and size of the layer for each frame between those two frames. The frames at the beginning and end of an animation are referred to as keyframes.
You can animate a layer by enabling animation on it, which will create a keyframe for it in its current position, at the current frame. From there, you can just go to another frame and change its position, which will automatically create another keyframe at the current frame. At this point, the layer is already animated from its original position to the position you just moved it to.
- In the Timeline view, select the panel with the layer you want to animate.
- Move the Timeline cursor to the exact time where you want to create the first keyframe for your animation.
- In the Layer panel of the Stage or Camera view or in the Layers view, click on the Animate button of the layer you wish to animate.
The Animate button turns yellow and changes shape. In the Layer Animation track of the Timeline view, a keyframe is created at the current frame. This keyframe stores the position, angle and size of the layer at the current frame.
- In the Tools toolbar, select the Layer Transform tool. In the Stage or Camera view, the controls of the Layer Transform tool appear around the selected drawing layer.
- Using the manipulator box, transform the layer so that it is in the position, angle and size you want it to be at the beginning of the animation:
- To move the layer, either click on its artwork or on the blue point in the centre, then drag it to the desired position.
- If you are having trouble dragging the layer by its artwork, you can also click and drag on the blue point in the centre. However, this point may be obstructed by the layer's pivot point. If that is the case, you can click and drag on the pivot point to move it out of the way, then click and drag on the centre point to move the layer.
- You can also nudge the selection by using the arrow keys on your keyboard.
- You can also enter specific coordinates in the Horizontal Offset and Vertical Offset fields in the Tool Properties view.
- To scale the layer, click on one of the squares at the edges or corners of the manipulator box and drag them to stretch or shrink the drawing.
- You can preserve the horizontal and vertical proportions of the selection by holding the Shift key.
- You can also enter specific scale percentages in the Horizontal Scale and Vertical Scale fields in the Tool Properties view.
- To rotate the layer, move the cursor just outside of one of the corners of the manipulator box until the mouse cursor changes. Then, click and drag in either direction to rotate the layer clockwise or counterclockwise.
- You can rotate the artwork in 15° increments by holding the Shift key.
- You can also enter a specific angle in degrees in the Angle field of the Tool Properties view.
- You can also rotate the clip in 90° increments by clicking on the Rotate 90 CW button in the Tool Properties view to rotate it 90° clockwise, or on the Rotate 90 CCW button to rotate it 90° counterclockwise.
- To flip the layer horizontally, click on the Flip Horizontally button in the Tool Properties view.
- To flip the layer vertically, click on the Flip Vertically button in the Tool Properties view.
- To reset a layer to its original position, scale and angle, do one of the following: - Select Layer > Reset Transform. - Press Ctrl + R (Windows) or ⌘ + R (Mac OS X). - In the Timeline view, move the cursor to the frame where you want to create your second keyframe. - In the Stage or Camera view, use the manipulator box to transform the layer so that it is in the position, angle and size you want it to be at the end of the animation. A keyframe is created at the current frame as soon as you make the first transformation. - Move the timeline cursor back to the beginning of the panel. - In the Playback toolbar, click on the Play button to preview the animation. If you want to start over, you can instantly delete a layer's animation by disabling animation on it. - Select a layer that contains animation. This is indicated by the yellow Animate icon. - Click the Animate icon. The following message appears when turning off the Animate mode. - Click OK. All keyframes are cleared and the layer remains at the position of the current frame. The Animate icon on the layer turns grey to indicate that it does not contain animation.
https://docs.toonboom.com/help/storyboard-pro-6/storyboard/motion/animate-layer.html
2019-08-17T17:24:01
CC-MAIN-2019-35
1566027313436.2
[array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/SBP/Steps/animate-layer-1.png', None], dtype=object) array(['../../Resources/Images/SBP/Steps/animate-layer-2.png', None], dtype=object) array(['../../Resources/Images/SBP/Steps/animate-layer-3.png', None], dtype=object) array(['../../Resources/Images/SBP/Steps/animate-layer-4.png', None], dtype=object) array(['../../Resources/Images/SBP/Steps/animate-layer-5.png', None], dtype=object) array(['../../Resources/Images/SBP/Steps/animate-layer-6.png', None], dtype=object) array(['../../Resources/Images/SBP/Steps/animate-layer-7.png', None], dtype=object) array(['../../Resources/Images/SBP/Steps/animate-layer-8.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/SBP/animate-mode-msg.png', None], dtype=object) ]
docs.toonboom.com
Pricing and Duration Best Practices
Section A. Plan Duration Lengths
Mulberry can currently offer between 1 and 3 different duration lengths on a warrantable product. The available plan duration lengths are 1, 2, 3, 4, 5 and 10 years, and are configurable by product category within the retailer's assortment. For example, Indoor Furniture may offer 2-3-5 year durations, while Mattresses might offer 5-10 year durations.
At time of integration, Mulberry will make the appropriate plan duration length recommendation for each category based on the business information provided in the Mulberry Retailer Startup Questionnaire. Retailers have the option to launch with Mulberry's recommended durations, or nuance them as they see fit. It's important to align duration lengths with the average product selling price and the shopper's expected "use life" of your categories.
Common examples (Category (Avg. Price) and Duration Lengths):
- Furniture (> $750): 2-3-5 Years, or 3-5 Years
- Gadgets (< $500): 1-2-3 Years, or 2-3 Years
- Smart Fitness (> $1,000): 2-3-5 Years, or 3-5 Years
- Home Goods (< $250): 1-2-3 Years, or 1-3 Years
- BBQ Grills (> $500): 2-3-5 Years, or 3-5 Years
- Appliances (< $500): 1-2-3 Years, or 1-3 Years
- Appliances (> $750): 2-3-5 Years, or 3-5 Years
- Mattresses (any price): 10 Years, or 5-10 Years
- Luggage & Bags (> $200): 2-3-5 Years, or 3-5 Years
- Footwear/Apparel (any price): 1-2-3 Years, or 1-3 Years
- Watches/Jewelry (> $500): 2-3-5 Years, or 3-5 Years
Section B. Plan Pricing
Mulberry's finance team will provide you with appropriate customer-facing price recommendations by category and duration, based on your company's attachment, revenue and gross margin goals. Retailers have the option to launch with Mulberry's recommended pricing, or nuance it as they see fit. Merchandised plan prices are recommended in order to incrementally improve revenue & gross margin without impacting attachment.
Common Examples:
- Calculated Price: $22.75 → Merchandised Price: $24.99
- Calculated Price: $51.25 → Merchandised Price: $49.99
- Calculated Price: $166.00 → Merchandised Price: $169.00
Plan price endings are recommended to be consistent, and to match your prevailing price endings already in use. This is in order to minimize unintentional cognitive dissonance between your merchandise prices and your Mulberry plan prices.
Common Examples:
- Merchandise Price: $199.99 → Mulberry Plan Price: $24.99
- Merchandise Price: $149.95 → Mulberry Plan Price: $19.95
- Merchandise Price: $1,450 → Mulberry Plan Price: $149
- Merchandise Price: $1,450.00 → Mulberry Plan Price: $149.00
https://docs.getmulberry.com/docs/pricing-duration-best-practices
2022-06-25T02:11:14
CC-MAIN-2022-27
1656103033925.2
[]
docs.getmulberry.com
Okta Single sign-on with Okta This tutorial will explain configuring Okta for single sign-on to Pritunl. Users will authenticate through Okta when downloading VPN profiles. After a user has downloaded a VPN profile the Pritunl server will use the Okta API to verify that the user still exists and is enabled before each VPN connection. Okta Push Okta Push can be enabled for each VPN connection if it is available. If you are using Okta Push but do not want it used for VPN connections uncheck Enable Okta Push in the settings. This configuration option is stored in the database and will only need to be run on one host. The change will be immediately applied to all hosts and will not require restarting any hosts. Create Pritunl App on Okta In the Applications section of the admin interface click Add Application. Then click Create New App and select SAML 2.0 Next name the app Pritunl and download the Okta Pritunl logo pritunl.com/img/pritunl_okta.png and click Upload Logo then click Next. On the next page enter as the Single sign on URL and pritunl as the Audience URI. Set the Default RelayState to the address your users would use to access the Pritunl server such as. Then add the two attributes username with a value of user.login and user.email. Once done click Next then Finish. Setting User Organization By default all Okta users will be added to the default organization set in the Pritunl settings. Users can be added to a specific organization using the org attribute. This attribute can be mapped to a value such as user.department. The value of the attribute must exactly match the name of an existing organization on the Pritunl server. If a value is given for an organization that does not exist the user will be added to the default organization. Okta provides several mapped values for attributes. Refer to the Okta documentation for setting the value of the org attribute to best match Pritunl organizations. Create API Token Pritunl will require an API token to validate if a user exists and is enabled before allowing a VPN connection. To create a token click Security then API and Create Token. Name the token Pritunl and save the token for later. Add Users to Pritunl App After the Okta app has been created you will need to add users to the Pritunl app before they are able to use it. This can be done in the People tab on the Pritunl app settings on Okta. Okta App ID Next get the Okta app ID from the url in the Okta application settings. The ID is the last component of the URL. For example the ID for this url is 0oarolrfv30ouSTcm2p6. This ID will be needed in the next step. Configure Pritunl Once the Okta app has been configured click on the app then click Sign On and View Setup Instructions. Then open the Pritunl settings and set Single Sign-On to Okta and set the Single Sign-On Organization. This organization will be the default organization Okta users are added to. Set the Okta App ID to the ID from the previous step. Then copy the Identity Provider Single Sign-On URL to SAML Sign-On URL. Then Identity Provider Issuer to SAML Issuer URL. Then X.509 Certificate to SAML Certificate. Use the API token from earlier to fill in Okta API Token. Updated over 4 years ago
https://docs.pritunl.com/docs/okta
2022-06-25T01:14:25
CC-MAIN-2022-27
1656103033925.2
[array(['https://files.readme.io/22512ad-okta0.png', 'okta0.png'], dtype=object) array(['https://files.readme.io/22512ad-okta0.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/17292b6-okta1.png', 'okta1.png'], dtype=object) array(['https://files.readme.io/17292b6-okta1.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/dd5fdf3-okta3.png', 'okta3.png'], dtype=object) array(['https://files.readme.io/dd5fdf3-okta3.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/2289c81-okta4.png', 'okta4.png'], dtype=object) array(['https://files.readme.io/2289c81-okta4.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/36aa4c3-okta5.png', 'okta5.png'], dtype=object) array(['https://files.readme.io/36aa4c3-okta5.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/beeafa8-okta6.png', 'okta6.png'], dtype=object) array(['https://files.readme.io/beeafa8-okta6.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/9ffb6ae-okta7.png', 'okta7.png'], dtype=object) array(['https://files.readme.io/9ffb6ae-okta7.png', 'Click to close...'], dtype=object) ]
docs.pritunl.com
Manage the Windows System PATH
salt.states.win_path.absent(name)
Remove the directory from the SYSTEM path.
Example:

  'C:\sysinternals':
    win_path.absent

salt.states.win_path.exists(name, index=None)
Add the directory to the system PATH at index location.
index
  Position where the directory should be placed in the PATH. This is 0-indexed, so 0 means to prepend at the very start of the PATH.
  Note: If the index is not specified, and the directory needs to be added to the PATH, then the directory will be appended to the PATH, and this state will not enforce its location within the PATH.
Examples:

  'C:\python27':
    win_path.exists

  'C:\sysinternals':
    win_path.exists:
      - index: 0

  'C:\mystuff':
    win_path.exists:
      - index:
https://docs.saltproject.io/en/3001/ref/states/all/salt.states.win_path.html
2022-06-25T01:53:38
CC-MAIN-2022-27
1656103033925.2
[]
docs.saltproject.io
Dweet
Dweet (dweet.io) is an easy-to-use platform that allows user messages to be published and subscribed to; it was designed for machine-to-machine messaging for IoT. To use Dweet you just choose a unique name. This name is called a "Thing"; it is a grouping for all your messages. Specify your "Thing" in the Dweet Notification Target; the Key is an optional configuration value that is only needed if you want to Lock your "Thing" to keep your messages private. Messages will be forwarded in JSON format (See Default Data Fields).
Platform Setup
Just choose a unique my-thing-name for your "Thing" (must not include spaces). You can view posts to your "Thing" at https://dweet.io/follow/my-thing-name.
Notifier Setup
Specify the my-thing-name of your "Thing" in the Notification Target field. Provide the Key to your "Thing" if it has been created as private.
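For a quick manual test outside the notifier, Dweet's public HTTP API can be exercised with curl; the thing name below is just the example name from above:

  # Publish a message (a "dweet") for your Thing
  curl "https://dweet.io/dweet/for/my-thing-name?temperature=21&status=ok"
  # Read the latest message back
  curl "https://dweet.io/get/latest/dweet/for/my-thing-name"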
https://docs.senraco.io/dev/streaming/Dweet/
2022-06-25T01:52:11
CC-MAIN-2022-27
1656103033925.2
[]
docs.senraco.io
Cluster Alerts
How to be notified about cluster issues
The Cluster Alerts module allows users to set up when they are notified for a set of events. Alerts can either be a popup, displaying the alert when the user is logged into Altinity.Cloud, or an email, so they can receive an alert even when they are not logged into Altinity.Cloud.
To set which alerts you receive:
From the Clusters view, select the cluster to configure alerts for.
Select Alerts.
Add the Email address to send alerts to.
Select whether to receive a Popup or Email alert for the following events:
- ClickHouse Version Upgrade: Alert triggered when the version of ClickHouse that is installed in the cluster has a new update.
- Cluster Rescale: Alert triggered when the cluster is rescaled, such as new shards added.
- Cluster Stop: Alert triggered when some event has caused the cluster to stop running.
- Cluster Resume: Alert triggered when a cluster that was stopped has resumed operations.
https://beta.docs.altinity.com/altinitycloud/administratorguide/clusters/clusteralerts/
2022-06-25T01:29:33
CC-MAIN-2022-27
1656103033925.2
[]
beta.docs.altinity.com
What touches and events are we scoring?
Which touches and events are we scoring? If you have any of the following questions, this dashboard is worth a view:
- Are all my events being scored?
- Am I scoring my events accurately?
- Do I need to add weight to some of my higher valued events, like "Request a Demo"?
- What engagements make up the majority of my scores?
You can now easily answer these questions on the report, Which touches and events are we scoring?
Diving into the report, you will also be able to view:
- Quantity of events scored for each type
- List of all events scored
- The source system of the event
Use the tables to search on specific events or view what touches are scored with an event type. Hover over to the right of the table to bring up the hamburger symbol and search function.
Examples:
- Enter "content syndication" to see all touches that we have grouped to this type.
- Enter a Campaign Name to see all touches that were scored in the Account Engagement Screens.
Multipliers
During your setup, we might have chosen certain titles to take the event score and multiply it by X. The default multipliers are CXO x 2, VP x 2, Directors x 1.5, and missing email x .5.
The SQL VIEW
For those advanced SQL users, you can view the code in your list views. This view will give you the exact weighting that was applied to your instance.
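The view shipped with your instance will differ, but as a purely illustrative sketch of how such title-based weighting can be expressed in SQL (table and column names are hypothetical):

  SELECT
    event_id,
    base_score
      * CASE
          WHEN title ILIKE 'c_o' OR title ILIKE '%chief%'               THEN 2.0  -- CXO x 2
          WHEN title ILIKE 'vp%' OR title ILIKE '%vice president%'      THEN 2.0  -- VP x 2
          WHEN title ILIKE '%director%'                                 THEN 1.5  -- Director x 1.5
          ELSE 1.0
        END
      * CASE WHEN email IS NULL OR email = '' THEN 0.5 ELSE 1.0 END               -- missing email x .5
      AS weighted_score
  FROM scored_events;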
https://docs.calibermind.com/article/vuve1op8tt-what-touches-and-events-are-we-scoring
2022-06-25T01:49:15
CC-MAIN-2022-27
1656103033925.2
[array(['https://files.helpdocs.io/h1a0l0dsgy/articles/vuve1op8tt/1552085365479/screen-shot-2019-03-08-at-1-43-53-pm.png', None], dtype=object) array(['https://files.helpdocs.io/h1a0l0dsgy/articles/vuve1op8tt/1552087326179/screen-shot-2019-03-08-at-3-18-46-pm.png', 'CaliberMind Event Scored Sample Screen'], dtype=object) ]
docs.calibermind.com
Configuration
Configuring content elements can be done for the frontend and the backend. The easiest way to change the appearance of content elements in the frontend is by using the Constant Editor. These settings are global, which means they are not configurable in a single content element. Constants are predefined. TYPO3 uses TypoScript as a configuration language, and TypoScript is used for the frontend rendering. By overriding TypoScript you can modify the rendering for most of the frontend. For the backend, fields can be shown or hidden, depending on the fields you are using or the fields an editor is allowed to use. Configuration like this is done using Page TSconfig or User TSconfig.
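As a rough illustration, a constants override and a Page TSconfig snippet might look like the examples below. The extension key my_sitepackage and the specific field are placeholders; check the fluid_styled_content reference for the exact constants available in your version.

```typoscript
# Constants: point fluid_styled_content at your own template overrides
styles.templates.templateRootPath = EXT:my_sitepackage/Resources/Private/Templates/ContentElements/
styles.templates.partialRootPath  = EXT:my_sitepackage/Resources/Private/Partials/ContentElements/
```

```typoscript
# Page TSconfig: hide a backend field for editors in this page tree
TCEFORM.tt_content.layout.disabled = 1
```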
https://docs.typo3.org/c/typo3/cms-fluid-styled-content/11.5/en-us/Configuration/Index.html
2022-06-25T01:37:08
CC-MAIN-2022-27
1656103033925.2
[]
docs.typo3.org
(PHP 4 >= 4.3.3)
SWFMovie::setbackground — Sets the background color
void SWFMovie::setbackground ( int $red , int $green , int $blue )
This function is EXPERIMENTAL. The behaviour of this function, its name, and surrounding documentation may change without notice in a future release of PHP. This function should be used at your own risk.
Parameters
red
Value of the red component
green
Value of the green component
blue
Value of the blue component
Return Values
No value is returned.
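A minimal usage sketch, assuming the experimental Ming extension is installed and the generated SWF is streamed straight to the browser:

```php
<?php
// Requires the Ming (SWF) extension
$movie = new SWFMovie();
$movie->setDimension(320, 240);          // stage size in pixels
$movie->setbackground(0xcc, 0x99, 0x00); // red, green, blue components

header('Content-type: application/x-shockwave-flash');
$movie->output();                        // stream the SWF to the client
?>
```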
https://php-legacy-docs.zend.com/manual/php4/en/swfmovie.setbackground
2022-06-25T01:08:33
CC-MAIN-2022-27
1656103033925.2
[]
php-legacy-docs.zend.com
Introduction
Overview
Writing security-conscious IAM Policies by hand can be very tedious and inefficient:
- You don't have an embedded security person on your team who can write those IAM policies for you, and there's no automated tool that will automagically sense the AWS API calls that you perform and then write them for you in a least-privilege manner.
- After fantasizing about that level of automation, you realize that writing least-privilege IAM Policies, seemingly out of charity, will jeopardize your ability to finish your code in time to meet project deadlines.
- You use Managed Policies (because hey, why not) or you eyeball the names of the API calls and use wildcards instead so you can move on with your life.
Such a process is not ideal for security or for Infrastructure as Code developers. We need to make it easier to write IAM Policies securely and abstract the complexity of writing least-privilege IAM policies. That's why I made this tool. Before this tool, it could take hours to craft an IAM Policy with resource ARN constraints — but now it can take a matter of seconds. This way, developers only have to determine the resources that they need to access, and Policy Sentry abstracts the complexity of IAM policies away from their development processes.
Writing Secure Policies based on Resource Constraints and Access Levels
Policy Sentry's flagship feature is that it can create IAM policies based on resource ARNs and access levels. Our CRUD functionality takes the opinionated approach that IAC developers shouldn't have to understand the complexities of AWS IAM - we should abstract the complexity for them. In fact, developers should just be able to say...
- "I need Read/Write/List access to arn:aws:s3:::example-org-sbx-vmimport"
- "I need Permissions Management access to arn:aws:secretsmanager:us-east-1:123456789012:secret:mysecret"
- "I need Tagging access to arn:aws:ssm:us-east-1:123456789012:parameter/test"
...and our automation should create policies that correspond to those access levels.
How do we accomplish this? Well, Policy Sentry leverages the AWS documentation on Actions, Resources, and Condition Keys for AWS Services to look up the actions, access levels, and resource types, and generates policies according to the ARNs and access levels. Each service page in that documentation includes a table that lists every action alongside its access level and the resource types it supports. Policy Sentry aggregates all of that documentation into a single database and uses that database to generate policies according to actions, resources, and access levels.
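To make that concrete, a CRUD-mode input file along the lines of the sketch below captures those three statements; the exact key names may differ slightly between versions, so treat this as an illustration and check the project's template documentation. You would then feed it to a command such as `policy_sentry write-policy --input-file crud.yml` to generate the policy.

```yaml
# Illustrative CRUD-mode template for Policy Sentry
mode: crud
read:
  - arn:aws:s3:::example-org-sbx-vmimport
write:
  - arn:aws:s3:::example-org-sbx-vmimport
list:
  - arn:aws:s3:::example-org-sbx-vmimport
permissions-management:
  - arn:aws:secretsmanager:us-east-1:123456789012:secret:mysecret
tagging:
  - arn:aws:ssm:us-east-1:123456789012:parameter/test
```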
https://policy-sentry.readthedocs.io/en/0.11.19/introduction/
2022-06-25T00:47:16
CC-MAIN-2022-27
1656103033925.2
[]
policy-sentry.readthedocs.io