Key concepts used:
- Stories are used to compose newsletters.
- Newsletters are comprised of stories.
- Campaigns send out newsletters.
Unlike your average newsletter system, Groups Newsletters combines multiple stories in a newsletter and uses campaigns to send out one or more newsletters to their recipients. The recipients for each newsletter can be different user groups (when Groups is also used).
Let’s publish!
To publish a simple one page newsletter to be sent to all subscribers, you would create a new story by going to Newsletters > New Story, give the story a title and write its content.
Then input the title of the newsletter in the Newsletters box, click Add and then click Publish to publish the story.
Go to Newsletters > Newsletters and make sure that the correct recipient groups are chosen for the newsletter.
Go to Newsletters > Campaigns, click New Campaign, give your campaign a title and under Newsletters start typing in the title of the newsletter, select it and click Add.
Click Publish and then under Status click Run to start the campaign. Your campaign will start sending out emails according to the batch sending settings.
As you get comfortable working with the system, you will realize that it allows you to be a lot more flexible than just following these steps.
About Drawing Optimization
As you work, your drawing can get complex and contain multiple strokes. You may want to optimize those drawings to reduce the number of brush strokes, pencil lines, and invisible strokes. You may also want to flatten your artwork or optimize your brush stroke textures.
The Optimize command reduces the number of layers, such as overlapping brush strokes, in the selected drawing objects. Drawing objects will only be flattened and optimized if the selected objects do not change the appearance of the final image when they are merged.
For example, if you have selected a number of partially transparent objects, which you layered to create an additive colour effect, the selected transparent drawing objects will not be merged. This is because merging the transparent drawing objects will cause them to lose the effect of the layered transparent colours.
You may also want to add invisible contour strokes so that, if you unpaint lines, the vector container remains and can be repainted later.
STORE CREDIT - USER GUIDE FOR MAGENTO 2
(Version 2.1.0)
INTRODUCTION
Keep the Customers around, that’s what store owners care about! Magento Store Credit module for Magento 2 allows you to enhance the interaction with your Customers by many activities such as adding credit or refunding Customers by credit. Customers can use the credit to make purchases on your store or even share with their friends. With Magento Store Credit module for Magento 2, credit can be used as a convenient and time-saving payment method. Customers just need to recharge their credit accounts one time and then use for many future purchases.
Magento Store Credit is one module in our Omnichannel solution for Magento retailers.
HOW TO CONFIGURE
Path: Store Credit > Settings > Magestore Extension tab > Store Credit
a. Step 1: Configure the following sections as described below:
- General Configuration Tab:
Path: Magento Extension > Store Credit
- Enable Store Credit: activate Store Credit on your site
- Allow sending Credit: allow customers to send credit to their friends
- Groups can use credit: allow only General/Wholesaler/Retailer customer groups, or all customers, to use credits
- Tab.
(1) Select start time for current year: choose Month, then Date
(2) Select date for current month: choose Date
- Style Configuration tab:
(1) Background of Title: enter Hexadecimal code
(2) Color of Title: enter a Hexadecimal code or choose a color as above.
b. Step 2: Remember to click on Save Config button to complete your configuration process.
HOW TO USE
How Admin manages Store Credit
Manage Customers Using Credit
- Path: Store Credit > Manage Customers Using Credit
The Customers Using Credit Manager page will be displayed as below.
This page shows a list of all customers using credit and their information such as name, email, credit balance, telephone, etc.
To view more details about a customer, click on the Edit link in the Action column.
Then, you will be navigated to the Customer Information page. By selecting Store Credit tab, you can view all customer’s transaction history and credit balance as above.
To edit customers’ credit information:
(1) Enter an integer (a positive or negative number)
(2) Add a comment such as why you add credit to customers.
The module will automatically send email to the customer to announce this transaction if you tick on Send email to customer checkbox. The email will be sent to the customer as below.
After you save, our module will auto update the customer’s credit balance, send an email to that customer and create a transaction as above.
Manage Credit Products
- Path: Store Credit > Manage Credit Products
The Credit Product Manager page will be shown as below.
This page shows you all credit products with a lot of information such as product ID, name, SKU, quantity, status, etc.
To add a new credit product:
a. Step 1: Click on the Add Credit Product button on the right top of the page.
b. Step 2: You can add a credit product just in a similar way to adding a normal product.
- Product Details:
(1) Enable Product: Activate Store Credit by select Yes
(2) Attribute Set: Select a set for the new credit product
(3) Product Name: Enter a name for this product
(4) SKU: Enter an SKU name
(5) Quantity: Enter the number of products
(6) Stock Status: Select the current status of product as In Stock / Out of Stock
(7) Categories: Select the categories of the Store Credit. In case there’s no fitted category, click on “New Category”
(8) Visibility: Choose where it will be visible to customers
(9) Set Product as New From: Set a period of times when the product is displayed as New product
(10) Visible on Webpos: Enable Yes to allow the product display on Web POS.
- Credit Prices Settings Tab:
Type of Store Credit value: To configure the value of credit product, choose Fixed Value/ Range of values/ Dropdown values
- Advanced Inventory Path: New Product > Quantity > Advanced Inventory
leaving the box Use Config Settings unchecked.
(5) Maximum Qty. Allowed in Shopping Cart: as mentioned in No.4
Then, click Save to continue
Scroll down, complete the information, and fill in the following sections as needed:
- Content tab:
These fields are not required but needed to be filled in. Describe your product clearly to help your customers understand your store credit rules.
- Attributes tab
Enter an alternative name for product
- Images and Video tab:
Scroll down to Images and Videos, then click on Browse to find an image, or drag an image there to upload it.
Click on Add Video to add new video
Fill in the box and click on Choose File to upload new video
- Search Engine Optimization tab
Fill in the required field: URL Key, Meta Title, Meta keywords, Meta Description to improve your SEO work.
- Related Products, Up-Sells, and Cross-Sells tab
(1) Click on respectively
- - Add Related Products
- - Add Up-sell Products
- - Add Cross-sell Products
(2) Mark the checkbox to select products
(3) Click on Add Selected Product
(4) Click on Save to finish
- Customizable Options tab
- Products in Websites tab
Check the box to set credit products on main websites
- Design tab
(1) Layout: Select a suitable layout to display your credit product
(2) Display Product Options in: Select the place of product options: In Block after Info Column or In Product Info Column
(3) Layout Update XML
- Schedule Design Update tab
(1) Schedule Update From: Set the schedule to update your design of products
(2) New Theme: Choose a theme for product pages
(3) New Layout: Choose No layout updates to keep the existing layout or a new layout to display product different from previous design.
- Gift Option tab
Set the allow gift message to Yes or check the Use Config Settings box to allow no Gift message.
- Barcode tab
- Stock Movement tab
Any movement of products will be recorded in this tab.
- Supplier tab
Store credit products are set up based on your stores, so it is not necessary to fill in this tab.
Besides the Credit Product Manager page, you can also create a new credit product by following this path: Products > Inventory Section > Catalog
Manage Credit Transactions and Report Charts
a. Credit Transactions
- Path: Store Credit > Credit Transactions Section > Manage Credit Transactions
The Credit Transactions page will be shown as below.
This page shows all credit-related transactions with a lot of information such as type, detail, customer name/email, added/deducted credit, credit balance after the transaction.
You can search any transaction by using filter boxes in each column.
If you click on a customer’s email, you will be navigated to the Customer Information page.
b. Credit Report Charts
- Path: Store Credit > Credit Transactions Section > Customer Credit Report
Then the Report Charts page will be shown below.
This page can be divided into two main sections including Life-time Reports and Period-of-time Report Charts. - Life-time Reports: There are 2 types of reports. - - Customer Credit Statistics with the total credit, the total spent credits and the number of Customers with credit in your system. - - Top 5 Customers with The Greatest Credit Balances with their names and current balances in your system. - Period-of-time Report Charts: This chart shows you the total spent credits and received credits of all Customers per day in your chosen time range such as last 24 hours, last 7 days, current month, etc.
Use Credit when Creating Orders in Backend
- Path: Sales > Operations section > Orders
On the Create Order page on the backend, our module allows you to use credit when creating orders for customers.
Step 1: Do the steps of creating a new order normally, from creating or selecting customers to selecting products.
Step 2: Enter a credit amount in Customer Credit box and click on the Gray Arrow button
Step 3: Select a shipping method and then look at the Order Totals.
Our module will auto-update and calculate the grand total of the order.
After submitting the order, the customer's credit balance will also be automatically updated, and you can check the transaction on the Credit Transactions page.
Refund Orders into Credit Balance
- Path: Sales > Operations section > Orders
When customers want to refund an order, our module allows you to transfer the order value to his credit balance. In that way, customers can use the credit for future purchases and you do not have to lose money for the refund at the same time.
Step 1: Click on View to see the details of an order
Step 2:
On the top bar, click on Credit Memo label to create a refund order
After that, select a warehouse to return stocks and adjust the number of products customers want to return.
Step 3: To adjust refund totals:
.
How Customer uses Store Credit
How Customers buy Credit Product
After customers log in to your website, they can access the Store Credit Products page in two ways:
- Option 1:
On the top navigation bar, click to Buy Store Credit
After that, the Store Credit page will be shown as above.
As you can see, this page lists all Credit Products of your website. There are three types of credit products for customers to choose from: fixed value, range of values, and drop-down values.
(1) Fixed Value: These credit products have a fixed value.
(2) Drop-down Values: With this type, customers can select a specific value in the drop-down list.
(3) The range of Values: With this type, customers can choose a desired credit amount within the range configured by admin in the backend.
After selecting credit products they like, customers can add them to cart and checkout normally.
When the order is complete, our module will auto-add that credit amount to the customer’s credit balance.
Customers can also send Credits to their friends by doing the following steps:
(1) Tick Send credit to friend checkbox
(2) Enter the name of the recipient
(3) Enter the email address of the recipient. The system will send an email to this address
(4) Enter the message that recipient will receive.
- Option 2:
Path: My Dashboard page > My Credit tab
To buy credit product, click on the My Credit on the left navigation of the Account Dashboard page.
In this second way, customers will be navigated to the My Credit page on which they just need to click on the Buy store credit button. Then, the Credit Products page will be displayed and customers can continue buying credit as mentioned steps in option 1 above.
When the order is completed, there will be two cases happening based on signup status of the recipient email address.
Case 1: if the recipient does not have an account in the system, an email as above will be sent.
Case 2: if the recipient has already had an account in the system, the system will automatically add that credit amount to the Recipient’s credit balance.
In both cases, the sender always gets email notifications as above.
How to manage Credit on My Credit page
- Path: My Dashboard page > My Credit tab
b. Send Credit to Friends
- Path: My Dashboard page > My Credit tab > Send Credit tab
First, customers should click on the Send Credit tab on the left navigation to go to the Send Credit to Friends page as above.
This page has 2 parts including Send Credit to Friends and Credit Code List.
Send Credit to Friends: allows customers to send credit to their friends by filling in all required information
Credit Code List: shows all information about the credit codes that customers sent to their friends including code, recipient’s email, amount, sent date and status of code. Credit codes are not displayed fully.
(1) Enter recipient’s email
(2) Add an amount that customers want to send to their friends.
(3) Write a message to the recipient.
(4) Click on Send button
Notice that after entering recipient’s email, our module will check that email address and show a notification to customers.
In this section, customers can follow the status of the credit codes they sent. While the recipient has not redeemed a credit code, customers are allowed to cancel it by clicking on the Cancel link in the Action column. After the cancellation, the recipient cannot redeem that credit code anymore.
Otherwise, once the credit code has been redeemed, the status will be updated, and the Cancel link will be disabled. Please refer to the section Redeem Credit for more information.
Customers can also above.
At the same time, they will be navigated to the Verify page..
c. Redeem Credit
- Path: My Dashboard page > My Credit tab > Redeem Credit tab to check out by Credit
Customers can use credit to check out on both Shopping Cart and Checkout page.
On the Shopping Cart page, our module will add Apply Credit Discount block for customers to use their credit balances at checkout.
To use a credit amount, customers can:
(1) Enter that number in the field
(2) Click on the Apply button and then our module will auto-update and calculate the grand total of the order.
Please note that customers cannot use credit to buy credit products. If their carts have one or more credit products, our module will show a notification in the Customer Credit block as below.
On the Checkout page, in the Payment Information tab, apply credit discount the same as in the Shopping Cart page.
(1) Enter an amount of credit
(2) Click on the Apply button, and then our module will auto-update the order’s Grand Total.
After the order has been placed, customers’ credit balances will be updated immediately. They can check the current balances and transactions in the Transaction History section.
Release Note
Version 2.1.0 for Magento 2 (released on Oct 19th, 2017)
Compatible with Magento 2.2
Improve integration with WebPOS
Version 2.0.0 for Magento 2 (released on Jun 06, 2017)
Improve payment: checkout via Paypal using store credit & multiple currencies
Update display of customer credit in shopping cart
Improve credit sending email
Re-structure coding
Version 1.0.0 for Magento 2 (released on May 4th, 2016)
Release the stable version for Magento 2.0.
Bash in Cloud Shell and PowerShell in Cloud Shell (Preview) are subject to the information below.
Compute Cost
Azure Cloud Shell runs on a machine provided for free by Azure, but requires an Azure file share to use.
Storage Cost
Cloud Shell requires a new or existing Azure Files share to be mounted to persist files across sessions. Storage incurs regular costs.
Check here for details on Azure Files costs.
Install on Windows
Supported versions: 7, 10
System setup
Install the USB Camera drivers
Install Horus
Execute the installer and follow the wizard. This package contains all dependencies and also Arduino and FTDI drivers.
Reboot the computer to apply the changes.
Note
In Windows 10, if the application is blurred, follow these steps:
- Right click on the application and select Properties
- Go to Compatibility tab.
- Under Settings section, check Disable display scaling on high DPI settings
- Apply and close the window.
In a previous section (Loading
Sprites) we saw how to add a sprite resource into our game
through loading it as a pre-made graphic. However, GameMaker:
Studio can do much more than that and so this section will take
you through the more advanced options available to you from the
sprite resource window.
At the bottom-left, you can indicate the origin of the sprite,
which is the point in the sprite that corresponds to its position
within the room, ie: when you create an instance at a particular
x/y position, the origin of the sprite is placed there. By default it
is the top left corner of the sprite but it is often more
convenient to use the center, which can be found easily by clicking
the Center button, or some other point on the sprite. You
set the origin manually by clicking in the sprite image which will
move the cross to the point you clicked, or by inputting different
values for x and y in the corresponding boxes. Note that you can
even set an origin outside the area of sprite by using
negative numbers (for left and up) or positive numbers larger than
the sprite width and height (for right and down), which can be very
useful when dealing with objects that need to draw composite
sprites.
The collision checking options are very important ones for your
game, as they will directly influence how your objects interact and
how your game runs, with the wrong settings even having a negative
impact on the overall performance. Why is that? Well, whenever two
instances meet, and both instances have a valid mask, a collision
event is generated by checking the overlap of the mask, which can
either be precise or not, and adapted to the image index or not.
Below is an image to illustrate this:
As you can see, when precise
collisions are involved then they are resolved on a per
pixel basis, and this can be very slow, so if you don't
need them, always have precise collisions turned off! The same rule
of thumb goes for the separate masks option (as long as "precise"
is checked), as this generates a new collision mask for every
single frame of an animated sprite (rather than just apply one
"best fit" average mask for all sub-images). See below for more
information on collision masks.
This section of the sprite properties window deals with how
GameMaker: Studio stores the images that make up your sprite
on texture pages for use with devices and browsers. Now, for
Mac and Windows platforms this is not normally too important, but
when you start to develop for iOS, Android or HTML5 the proper
management of your image assets (textures) becomes very
important as poorly managed textures can have detrimental effect on
your game, causing performance issues.
The Tile:Horizontal and Tile:Vertical check boxes are not checked by default, as most times you do not want to tile sprites. However, in certain circumstances you may wish them to tile, meaning that you should check these options, especially if you are going to be scaling the view or room, as scaling can introduce artefacts into the graphics of a game if the texture page is not generated properly.
If your sprite is going to be used as a texture map for a 3D game, then you should check the Used for 3D box and the sprite will be given a texture page all of its own. Note: This will increase the texture memory needs of your game tremendously and so great care must be taken when using this option. Also note that all 3D textures must be a power of 2 (i.e. 128x128, 256x256, 512x512, etc.).
Finally, you can choose the texture group that you wish the sprite resource to belong to. Basically, a texture group (previously defined in Global Game Settings: Texture Groups tab) is something that you can set up so that all the image resources that you need for specific rooms or levels of your game can be stored together. So, for example, you can have all your level 1 images in one texture group, all your level 2 images in another etc... and GameMaker: Studio will try to place all those grouped resources on the same texture page to reduce texture page swapping while your game is running on the chosen target platform. Note: This may not always be necessary and performance increase from this method will depend on whether the target device is CPU bound or GPU bound (see Advanced Use: Debugging).
For more detailed information on texture pages and how they are generated, please see More About Backgrounds: Texture Pages.
GameMaker: Studio has a complete suite of tools for editing your sprites, and that includes the order in which their sub-images are displayed, what mask properties they should have and even a graphic editor so you can create your own sprites from scratch! These things are all covered in the sections below:
- Editing Sprites
- Editing Subimages
- Editing Collision Masks
GameMaker: Studio permits you to import graphic assets that have been made in the SWF format. You can find out further details of this feature on the following page:
- Importing Vector Images
Skeletal animations are sprites that have been made with a specialist program (like Spine) which permits bone animation and skinning. Gamemaker: Studio permits you to import this type of sprite and you can find all the details in the following section:
- Importing Skeletal Animations
The Python Tutorial
- 9.7. Odds and Ends
- 9.8. Iterators
- 9.9. Generators
- 9.10. Generator Expressions
- 12. Virtual Environments and Packages
- 13. What Now?
- 14. Interactive Input Editing and History Substitution
- 15. Floating Point Arithmetic: Issues and Limitations
- 16. Appendix
Adds a comment to the specified alert.
Note: If you are using markdown, and will be incorporating one or more artifacts in your comment, you must upload the artifacts first. Run the POST /api/boards/items/comments/artifacts operation to upload the first artifact and the POST /api/boards/items/comments/{CommentID}/artifacts operation to upload any subsequent artifacts. Use the values from the responses, such as the BoardItemID and the image paths, to construct the payload for adding the comment.
Authorization Roles/Permissions: Must be an authorized user of the API that the alert relates to.
This topic includes the following sections:
HTTP Method
URL
https://{hostname}/api/alerts/{AlertID}/comments
Sample Request
The example below shows a request to add a comment to the specified alert.
Request URL
https://{hostname}/api/alerts/alert11639.acmepaymentscorp/comments
Sample request headers
Content-Type: application/json
Accept: text/plain
X-Csrf-Token_{tenant}: {TokenID}
Sample request body
{ "Content":"Thanks for letting us know!!", "UserID":"f778a368-f060-43af-b75f-99d4fb3905ac.acmepaymentscorp" }
Request Headers
For general information on request header values, refer to HTTP Request Headers.
Request Parameters
Response
If successful, this operation returns HTTP status code 200, with the CommentID of the new comment.
Sample Response
The sample response below shows successful completion of this operation.
Sample response headers: application/json
Status Code: 200 OK
Content-Type: text/plain
Expires: Tue, 12 May 2015 20:05:14 GMT
Sample response body: application/json
4513c498-5055-411d-aa92-79c0f416047.
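For reference, here is a minimal sketch of this call using Python's requests library. It assumes the operation is invoked with POST (implied by the request body shown above); the hostname, tenant, token, and IDs are the placeholder values from the samples and must be replaced with real ones.

import requests

hostname = "{hostname}"                       # your API platform hostname
alert_id = "alert11639.acmepaymentscorp"      # AlertID from the sample request
url = "https://" + hostname + "/api/alerts/" + alert_id + "/comments"

headers = {
    "Content-Type": "application/json",
    "Accept": "text/plain",
    "X-Csrf-Token_{tenant}": "{TokenID}",     # replace {tenant} and {TokenID} with real values
}

payload = {
    "Content": "Thanks for letting us know!!",
    "UserID": "f778a368-f060-43af-b75f-99d4fb3905ac.acmepaymentscorp",
}

# On success the operation returns HTTP 200 with the CommentID of the new comment as the body
response = requests.post(url, json=payload, headers=headers)
print(response.status_code, response.text)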
Elasticsearch (Search Service)
Elasticsearch is a distributed RESTful search engine built for the cloud.
See the Elasticsearch documentation for more information.
Supported versions
- 0.90
- 1.4
- 1.7
- 2.4
- 5.2
Relationship
The format exposed in the $PLATFORM_RELATIONSHIPS environment variable:

{
    "elasticsearch": [
        {
            "host": "248.0.65.198",
            "scheme": "http",
            "port": "9200"
        }
    ]
}
Usage example
In your .platform/services.yaml:

mysearch:
    type: elasticsearch:5.2
    disk: 1024
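For illustration, here is a minimal sketch of reading this relationship at runtime in Python (the same logic applies in any language). It assumes the relationship is exposed to your application under the key elasticsearch, as in the JSON above, and that the elasticsearch Python client is installed; the PLATFORM_RELATIONSHIPS variable itself is delivered as base64-encoded JSON.

import base64
import json
import os

from elasticsearch import Elasticsearch

# Decode the relationships variable and pick the first Elasticsearch endpoint
relationships = json.loads(base64.b64decode(os.environ["PLATFORM_RELATIONSHIPS"]))
instance = relationships["elasticsearch"][0]

# Connect using the host and port provided by the relationship
client = Elasticsearch([{"host": instance["host"], "port": int(instance["port"])}])
print(client.info())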
Note: When you create an index on Elasticsearch, you should not specify number_of_shards and number_of_replicas settings in your Elasticsearch API call. These values will be set automatically based on available resources.
Plugins
The Elasticsearch 2.4 and later services offer a number of plugins. To enable them, list them under the configuration.plugins key in your services.yaml file, like so:

mysearch:
    type: "elasticsearch:5.2"
    disk: 1024
    configuration:
        plugins:
            - analysis-icu
            - lang-python
In this example you'd have the ICU analysis plugin and Python script support plugin.
If there is a publicly available plugin you need that is not listed here, please contact our support team.
Available plugins
This is the complete list of official Elasticsearch plugins that can be enabled:
Table of Contents
You can add a link to local documentation that can help staff create a report template. To add documentation to a report template, click Admin → Local Administration → Reports, and create a new report template. A new field, Documentation URL, appears in the Template Configuration panel. Enter a URL that points to relevant documentation.
The link to this documentation will also appear in your list of report templates.
Magento Open Source, 1.9.x
Shopping Cart
The shopping cart is positioned at the end of the path to purchase, at the intersection of “Buy” and “Abandon”.
Homepage layout options
When it comes to your Docs site, there are two different options for setting up the layout of your home page: Most Popular Articles or Categories. Each has its own unique look and advantages, and can be easily changed under your site settings. This article is all about the home page layout options we offer.
In this article
Home page setting
You can change the setting for the home page by heading to Manage → Docs → Site Settings, and scrolling down to the Site Information section. Under the Show on Home Page drop-down menu, you can select one of the options: Most Popular Articles or Categories.
Most Popular Articles
The Most Popular Articles option gives you the opportunity to show your customers exactly that: your most popular articles. It comes in handy when your customers aren't sure where to start looking for help, or what to search for. Here are some great examples of what that looks like:
If you have one collection and choose the Most Popular Articles option, we will show both categories and your most popular articles. If you have multiple collections, only the article links will be shown.
Categories
The Categories option lets you display all of your categories in blocked off segments to quickly give your customers an idea of exactly which topics they may be after, and have them neatly grouped together. We have some great examples of what that looks like:
About the Splunk App for Unix and Linux
The Splunk App for Unix and Linux provides data inputs, searches, reports, alerts, and dashboards for Linux and Unix management. You can monitor and troubleshoot *nix operating systems on potentially large numbers of systems from one place. Included are a set of scripted inputs for collecting CPU, disk, I/O, memory, log, configuration, and user data.
Use the Splunk App for Unix and Linux to:
- Get information about who's logged into your system, including last login times and unauthorized login attempts.
- Find out how much network throughput and bandwidth your system is using.
- Determine the status of currently running processes on your system, and who is running them.
- Learn what software is installed on your system.
How does it work?
The Splunk App for Unix and Linux runs on top of a Splunk instance and gathers various system metrics, including CPU, disk, I/O, memory, log, configuration, and user data.
The app presents this data to you with pre-built reports and dashboards to give you full visibility into your system's operation.
App Features
Central Visibility Into Operational Health
Get instant visibility into the operational health of Unix and Linux environments. Organize your hosts by groups of services specific to your environment. Use NOC-like dashboards for central insight into problems and visualize resource consumption of selected systems for easy detection of outliers and anomalies.
Performance and Resource Utilization Analytics
Set multiple customizable thresholds for your CPU and memory utilization across your groups of hosts to easily spot trends and spikes in resource utilization in your infrastructure. Isolate problems with configurable statistical comparisons, using 42 important host and OS metrics. Visualize trends and display side-by-side performance comparisons of the several hosts of interest to understand trends, establish baselines and optimize resource allocations. Quickly cross-compare CPU, RAM and disk historical capacity utilization across many different hosts to identify increased resource consumption.
Threshold-Based Alerts
Get real-time notifications of important events from your Unix and Linux environment using pre-packaged threshold-based alerts. Quickly assess the business impact of events and conduct remediation actions through insight into snapshots of various OS metrics around the time-specific alert fired. Compare the behavior of hosts in your systems and create long-term trends based on the alerts activity in your environment.
Correlation Across Technologies
Combine your OS data with data from all other technology tiers, such as applications, virtual, storage, networks and servers to gain a complete, centralized view of KPIs across your enterprise. Use Splunk search language, visualizations and correlations to find causal links across technologies. Get an accurate picture of resource usage and performance across multiple tiers of your IT stack.
Common Information Model Compatibility
Accelerate your deployment of new apps, users, data sources and features by utilizing this app’s compatibility with the Splunk Common Information Models (CIM). CIM compatibility enables quick time-to-value, as it allows for fast correlation of events from disparate technologies by Splunk apps such as Splunk Enterprise Security and the Splunk App for PCI Compliance.
How do I get it?
Download the Splunk App for Unix and Linux from Splunkbase.
How do I upgrade from a previous version?
From version 5.0.x
You can upgrade directly from version 5.0 of the Splunk App for Unix and Linux through Splunk's in-app upgrade feature within Splunk Web, or from the command line.
From version 4.6.x and earlier
There is no supported upgrade path from version 4.6 of the Splunk App for Unix and Linux to this version. However, you can run both version 4.6 and this version simultaneously, if you so choose.
The installation package for this version of the app installs into a different directory than version 4.6. Once you have installed this version, you can then configure this version of the app to use the same indexes and source types that the version 4.6 app uses.
For detailed installation instructions, read "Install the Splunk App for Unix and Linux" in this manual.
Caution: Do not attempt to install this version of the app into the same directory of a version before 5.0. That is not supported and can render both versions of the app unusable.
Once you have configured and evaluated this version of the app, you can then remove the 4.6 version at a later date. No data loss will occur.
For information on any known issues in this version, review the release notes.
This documentation applies to the following versions of Splunk® App for Unix and Linux: 5.2.2
New in version 2.3.

Requirements (on the host that executes the module):
- base64
- hashlib

# Ensure pinky is present
- ipa_user:
    name: pinky
    state: present
    givenname: Pinky
    sn: Acme
    mail:
    - [email protected]
    telephonenumber:
    - '+555123456'
    sshpubkeyfp:
    - ssh-rsa ....
    - ssh-dsa ....
    ipa_host: ipa.example.com
    ipa_user: admin
    ipa_pass: topsecret

# Ensure brain is absent
- ipa_user:
    name: brain
    state: absent
    ipa_host: ipa.example.com
    ipa_user: admin
    ipa_pass: topsecret
Shade Effector
Use Lights in the composition to shade vertices (not Facets/Triangles) in the Plexus.
Shading Lights name starts with: Only Lights that begin with this name will be considered by the effect. All the other lights in the composition are ignored.
Ambience: The amount of color to be inherited by all the points, irrespective of their location and proximity from a point light. For example if the value is 10%, all the points receive 10% of the color from every shading light in the composition irrespective of their distance from the light.
Effect Only Group: Only vertices in this group will be affected by this effector. If “All Groups” is selected, all points in the Plexus are affected.
Deploy MinIO on Docker Compose
Docker Compose allows defining and running single host, multi-container Docker applications.
With Compose, you use a Compose file to configure MinIO services. Then, using a single command, you can create and launch all the Distributed MinIO instances from your configuration. Distributed MinIO instances will be deployed in multiple containers on the same host. This is a great way to set up development, testing, and staging environments, based on Distributed MinIO.
1. Prerequisites
- Familiarity with Docker Compose.
- Docker installed on your machine. Download the relevant installer from here.
2. Run Distributed MinIO on Docker Compose
To deploy Distributed MinIO on Docker Compose, please download docker-compose.yaml to your current working directory. Note that Docker Compose pulls the MinIO Docker image, so there is no need to explicitly download MinIO binary. Then run one of the below commands
GNU/Linux and macOS
docker-compose pull
docker-compose up
Windows
docker-compose.exe pull
docker-compose.exe up
Each instance is now accessible on the host at ports 9001 through 9004; open a web browser at the corresponding port to access it. To add a service to the Compose deployment:
- Replicate a service definition and change the name of the new service appropriately.
- Update the command section in each service.
- Update the port number exposed for the new service. Also, make sure the port assigned for the new service is not already being used on the host.
Read more about distributed MinIO here.
MinIO services in the Docker compose file expose ports 9001 to 9004. This allows multiple services to run on a host.
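As a quick connectivity check once the containers are up, here is a minimal sketch using the MinIO Python SDK (pip install minio). The access and secret keys below are placeholders — use the credentials defined in your docker-compose.yaml — and the endpoint targets the first instance on port 9001.

from minio import Minio

# Connect to one of the instances started by Docker Compose (ports 9001-9004)
client = Minio(
    "localhost:9001",
    access_key="YOUR_ACCESS_KEY",   # placeholder: use the key from docker-compose.yaml
    secret_key="YOUR_SECRET_KEY",   # placeholder: use the secret from docker-compose.yaml
    secure=False,                   # this compose setup serves plain HTTP
)

# Create a bucket if needed and list buckets to confirm the deployment is reachable
if not client.bucket_exists("test-bucket"):
    client.make_bucket("test-bucket")
print([bucket.name for bucket in client.list_buckets()])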
Syncing is a great way to quickly publish content from your portal to your public website or, if you're a district site editor, to other schools' portals.
Sync Overview
School site editors can easily sync content to your website. District site editors can also sync content to one or more schools in your district.
Only the following content can be sync'd from the home page of your school or district site:
- Events
- Special Announcements
- Announcements
- Documents (Publications)
- Featured Stories
Sync Delay
If you are not signed into your public website it can take a few minutes for sync'd items and updates to appear.
Sync'd items cannot be edited from their destination, only the source item can be changed. This is to avoid multiple authors overwriting changes.
Sync Content from your School Portal
When you add or edit an item which can be sync'd you will see a Display on Web checkbox.
- Check Sync to Web and your content will be sync'd to your public website.
- Save your item and it will appear on the destination sites in a few moments.
Sync Content from your District Portal
When you add or edit an item which can be sync'd you will see a Sync To field and Display on Web checkbox.
- In the Sync To field start typing the name of one or more destination schools or click the tag icon to select schools. Presets tags such as All Elementary Schools and All High Schools are also available.
- Check Sync to Web and your content will also be sync'd to the public websites of the select schools.
- Save your item and it will appear on the destination sites in a few moments.
Sync to Websites Only
If you only wish to sync items to school websites, and not portals, please access the sync features from your district website which allow this.
Sync Announcements
Sync Documents (Publications)
You can sync documents from one site to other sites, by first uploading them and then using Edit Properties from the item menu.
To sync a document to a school site or to the web:
- With your web browser, navigate to the document library that the document is in.
- Select the document you would like to sync to a different site.
- Click the ellipsis beside the document name, and in the pop-up, select the ellipsis again.
- Select Edit Properties from the drop-down menu to access the properties dialog.
- If you would like to make the document publicly available via your website, select the Sync To Web check box. Otherwise, leave it clear.
- To sync the document from a district to a school site, click the label icon to the right of the Sync To field.
- Select the school you would like to sync the document to. You can also choose All Schools, or categories of schools. Double-click on the name to select it, or highlight the name and click the Select >> button.
- Click the OK button.
- In the properties dialog, the school name will be populated into the Sync To field.
- Click the Save button to sync the document and return to the document library.
Example of Building a Complex Pipeline
The code sample metavision_composed_viewer, which can be found in the Core module, will be used here to show how to build a more complex pipeline with non-linear connections (e.g. multiple inputs and multiple outputs). We don't implement any custom stage in this sample: this example includes only stages that already exist in Metavision SDK.
The sample demonstrates how to acquire events data (from a live camera or a RAW file), filter events and show a frame combining unfiltered and filtered events. It also shows how to set a custom consuming callback on a FrameCompositionStage instance, so that it can consume data from multiple stages.
The pipeline can be represented by this graph:
The sample includes only the main function that creates the pipeline and connects the stages:
Instantiating a Pipeline
First, we instantiate Pipeline using the Pipeline::Pipeline(bool auto_detach) constructor:
Metavision::Pipeline p(true);
We pass the auto_detach argument as true to make the pipeline run all stages in their own processing threads.
Adding Stages to the Pipeline
Once the pipeline is instantiated, we add stages to it. As the first stage, we add the CameraStage used to produce CD events from a camera or a RAW file:
auto &cam_stage = p.add_stage(std::make_unique<Metavision::CameraStage>(std::move(cam), event_buffer_duration_ms));
Then, we add the PolarityFilterAlgorithm to filter events and keep only events of the given polarity.
auto &pol_filter_stage = p.add_algorithm_stage(std::make_unique<Metavision::PolarityFilterAlgorithm>(1), cam_stage);
Then, we add two FrameGenerationStage stages to generate frames from the output of the two previous stages. Note that we call the same function twice, with the only difference being the previous stage: cam_stage for the first and pol_filter_stage for the second.
auto &left_frame_stage = p.add_stage(
    std::make_unique<Metavision::FrameGenerationStage>(width, height, display_fps, true, accumulation_time_ms),
    cam_stage);
auto &right_frame_stage = p.add_stage(
    std::make_unique<Metavision::FrameGenerationStage>(width, height, display_fps, true, accumulation_time_ms),
    pol_filter_stage);
Then, we add the FrameCompositionStage. This stage can be used to generate a single frame showing side by side the output of two different producers, in this case left_frame_stage and right_frame_stage. Here, we specify the previous stages with the FrameCompositionStage::add_previous_frame_stage function:
auto &full_frame_stage = p.add_stage(std::make_unique<Metavision::FrameCompositionStage>(display_fps));
full_frame_stage.add_previous_frame_stage(left_frame_stage, 0, 0, width, height);
full_frame_stage.add_previous_frame_stage(right_frame_stage, width + 10, 0, width, height);
Finally, as the last stage, we add the FrameDisplayStage to display the final combined frame on the screen.
auto &disp_stage = p.add_stage(std::make_unique<Metavision::FrameDisplayStage>("CD & noise filtered CD events"), full_frame_stage);
Running the Pipeline
Now, when the pipeline is set up and all stages are added, we run the pipeline by calling the Pipeline::run() function:
p.run();
Note how, compared to the previous example, here we do not need to process user inputs at each step of execution, so we can use run() instead of step().
Comment command
Use this command to insert comments in your automation tasks to provide additional information about the TaskBot / MetaBot Logic.
Overview
The Comment command is useful for annotating Logic steps. Comments are ignored when the Logic runs. Some people use comments to extensively document Logic details, whereas others use just a few comments as reminders.
A comment is displayed in green in the Task Actions List, and is always saved as a single line. Multiple-line comments are displayed as a single line when the comment is saved.
Irker IRC Gateway
GitLab provides a way to push update messages to an Irker server. When configured, pushes to a project trigger the service to send data directly to the Irker server.
See the project homepage for further information.
Needed setup
You first need an Irker daemon. You can download the Irker code from its repository:
git clone
Once you have downloaded the code, you can run the Python script named irkerd. This is the gateway script: it acts both as an IRC client, for sending messages to an IRC server, and as a TCP server, for receiving messages from the GitLab service.
If the Irker server runs on the same machine, you are done. If not, you need to follow the first steps of the next section.
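To test the daemon by hand, you can speak irker's wire protocol directly: it accepts newline-terminated JSON objects with "to" and "privmsg" fields over TCP (port 6659 by default). Below is a minimal sketch in Python; the IRC network and channel are placeholders, not values taken from this page.

import json
import socket

message = {
    "to": "irc://irc.example.net/#testchannel",   # placeholder IRC target
    "privmsg": "Hello from a manual irker test",
}

# irkerd listens on localhost:6659 by default
with socket.create_connection(("localhost", 6659)) as sock:
    sock.sendall((json.dumps(message) + "\n").encode("utf-8"))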
Complete these steps in GitLab
- Navigate to the project you want to configure for notifications.
- Navigate to the Integrations page
- Click “Irker”.
- Ensure that the Active toggle is enabled.
- Optionally specify a default IRC URI; it is prepended to each and every channel provided by the user which is not a full URI.
- Specify the recipients (e.g. #channel1, user1, etc.)
- Save or optionally click “Test Settings”.
Note on Irker recipients
Irker accepts channel names of the form chan and #chan, both for the #chan channel. If you want to send messages in query, add ,isnick after the name so that it is treated as a nick rather than a channel. To join a password-protected channel, append ?key=password to the channel name. When using this feature remember to not put the # sign in front of the channel name; failing to do so results in Irker joining a channel literally named #chan?key=password, henceforth leaking the channel key through the /whois IRC command (depending on IRC server configuration). This is due to a long-standing Irker bug.
Core concepts
Overview
It might be useful to think of a project as a large folder. It contains datasets, alongside transforms – which query those dataset tables – and their output tables, and any notebooks used to analyze your outputs. In a Redivis project, these entities are visually arranged to make it easy to see how your transforms, tables, and notebooks are related to each other, and to make it easier to make changes that affect your whole project.
The left half of the screen is where you'll see all entities that currently exist in your project. By default this is laid out as a tree of connected nodes to better understand connections between tables. You can also switch this view to a list. If you created this project from a dataset you'll see a rectangle with the dataset name next to it to start.
Node types
Each shape, or node, on the project tree represents a different entity in your project.
- Dataset nodes are datasets across Redivis.
- Table nodes are either dataset tables, or the resulting output table of the upstream transform.
- Transform nodes are queries that don't contain any data themselves which are used to shape data by creating new tables.
- Notebook nodes are code blocks (and their outputs) which are used to analyze data.
If you ever get lost, you can use the Search button in the left of the black menu bar and input the name of a node to jump to it.
Datasets
A dataset node is a copy of a dataset in a project.
Dataset nodes display a list of the tables they contain. You can click on any table to view its contents, or click "Query" to build a transform on it.
Samples
Some large datasets have 1% samples, which are useful for quickly testing querying strategies before running transforms against the full dataset.
If a 1% sample is available for a dataset, it will automatically be added to your project by default instead of the full sample. Samples are indicated by the dark circle icon to the top left of a dataset node in the left panel and in the list of the dataset's tables.
All sampled tables in the same dataset will be sampled on the same variable with the same group of values (so joining two tables in the same dataset with 1% samples will still result in a 1% sample).
To switch to the full dataset, click the "Sample" button in the top right of the menu bar when you have a dataset selected.
Your downstream transforms and tables will become stale, since an upstream change has been made. You can run these nodes individually to update their contents, or use the run all functionality by clicking on the project's name in the top menu bar.
Versions
When a new
version
of a dataset is released by an administrator, the corresponding dataset node on your project minimap will become purple. To upgrade the dataset's version, click the "Version" button in the top right of the menu bar when you have a dataset selected.
You can view version diffs and select whichever version you want to use here.
After updating, your downstream transforms and tables will become stale. You can run these nodes individually to update their contents, or use the run all functionality by clicking on the project's name in the top menu bar.
Tables
A dataset table refers to a single
table
(a unique set of rows and colums) that was uploaded to the dataset by the owner. These tables are shown directly underneath the dataset when you create a transform or notebook from that table.
An output table is automatically created when you create a
transform node
. Running the transform generates the data in this output table.
All table nodes have one upstream parent. You can view the table's data and metadata similarly to other
tables
on Redivis. You cannot edit or update the metadata here.
You can create multiple transforms to operate on a single table, which allows you to create various branches within your project. To create a new transform, select the table or dataset and click the small + icon that appears under the bottom right corner of the node.
Sanity check output
After you run a transform, you can investigate the downstream output table to get feedback on the success and validity of your querying operation – both the filtering criteria you've applied and the new features you've created.
Understanding at the content of an output table allows you perform important sanity checks
at each step of your research process, answering questions like:
Did my filtering criteria remove the rows I expected?
Do my new variables contain the information I expect?
Does the distribution of values in a given variable make sense?
Have I dropped unnecessary variables?
To sanity check the contents of a table node, you can inspecting the general
table
characteristics, checking the
summary statistics
of different variables, looking at the table's
cells
, or create a
notebook
for more in-depth analysis.
Transforms
Transforms are at the core of every project, allowing for comprehensive data merges and transformations. Learn more about building transforms in the
Transform documentation
.
Create a new transform by clicking the + button beneath any table. Transforms can only reference tables that are present in this project.
To copy a transform, right click the transform and select
Copy transform
. This will copy the transform, including all parameters specified in the detail view, and allows you to insert it somewhere else in your project tree, to re-use querying logic. Note that tables cannot be copied alone; copying a transform node will copy the transform
and
its downstream table.
You can also insert a transform between two tables by right-clicking on another transform.
Notebooks
Notebook nodes allow you to work with data in a Jupyter notebook interface, taking advantage of the open-source community and scientific computing toolkit available in Python, R, Stata, or SAS. Learn more about using notebooks in the
Notebooks documentation
.
Create a notebook by clicking the + button beneath any table. Notebooks can only reference tables that are present in this project.
To copy a notebook, right click and select
Copy notebook
. You can paste the copied notebook by right-clicking on the background of the project's tree view to the left, and selecting
Paste copied notebook
.
Node layout
The project tree automatically creates a grid layout of all the nodes in your project, helping to keep it organized as your project grows.
Sometimes, you may wish to reorganize certain nodes in your project. To shift a dataset or notebook node, hover and click the arrow to the side of the node.
This will move the node to the right or left, and reorganize your tree according to the new horizontal order of datasets at the top of your project tree (or notebooks at the bottom of your project tree). Note that shifting nodes is purely an organizational tool; it has no effect on the data produced in the project.
Node states
Empty
Display
: White background
A transform node will be white if it has never been run.
A notebook or table node will be white if it contains no data.
Executed
Display
: Grey background
A transform will be grey when it has previously been run and has not since been edited or had anything change upstream.
A notebook or table node will be grey if it contains data, and no upstream transforms have been edited (if there
was
an upstream change, everything downstream would be
stale
)
Invalid
Display
: Black exclamation icon
A transform will be invalid when it is unable to be run. This might be because you haven't finished building the steps, or because something changed upstream which made its current configuration impossible to execute again.
Errored
Display
: Red exclamation icon
A transform will be errored when you run them and the run can't be completed. This might be due to an incorrect input you've set that our validator can't catch. Or something might have gone wrong while executing and you'll just need to rerun it.
Edited
Display
: Yellow background with diagonal hash lines
A transform will be edited when you revisit a successfully run transform and change a parameter. You can either
Run
this transform or
Revert
to its previously run state to resolve it. Editing a transform makes any downstream nodes stale.
Stale
Display
: Yellow background
A transform, table, or notebook will be stale when an upstream change has been made. For tables and notebooks immediately downstream from an edited node, means that the data contents might no longer be the results of the previous transform.
You'll need to re-run any edited upstream transforms to propagate new data into downstream tables and nodes, or revert an upstream edited node to return to the previously
executed
state.
Running and queued
Display
: Double arrows rotating
Transforms have this icon when the node is currently being run (if the icon is spinning) or it is queued to run after upstream nodes have finished running (icon isn't moving).
You can cancel queued and running on each individual transform or by clicking the
Run
menu in the top bar and selecting
Cancel all
. If a node is currently running it might not be able to cancel, depending on what point in the process it's at.
Incomplete access
Display:
All black background, or dashed borders
For all nodes, this means that you don't have full access the node. Click on these nodes and then the
Incomplete access
button to begin applying for access to the relevant datasets.
Sampled
Display:
Black circle with 1% icon
For datasets this means that you are using a 1%
sample
of the data. When a dataset has a sample, it will automatically default to it when added to a project. You can change this to the full sample and back at any time in the
dataset node
.
Outdated version
Display:
Purple background
For datasets this means that you are not using the latest
version
. This means that you have either intentionally switched to using an older version, or that this dataset's administrator has released a new version that you can
upgrade
to.
Working in bulk
At any point you might realize that you need to change a parameter of a query that will affect man downstream tables. This will make these tables
stale
and you'll see their color turn to yellow on the map.
After finishing your updates you can run each transform individually to propagate changes or you can use the Run button in the top menu to run many nodes in sequence. This menu gives you the option to run all stale nodes, or all downstream or upstream nodes (from the node you have selected).
Deleting nodes
To delete a node, right click on a dataset or transform node and select
.
When deleting a transform, the transform
and
output table will be deleted; every transform must have an output table to record results of that transform . If the project tree has additional nodes downstream, the transform and output table will be 'spliced' out, i.e. the upstream node nearest the deleted transform will be connected to the downstream node nearest to the deleted output table. Note that this deletion will cause the next downstream transform to receive new input variables from the node that's directly upstream. (In the above example, deleting the selected transform will result in the 'Optum SES Inpatient Confinement' dataset being connected directly to the remaining transform, which will change the variables available to work with in that transform.)
When deleting a dataset or dataset table, the dataset
and all downstream nodes
will be deleted. If additional branches are joined into the branch downstream of the deleted dataset, those branches will be retained up to but not including the transform located in the deleted branch.
Since you can't undo a deletion, you'll receive a warning message before proceeding.
As you make changes in a project you will change the status of different nodes connected to it. These changes in status are shown in the left panel of the project to help you keep track of any changes.
Reference - Previous
Projects
Transforms
Last modified
1mo ago
Export as PDF
Copy link
Contents
Overview
Node types
Datasets
Tables
Transforms
Notebooks
Node layout
Node states
Empty
Executed
Invalid
Errored
Edited
Stale
Running and queued
Incomplete access
Sampled
Outdated version
Working in bulk
Deleting nodes | https://docs.redivis.com/reference/projects/core-concepts | 2022-01-16T22:30:09 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.redivis.com |
In a right outer join, the rows from the right table that are returned in the result of the inner join are returned in the outer join result and extended with nulls.
Inner/Outer Table Example
The following example uses the explicit table names inner_table and outer_table to indicate how these terms relate to the way a simple right outer join is constructed in the FROM clause of a SELECT statement. See Inner Table and Outer Table.
The example shows the semantics of inner and outer table references for a right outer join.
inner_table RIGHT OUTER JOIN outer_table
Section 1 represents the inner join (intersection) of outer_table and inner_table. Section 3 represents the unmatched rows from the outer table.
The outer join result contains the matching rows from Sections 2 and 3, indicated in the diagram as Section 1, plus the unmatched rows from Section 3, noted in the graphic by the more darkly shaded component of the Venn diagram.
In terms of the algebra of sets, the result is:
(Table_A ∩ Table_B) + (Table_B - Table_A)
where:
Table_A ∩ Table_B is the set of matched rows from the inner join of Table_A and Table_B.
Table_B - Table_A is the set of unmatched rows from Table_B.
Practical Example of a Right Outer Join
When you perform a right outer join on the offerings and enrollment tables, the rows from the right table that are not returned in the result of the inner join are returned in the outer join result and extended with nulls.
This SELECT statement returns the results in the following table:
SELECT offerings.course_no, offerings.location, enrollment.emp_no FROM offerings RIGHT OUTER JOIN enrollment ON offerings.course_no = enrollment.course_no;
BTEQ reports represent nulls with the QUESTION MARK character.
These results show that course C100 has two employees enrolled in it and that employee 236 has not enrolled in another class. But in this case the nulls returned by the right outer join of the offerings and enrollment tables are deceptive, because we know by inspection of the enrollment table that employee 236 has enrolled for course C300. We also know by inspection of the offerings table that course C300 is not currently being offered.
For more informative results, use the following right outer join:
SELECT enrollment.course_no,offerings.location,enrollment.emp_no FROM offerings RIGHT OUTER JOIN enrollment ON offerings.course_no = enrollment.course_no;
This query returns the row (C300, Null, 236), not (Null, Null, 236). | https://docs.teradata.com/r/FaWs8mY5hzBqFVoCapztZg/ygPeLxVgyxcbModWBx1B2w | 2022-01-16T22:02:12 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.teradata.com |
Introduction
Like Codox for LFE. Check out the self-generated documentation.
Installation
First, make sure you have the lfe-compile plugin as a dependency in your
project's
rebar.config or, better yet, in the the global rebar3 config,
~/.config/rebar3/rebar.config:
{plugins, [{'lfe-compile', ".*", {git, "git://github.com/lfe-rebar3/compile.git", {tag, "0.2.0"}}}]}
Then in your project's
rebar.config, include the provider pre-hook:
{provider_hooks, [{pre, [{compile, {lfe, compile}}]}]}
Finally, add Lodox to your
plugins list:
{plugins, [% ... {lodox, ".*", {git, "git://github.com/quasiquoting/lodox.git", {tag, "0.5.0"}}}]}.
The recommended place for the Lodox plugin entry is the global rebar3 config,
~/.config/rebar3/rebar.config,
but it works at the project level, too.
Usage
In order for Lodox to work, your project must first be compiled:
rebar3 compile
Then, to invoke Lodox, simply run:
rebar3 lodox
Alternatively, you can
do both at once:
rebar3 do compile, lodox
If all goes well, the output will look something like:
Generated lodox v0.5.0 docs in /path/to/lodox/doc
And, as promised, generated documentation will be in the
doc subdirectory of
your project.
Optionally, you can add Lodox as a
compile post-hook:
{provider_hooks, [{pre, [{compile, {lfe, compile}}]}, {post, [{compile, lodox}]}]}.
License
Lodox is licensed under the MIT License.
The MIT License (MIT) Copyright © 2015 Eric Bailey <quasiquoting.
Significant code and inspiration from Codox. Copyright © 2015 James Revees
Codox is distributed under the Eclipse Public License either version 1.0 or (at your option) any later version. | https://lodox.readthedocs.io/en/latest/ | 2022-01-16T22:40:50 | CC-MAIN-2022-05 | 1642320300244.42 | [] | lodox.readthedocs.io |
Draw Borders (Lines) for Report Controls
- 2 minutes to read
To draw borders for a control, locate the Borders property in the control’s Properties window. Then, select which borders to draw: Left, Right, Top, and/or Bottom.
The following example shows how to draw Top and Bottom borders for the XRLabel control:
The specified borders are displayed both in the Designer and Print Preview:
Use the BorderColor, BorderDashStyle, and BorderWidth properties to customize the border appearance.
Draw Borders for Tables
You can set borders for a table, its rows and its cells. The following example demonstrates a table in the Designer with Top and Bottom borders enabled:
The following image shows the layout of the above table in Print Preview:
If a table occupies the entire Detail band and the table’s top and bottom borders are enabled, the table borders are be duplicated in Print Preview. To avoid duplication, do the following:
- Select only a bottom table border.
- Create a group header band.
- Add a table header to the newly created band.
- Enable the header’s top and bottom borders.
Designer. The table occupies the entire Detail band. Both Top and Bottom borders of the table and the header are enabled.
Print Preview. The top and bottom borders of the table’s rows overlap and are duplicated.
Designer. The table occupies the entire Detail band. Only the Bottom border of the table is enabled.
Print Preview. Only the bottom border of the table is displayed. The top border of the table’s first row matches the bottom border of the header.
Draw Common Border for Multiple Controls
To draw a common border for several controls, place the controls inside a table with one cell, and then specify the table borders:
| https://docs.devexpress.com/XtraReports/402369/detailed-guide-to-devexpress-reporting/use-report-controls/manipulate-report-controls/draw-borders-for-report-controls | 2022-01-16T22:43:55 | CC-MAIN-2022-05 | 1642320300244.42 | [array(['/XtraReports/images/label-borders-example.png', None],
dtype=object)
array(['/XtraReports/images/label-borders-example-customized.png', None],
dtype=object)
array(['/XtraReports/images/table-borders-example-1.png', None],
dtype=object)
array(['/XtraReports/images/table-borders-example-1-preview.png', None],
dtype=object)
array(['/XtraReports/images/table-duplicated-borders-designer.png',
'**Designer**. The table occupies the entire **Detail** band. Both **Top** and **Bottom** borders of the table and the header are enabled.'],
dtype=object)
array(['/XtraReports/images/table-duplicated-borders-preview.png',
"**Print Preview**. The top and bottom borders of the table's rows overlap and are duplicated."],
dtype=object)
array(['/XtraReports/images/table-duplicated-borders-designer-bottom.png',
'**Designer**. The table occupies the entire **Detail** band. Only the **Bottom** border of the table is enabled.'],
dtype=object)
array(['/XtraReports/images/table-bottom-border.png',
"**Print Preview**. Only the bottom border of the table is displayed. The top border of the table's first row matches the bottom border of the header."],
dtype=object)
array(['/XtraReports/images/table-borders-example.png', None],
dtype=object) ] | docs.devexpress.com |
ngraph.gru_sequence¶
- ngraph.gru_sequence(X: Union[_pyngraph.Node, int, float, numpy.ndarray], initial_hidden_state: Union[_pyngraph.Node, int, float, numpy.ndarray], sequence_lengths: Union[_pyngraph.Node, int, float, numpy.ndarray], W: Union[_pyngraph.Node, int, float, numpy.ndarray], R: Union[_pyngraph.Node, int, float, numpy.ndarray], B: Union[_pyngraph.Node, int, float, numpy.ndarray], hidden_size: int, direction: str, activations: List[str] = None, activations_alpha: List[float] = None, activations_beta: List[float] = None, clip: float = 0.0, linear_before_reset: bool = False, name: Optional[str] = None) _pyngraph.Node ¶
Return a node which performs GRUSequence operation.
- Parameters
X – The input tensor. Shape: [batch_size, seq_length, input_size].
initial_hidden_state – The hidden state tensor. Shape: [batch_size, num_directions, hidden_size].
sequence_lengths – Specifies real sequence lengths for each batch element. Shape: [batch_size]. Integer type.
W – Tensor with weights for matrix multiplication operation with input portion of data. Shape: [num_directions, 3*hidden_size, input_size].
R – The tensor with weights for matrix multiplication operation with hidden state. Shape: [num_directions, 3*hidden_size, hidden_size].
B – The sum of biases (weight and recurrence). For linear_before_reset set True the shape is [num_directions, 4*hidden_size]. Otherwise the shape is [num_directions, 3*hidden_size].
hidden_size – Specifies hidden state size.
direction – Specifies if the RNN is forward, reverse, or bidirectional.
activations – The list of three activation functions for gates.
activations_alpha – The list of alpha parameters for activation functions.
activations_beta – The list of beta parameters for activation functions.
clip – Specifies bound values [-C, C] for tensor clipping performed before activations.
linear_before_reset – Flag denotes if the layer behaves according to the modification of GRU described in the formula in the ONNX documentation.
name – An optional name of the output node.
:return:: The new node represents GRUSequence. Node outputs count: 2. | https://docs.openvino.ai/latest/api/ngraph_python_api/_autosummary/ngraph.gru_sequence.html | 2022-01-16T21:49:45 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.openvino.ai |
Register and Activate Enterprise DLP on Prisma Access
Complete the task to register and activate DLP on Prisma Access.
DLP on Prisma Access enables you to secure remote networks and users, and requires an add-on license. You can either purchase a license or try the 60-day trial.
When you request a trial from the web interface, you must wait 24 hrs for the request to be processed. After the 60-day trial is approved, Palo Alto Networks lets you try the product for 60 days, along with a 30-day grace period to allow you to purchase the license. Palo Alto Networks deactivates DLP on Prisma Access 90 days after the start of the trial if you do not purchase a license.
When you purchase a license, all you need to do it activate it in this workflow. The welcome email that you receive when you purchase Enterprise DLP includes an auth code. Please disregard the auth code in the email. The auth code in the email is automatically processed for you, all you need to do is follow the instructions in this workflow.
To register and activate DLP on Prisma Access, complete the following steps.
If you have existing data patterns and data filtering profiles in a Prisma Access-specific device group (
Service_Conn_Device_Group,
Remote_Network_Device_Group, or
Mobile_User_Device_Group), the patterns and profiles will be removed after you register and activate DLP on Prisma Access.
- Check the minimum content version on the Panorama appliance on which you will install DLP on Prisma Access, and upgrade it if required.The minimum required content version is 8190.If you have DLP on Prisma Access enabled for more than one Prisma Access instance in a single Customer Service Portal (CSP) account, data filtering profiles are synchronized across all instances. This behavior can result in unexpected consequences; for example, the deletion of a custom data pattern or data filtering profile for one instance does not delete that pattern or profile for other instances in the CSP account. For this reason, Palo Alto Networks recommends that you move each Prisma Access instance to its own CSP account.
- Activate and install Prisma Access and configure your settings for the Prisma Access service infrastructure; then, configure your mobile users deployment, your remote networks deployment, or both, depending on your Prisma Access license.Skip this step if you’ve already configured Prisma Access.
- Perform the following pre-checks to make sure that your environment is ready to request Enterprise DLP on Prisma Access:
- Be sure that Panorama can access thedss.paloaltonetworks.comURL.Add this URL to the allow list on any security appliance that you use with the Panorama appliance. In addition, if your Panorama appliance uses a proxy server (), or if you use SSL forward proxy with Prisma Access, be sure to addPanoramaSetupServiceProxy Serverdss.paloaltonetworks.comto the allow list on the proxy server.
- If you are using the same parent device group for on-premise firewalls and Prisma Access firewalls, and would like to use the parent device group to configure security policy rules, open a command-line interface (CLI) session in Prisma Access and enter therequest plugins cloud_services prisma-access dlp-enable-config-in-sharedcommand. This command makes a copy of the data filtering profile in theShareddevice group that can be read by the on-premise firewalls.If you do not enter this command, you cannot refer to the data filtering profiles with Enterprise DLP in non-Prisma Access device groups, because the Enterprise DLP data filtering profiles are only available in the Prisma Access device group.
- Selectand verify that thePanoramaAdministrators__cloud_servicesuser is present.
- Log in to Prisma Access and select.PanoramaCloud ServicesConfigurationService Setup
- In theService Operationsarea, selectActivate Enterprise DLP or Request a Trial.If you have purchased an add-on Enterprise DLP license, when you click the link the Enterprise DLP capabilities are ready for use. Please disregard the auth code in the welcome email you received with your purchase. The auth code in the email is automatically processed for you.A page displays indicating that your existing data filtering settings will be removed after your DLP on Prisma Access request is approved.After you register and active DLP on Prisma Access, the Cloud Services plugin enables DLP-specific features in the following areas in Panorama.If you have any existing data patterns, they will be removed when you register and activate the DLP on Prisma Access.
- —Allows you to specify global settings for data filtering based on latency, file size, and logging for files that are not scanned.DeviceData Filtering Settings
- —Specifies patterns that you use with the data filtering profile.ObjectsCustom ObjectsData Patterns
- —Adds a data pattern to a data filtering profile and specify additional parameters to send an alert or block action for files that match the patterns you specify.ObjectsSecurity ProfilesData Filtering
- —Adds a customizable page that displays to users when Prisma Access blocks a file using a DLP-based security policy.DeviceResponse PagesData Filtering Block Page
- For a trial, selectYesto request DLP on Prisma Access.A page displays indicating that your request was received and is being evaluated. Do not open a case during this evaluation period.
- Wait 24-48 hours; then selectand reselectPanoramaCloud ServicesConfigurationService SetupActivate Enterprise DLP or Request a Trialto see the results of your request.
- If the DLP on Prisma Access request was approved, a pop-up window displays indicating that Enterprise DLP has been activated and the Panorama appliance displays a banner indicating that DLP configuration has changed and a push is required. If you see this page and banner,CommitandPushyour changes, then enable DLP on Prisma Access.
- If you receive a page that indicates that your request was received and is being evaluated, either your request is still being processed or it wasn’t approved; you can retry the request in 24 hours to see its status. Do not open a case when this request is being evaluated.
- If you receive a message thatEnterprise DLP activation was unsuccessful, the request is approved, but Prisma Access has not yet provisioned the infrastructure. If you see this message, open a support case on the Customer Service Portal (CSP).
Recommended For You
Recommended Videos
Recommended videos not found. | https://docs.paloaltonetworks.com/prisma/prisma-access/preferred/2-0/prisma-access-panorama-admin/data-loss-prevention-on-prisma-access/register-and-activate-dlp-on-prisma-access.html | 2022-01-16T21:18:05 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.paloaltonetworks.com |
Interface patterns give you an opportunity to explore different interface designs. Be sure to check out How to Adapt a Pattern for Your Application.
Use this pattern to keep grids with rows containing varying text lengths looking clean and uniform. This design also improves readability and the user experience by limiting the amount of text in the interface. grids, more-less link pattern onto your interface, 78 lines of expressions will be added to the section where you dragged it.
The local variables at the top of the expression are used to define the data that will be displayed in each row of the grid.
This section opens up the grid and the Course grid column. The course column contains a rich text dynamic link for the course name.
This section contains the description column, the more-less link, and the
if() statement that defines the logic needed for the more-less link to work. The more-less is a rich text dynamic link that gives users the options to display the full description.
The
if() statement that defines the more-less link logic spans from lines
34 to
69. The logic is simple; if the "More" link has been clicked, it will show the full description and a rich text "Less" link. If the "More" link has not been clicked, it will only show 200 characters of the description along with a rich text "More" link.
This design allows you to keep grids with rows containing varying text lengths looking clean and uniform.
On This Page | https://docs.appian.com/suite/help/21.2/more-less.html | 2022-01-16T22:21:33 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.appian.com |
:
- Load a single partition
- As an optimization, you may sometimes directly load the partition of data you are interested in. For example,
spark.read.parquet("/data/date=2017-01-01"). This is unnecessary with Delta Lake, since it can quickly scan the list of files to find the list of relevant ones. If you are interested in a single partition, specify it using a
WHEREclause. For example,
spark.read.parquet("/data").where("date = '2017-01-01'").
When you port an existing application to Delta Lake, you should avoid the following operations, which bypass the transaction log:
- Manually modify data
- Delta Lake uses the transaction log to atomically commit changes to the table. Because the log is the source of truth, files that are written out but not added to the transaction log are not read by Spark. Similarly, even if you manually delete a file, a pointer to the file is still present in the transaction log.
- External readers
- The data stored in Delta Lake is encoded as Parquet files. However, accessing these files using an external reader is not safe. | https://docs.delta.io/0.2.0/porting.html | 2022-01-16T21:34:55 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.delta.io |
Machine Learning¶
We have recently added a new tool (currently in beta) called logscan which utilizes machine learning models to detect anomalies in logs generated by Security Onion components.
Warning
Current and future ML components have dependencies that require special consideration to be made in regards to hardware or VM configurations prior to installation. Namely, a CPU/vCPU with AVX support is required, with AVX2 support recommended for better performance.
Listing components¶
To list all available ML components:
sudo so-learn list
Note
Currently logscan is the only ML component available. (Initially unavailable on air gapped installations. See warning below for more info.)
Enabling components¶
To enable an ML component:
sudo so-learn enable <component> # --apply to immediately apply your changes
Disabling components¶
To disable an ML component:
sudo so-learn disable <component> # --apply to immediately apply your changes
Logscan¶
Warning
Logscan will initially be unavailable on air gapped installations, therefore a networked installation is required to make use of the tool during this beta stage.
Logscan is log agnostic, but in its current implementation only scans logs from the built-in auth provider Kratos.
Important Files and Directories¶
- App log:
/opt/so/log/logscan/app.log
- Alerts log:
/opt/so/log/logscan/alerts.log
- Data:
/nsm/logscan/data
Models¶
Logscan uses the following models to detect anomalous login activity on Security Onion Console:
- K1: Searches for high numbers of login attempts from single IPs in a 1 minute window
- K5: Searches for high ratios of login failures from single IPs in a 5 minute window
- K60: Searches for abnormal patterns of login failures from all IPs seen within a 1 hour window | https://docs.securityonion.net/en/2.3/machine-learning.html | 2022-01-16T21:20:20 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.securityonion.net |
etcd server 🔗
Description 🔗
The Splunk Distribution of OpenTelemetry Collector provides this integration as the
etcd monitor via the Smart Agent receiver.
This monitor reports etcd server metrics under the
/metrics path on its client port and optionally on locations given by
--listen-metrics-urls. an etcd monitor entry in your Smart Agent or Collector configuration is required for its use. Use the appropriate form for your agent type.
Smart Agent 🔗
To activate this monitor in the Smart Agent, add the following to your agent configuration. Note that this configuration assumes that the client certificate and key are accessible by the Smart Agent in the specified path:
monitors: # All monitor config goes under this key - type: etcd ... # Additional config
See Smart Agent example configuration for an autogenerated example of a YAML configuration file, with default values where applicable.
Splunk Distribution of OpenTelemetry Collector 🔗
To activate this monitor in the OpenTelemetry Collector, add the following to your agent configuration:
receivers: smartagent/etcd: type: etcd ... # Additional config
To complete the monitor activation, you must also include the
smartagent/etcd receiver item in a
metrics pipeline. To do this, add the receiver item to the
service >
pipelines >
metrics >
receivers section of your configuration file.
See configuration examples for specific use cases that show how the Splunk OpenTelemetry Collector can integrate and complement existing environments.
Example configurations 🔗
The following is an example configuration for this monitor:
monitors: - type: etcd discoveryRule: kubernetes_pod_name =~ "etcd" && target == "pod" port: 2379 useHTTPS: true skipVerify: true sendAllMetrics: true clientCertPath: /var/lib/minikube/certs/etcd/server.crt clientKeyPath: /var/lib/minikube/certs/etcd/server.key extraDimensions: metric_source: etcd
The following table shows the configuration options for this monitor:. | https://docs.splunk.com/observability/gdi/etcd/etcd.html | 2022-01-16T23:19:47 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.splunk.com |
Product Masters
Contents
What is a Product Master
A product master is a logical grouping of products to which multiple products can be allocated.
A contract will be set up per product master, allowing clients to be billed for any of the products allocated to this product master.
Product Master Attributes
Name
The product master name is reflected on invoices generated.
Code
This code forms part of the contract number.
Description
Give this product master a friendly description so that your users can easily identify which type of products are allocated to it. This will not be displayed to your clients.
Product Grouping
Product groupings associate products under a product master and/or product group with each other. What does this allow you to do? When it’s for display purposes only, all products associated by a group will be displayed under the group name on the invoice and a subtotal given for the group’s line items. This is handy for example you would like to group all ‘International calls’ together for VoIP. When it’s not for display purposes only, and associated products have been set up with a billing principle of maximum, average or delta, and multiple records’ parameters match up (client ID, Record ID and GUID) Varibill can safely assume usage fluctuations between products are due to up- and/or downgrades during a billing period. This is handy for example to accurately bill for up-or downgrades of mailboxes.
A Product Master must be set up to make use of product groupings. This can be done by selecting the Product Grouping option on the Product Master Form (Create / Edit).
The options available are:
- All products are associated in Product Master – This means that all products under the product master are by default associated and groupings don’t need to be defined. I.e. no name will be displayed on the invoice with a subtotal for the product group, but the as-is subtotal of the product master will be displayed.
- All products in Product Master are associated by groupings. Product group names need to be defined and all products under the product master must be linked to a product group.
- No products are associated in Product Master (This is the default). This is the current state of affairs for all products and no association assumed for any of these products.
- Some products are associated by groupings. Product group names need to be defined and some, but not all products under the product master can be linked to a product group.
Once you have selected either ‘All products in Product Master are associated by groupings’ or ‘Some products are associated by groupings’ you would need to specify the grouping names and whether it’s for display purposes only.
A unique system generated code will be allocated to each product group.
Once you have created the product groupings for the product master, products allocated to the product master needs to be allocated to a product group by using the product form (create / edit).
Viewing Your Product Masters
The list of Product Masters displayed is a list of all the Product Masters created. Use the icons next to each record to edit or delete the record. Read [more] on the various buttons, icons and options available. | https://docs.varibill.com/Product_Masters | 2022-01-16T23:06:17 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.varibill.com |
RemoveRoleFromDBCluster
Removes the asssociation of an AWS Identity and Access Management (IAM) role from a DB cluster.
For more information on Amazon Aurora DB clusters, see What is Amazon Aurora? in the Amazon Aurora User Guide.
For more information on Multi-AZ DB clusters, see Multi-AZ deployments with two readable standby DB instances in the Amazon RDS User Guide.
The Multi-AZ DB clusters feature is in preview and is subject to change.
Request Parameters
For information about the parameters that are common to all actions, see Common Parameters.
- DBClusterIdentifier
The name of the DB cluster to disassociate the IAM role from.
Type: String
Required: Yes
- FeatureName
The name of the feature for the DB cluster that the IAM role is to be disassociated from. For information about supported feature names, see DBEngineVersion.
Type: String
Required: No
- RoleArn
The Amazon Resource Name (ARN) of the IAM role to disassociateRoleNotFound
The specified IAM role Amazon Resource Name (ARN) isn't associated with the specified DB cluster.
HTTP Status Code: 404
- InvalidDBClusterStateFault
The requested operation can't be performed while the cluster is in this state.
HTTP Status Code: 400
Examples
Example
This example illustrates one usage of RemoveRoleFromDBCluster.
Sample Request ?Action=RemoveRoleFromDB25Z &X-Amz-SignedHeaders=content-type;host;user-agent;x-amz-content-sha256;x-amz-date &X-Amz-Signature=cd7d5005d56a505b4e2a878c297e6f8a3cc26b19a335ede018ba41f3185c92a2
Sample Response
<RemoveRoleFromDBClusterResponse xmlns=""> <ResponseMetadata> <RequestId>ccfca75a-90bc-11e6-8533-cd6377e421f8</RequestId> </ResponseMetadata> </RemoveRoleFromDBClusterResponse>
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following: | https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RemoveRoleFromDBCluster.html | 2022-01-16T23:46:04 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.aws.amazon.com |
OpenFOAM
OpenFOAM is a free, open source CFD software package developed by OpenCFD Ltd at ESI Group and distributed by the OpenFOAM Foundation.
OpenFOAM base installs are available to use directly to run OpenFOAM or indirectly to build your own OpenFOAM applications.
To determine the available versions of OpenFOAM installed use
$ vpkg_versions openfoam Available versions in package (* = default version): [/opt/shared/valet/2.0.1/etc/openfoam.vpkg_json] openfoam OpenFOAM is a free, open source CFD software package 2.1 alias to openfoam/2.1.1 2.1.1 Version 2.1.1 with OpenMPI(1.8.2), boost(1.56), cgal(4.4), scotch(6.0), paraview(4.1), and GCC(4.8) compilers 2.2 alias to openfoam/2.2.2 2.2.2 Version 2.2.2 with OpenMPI(1.8.2), boost(1.56), cgal(4.4), scotch(6.0), paraview(4.1), and GCC(4.8) compilers 2.3 alias to openfoam/2.3.0 * 2.3.0 Version 2.3.0 with OpenMPI(1.8.2), boost(1.56), cgal(4.4), scotch(6.0), paraview(4.1), and GCC(4.8) compilers 2.4 alias to openfoam/2.4.0 2.4.0 Version 2.4.0 with OpenMPI(1.8.2), boost(1.56), cgal(4.4), scotch(6.0), paraview(4.1), and GCC(4.8) compilers 4.1 Version 4.1 with OpenMPI(2.0.2), boost(1.55), CGAL(4.8), scotch(6.0), and GCC (4.9) compilers
Each version of OpenFOAM has several “platforms” built:
- double-precision floating point; compiler optimizations
- double-precision floating point; debugging enabled
- single-precision floating point; compiler optimizations
- single-precision floating point; debugging enabled
Use
vpkg_require to select the version of OpenFOAM. The specific platform can be specified by changing the following environment variables' values before sourcing
$FOAM_INST_DIR/etc/bashrc.
WM_PRECISION_OPTION=(DP|SP) WM_COMPILE_OPTION=(Opt|Debug)
If you do not specify these environment variables' the default is double-precision and optimized platform. For example, if you need a single precision and debug platform for OpenFOAM 2.2.2 for the Intel compiler, use the following
WM_PRECISION_OPTION=SP WM_COMPILE_OPTION=Debug vpkg_require openfoam/2.2.2-intel source $FOAM_INS_DIR/etc/bashrc
Tutorials
The OpenFOAM Foundation tutorials can be followed by using
qlogin to work directly from a compute node. You only need to copy the tutorials once. For example, for
traine on Mills, this process might look like this:
[traine@mills ~]$ workgroup [(it_css:traine)@mills ~]$ qlogin Your job 407698 ("QLOGIN") has been submitted waiting for interactive job to be scheduled ... Your interactive job 407698 has been successfully scheduled. Establishing /opt/shared/OpenGridScheduler/local/qlogin_ssh session to host n017 ... Last login: Tue Oct 15 11:31:06 2013 from mills.mills.hpc.udel.edu [traine@n017 ~]$ cd /archive/it_css/traine [traine@n017 traine]$ mkdir OF [traine@n017 traine]$ cd OF [traine@n017 OF]$ vpkg_require openfoam Adding package `openfoam/2.1.1-gcc` to your environment [traine@n017 OF]$ source $FOAM_INST_DIR/etc/bashrc [traine@n017 OF]$ echo $FOAM_RUN /run [traine@n017 OF]$ mkdir -p .$FOAM_RUN [traine@n017 OF]$ cp -r $FOAM_TUTORIALS .$FOAM_RUN
.) before the environment variable
$FOAM_RUNin the above example. The OpenFOAM Foundation documentation doesn't show this. The
.is necessary in order to specify the run directory to be created in the current directory, otherwise you will get an error since
$FOAM_DIRis defined at
/run.
mills.hpc, user
trainein workgroup
it_css). However, same commands are also applicable for cluster
farber
Installing an OpenFOAM application
Installing OpenFOAM requires extensive storage and time if it needs to be done for each application. Instead use the existing base versions of OpenFOAM to install your own OpenFOAM application. | https://docs.hpc.udel.edu/software/openfoam/openfoam | 2022-01-16T21:42:07 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.hpc.udel.edu |
A newer version of this page is available. Switch to the current version.
Messenger Class
Allows you to send messages and register handlers that will process these messages.
Namespace: DevExpress.Mvvm
Assembly: DevExpress.Mvvm.v20.2.dll
Declaration
Remarks
See Messenger to learn more.
Related GitHub Examples
The following code snippets (auto-collected from DevExpress Examples) contain references to the Messenger class.
Note
The algorithm used to collect these code examples remains a work in progress. Accordingly, the links and snippets below may produce inaccurate results. If you encounter an issue with code examples below, please use the feedback form on this page to report the issue.
Implements
See Also
Feedback | https://docs.devexpress.com/CoreLibraries/DevExpress.Mvvm.Messenger?v=20.2 | 2022-01-16T22:48:38 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.devexpress.com |
VolatileDependenciesPart Properties VolatileDependencies
VolatileDependenciesPart Class
DocumentFormat.OpenXml.Packaging Namespace | https://docs.microsoft.com/en-us/previous-versions/office/developer/office-2007/cc562240(v=office.12) | 2022-01-16T23:53:56 | CC-MAIN-2022-05 | 1642320300244.42 | [array(['images/cc562719.pubproperty(en-us,office.12',
'Public property Public property'], dtype=object)
array(['images/cc562719.pubproperty(en-us,office.12',
'Public property Public property'], dtype=object)
array(['images/cc562719.pubproperty(en-us,office.12',
'Public property Public property'], dtype=object)
array(['images/cc562719.pubproperty(en-us,office.12',
'Public property Public property'], dtype=object)
array(['images/cc562719.pubproperty(en-us,office.12',
'Public property Public property'], dtype=object)
array(['images/cc562719.protproperty(en-us,office.12',
'Protected property Protected property'], dtype=object)
array(['images/cc562719.protproperty(en-us,office.12',
'Protected property Protected property'], dtype=object)
array(['images/cc562719.protproperty(en-us,office.12',
'Protected property Protected property'], dtype=object)
array(['images/cc562719.pubproperty(en-us,office.12',
'Public property Public property'], dtype=object)] | docs.microsoft.com |
Managing Credentials
Credential audits are a key part of any pentest. They enable you to identify weak passwords, commonly used passwords, and top base passwords so you can try to use them to compromise additional targets.
During a credentials audit, you collect sensitive data from your targets and store them in your project. From the Pro Console, you can add, delete, and export credential data. The following sections will show you how to manage credential data within a project.
Available Credentials
To see a list of all available options for the creds command, type
creds help into your console.
Credential types
Metasploit allows you to add certain credentials you collect during a pen test to your project. You can add the following credential types:
- User - The username.
- NTLM Hash - The Windows NTLM authentication hash.
- Password - The plain text password.
- Postgres - The MD5 hash of a Postgres database.
- SSH key - The collected SSH key. It must be a file path.
- Hash - A non-replayable hash
- JTR - A John the Ripper hash.
- REALM - A collection of usernames and passwords allowed access to a certain part of a web application.
Add credentials to a project
To add a credential, the enter a command using the following template:
creds <cred-type> <arguments>
For example to add a NTLM hash with a user:
creds add user:admin ntlm:E2FC15074BF7751DD408E6B105741864:A1074A69B1BDE45403AB680504BBDD1
To add a NTLM hash without a user:
creds add ntlm:E2FC15074BF7751DD408E6B105741864:A1074A69B1BDE45403AB680504BBDD1A
Other Credential Commands
creds -h- View all credential options. Includes examples on filtering, deleting, and adding credentials.
creds- Returns a list of all credentials in a project.
creds -p <filter>- Filter by passwords that match this text.
creds -t ntlm- Returns all creds matching NTLM. | https://docs.rapid7.com/metasploit/managing-credentials-console/ | 2022-01-16T21:30:55 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.rapid7.com |
Authenticate playbooks to Microsoft Sentinel
Note
Azure Sentinel is now called Microsoft Sentinel, and we’ll be updating these pages in the coming weeks. Learn more about recent Microsoft security enhancements.
The way Logic Apps works, it has to connect separately and authenticate independently to every resource of every type that it interacts with, including to Microsoft Sentinel itself. Logic Apps uses specialized connectors for this purpose, with each resource type having its own connector. This document explains the types of connection and authentication in the Logic Apps Microsoft Sentinel connector, that playbooks can use to interact with Microsoft Sentinel in order to have access to the information in your workspace's tables.
This document, along with our guide to using triggers and actions in playbooks, is a companion to our other playbook documentation - Tutorial: Use playbooks with automation rules in Microsoft Sentinel.
For an introduction to playbooks, see Automate threat response with playbooks in Microsoft Sentinel.
For the complete specification of the Microsoft Sentinel connector, see the Logic Apps connector documentation.
Authentication
The Microsoft Sentinel connector in Logic Apps, and its component triggers and actions, can operate on behalf of any identity that has the necessary permissions (read and/or write) on the relevant workspace. The connector supports multiple identity types:
Permissions required
Learn more about permissions in Microsoft Sentinel.
Authenticate with managed identity
This authentication method allows you to give permissions directly to the playbook (a Logic App workflow resource), so that Microsoft Sentinel connector actions taken by the playbook will operate on the playbook's behalf, as if it were an independent object with its own permissions on Microsoft Sentinel. Using this method lowers the number of identities you have to manage.
Note
To give a managed identity access to other resources (like your Microsoft Sentinel workspace), your signed-in user must have a role with permissions to write role assignments, such as Owner or User Access Administrator of the Microsoft Sentinel workspace.
To authenticate with managed identity:
Enable managed identity on the Logic Apps workflow resource. To summarize:
On the logic app menu, under Settings, select Identity. Select System assigned > On > Save. When Azure prompts you to confirm, select Yes.
Your logic app can now use the system-assigned identity, which is registered with Azure AD and is represented by an object ID.
Give that identity access to the Microsoft Sentinel workspace:
From the Microsoft Sentinel menu, select Settings.
Select the Workspace settings tab. From the workspace menu, select Access control (IAM).
From the button bar at the top, select Add and choose Add role assignment. If the Add role assignment option is disabled, you don't have permissions to assign roles.
In the new panel that appears, assign the appropriate role:
Learn more about the available roles in Microsoft Sentinel.
Under Assign access to, choose Logic App.
Choose the subscription the playbook belongs to, and select the playbook name.
Select Save.
Enable the managed identity authentication method in the Microsoft Sentinel Logic Apps connector:
In the Logic Apps designer, add a Microsoft Sentinel Logic Apps connector step. If the connector is already enabled for an existing connection, click the Change connection link.
In the resulting list of connections, select Add new at the bottom.
Create a new connection by selecting Connect with managed identity (preview).
Fill in a name for this connection, select System-assigned managed identity and select Create.
Authenticate as an Azure AD user
To make a connection, select Sign in. You will be prompted to provide your account information. Once you have done so, follow the remaining instructions on the screen to create a connection.
Authenticate as a service principal (Azure AD application)
Service principals can be created by registering an Azure AD application. It is preferable to use a registered application as the connector's identity, instead of using a user account, as you will be better able to control permissions, manage credentials, and enable certain limitations on the use of the connector.
To use your own application with the Microsoft Sentinel connector, perform the following steps:
Register the application with Azure AD and create a service principal. Learn how.
Get credentials (for future authentication).
In the registered application blade, get the application credentials for signing in:
- Client ID: under Overview
- Client secret: under Certificates & secrets.
Grant permissions to the Microsoft Sentinel workspace.
In this step, the app will get permission to work with Microsoft Sentinel workspace.
In the Microsoft Sentinel workspace, go to Settings -> Workspace Settings -> Access control (IAM)
Select Add role assignment.
Select the role you wish to assign to the application. For example, to allow the application to perform actions that will make changes in the Sentinel workspace, like updating an incident, select the Microsoft Sentinel Contributor role. For actions which only read data, the Microsoft Sentinel Reader role is sufficient. Learn more about the available roles in Microsoft Sentinel.
Find the required application and save. By default, Azure AD applications aren't displayed in the available options. To find your application, search for the name and select it.
Authenticate
In this step we use the app credentials to authenticate to the Sentinel connector in Logic Apps.
Select Connect with Service Principal.
Fill in the required parameters (can be found in the registered application blade)
- Tenant: under Overview
- Client ID: under Overview
- Client Secret: under Certificates & secrets
Manage your API connections
Every time an authentication is created for the first time, a new Azure resource of type API Connection is created. The same API connection can be used in all the Microsoft Sentinel actions and triggers in the same Resource Group.
All the API connections can be found in the API connections blade (search for API connections in the Azure portal).
You can also find them by going to the Resources blade and filtering the display by type API Connection. This way allows you to select multiple connections for bulk operations.
In order to change the authorization of an existing connection, enter the connection resource, and select Edit API connection.
Next steps
In this article, you learned about the different methods of authenticating a Logic Apps-based playbook to Microsoft Sentinel.
- Learn more about how to use triggers and actions in playbooks. | https://docs.microsoft.com/en-us/azure/sentinel/authenticate-playbooks-to-sentinel | 2022-01-16T23:33:14 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.microsoft.com |
Table of Contents
Under its minimalist appearance, rekonq packs a full set of powerful features. Some of them are described below:
rekonq is designed with the aim of being a KDE browser. And it shows this.
It obeys your themes, fonts, window decoration, menu highlighting, and many personalization options you set for your desktop.
rekonq opens a PDF file in an Okular kpart).
rekonq shares bookmarks etc. with Konqueror).
rekonq + Akregator
rekonq + KGet | https://docs.kde.org/trunk4/en/extragear-network/rekonq/features.html | 2017-07-20T14:49:58 | CC-MAIN-2017-30 | 1500549423222.65 | [array(['/trunk4/common/top-kde.jpg', None], dtype=object)
array(['Rekonq-okularkpart.png',
'rekonq opens a PDF file in an Okular kpart'], dtype=object)
array(['Bookmarkseditor.png',
'rekonq shares bookmarks etc. with Konqueror'], dtype=object)
array(['Rekonq-akregator.png', 'rekonq + Akregator'], dtype=object)
array(['Rekonq-kget.png', 'rekonq + KGet'], dtype=object)] | docs.kde.org |
The goal navigator guides you through high-level goals that you might want to accomplish in vRealize Automation.
The goals you can achieve depend on your role. To complete each goal, you must complete a sequence of steps that are presented on separate pages in the vRealize Automation console.
The goal navigator can answer the following questions:
Where do I start?
What are all the steps I need to complete to achieve a goal?
What are the prerequisites for completing a particular task?
Why do I need to do this step and how does this step help me achieve my goal?
The goal navigator is hidden by default. You can expand the goal navigator by clicking the icon on the left side of the screen.
After you select a goal, you navigate between the pages needed to accomplish the goal by clicking each step. The goal navigator does not validate that you completed a step, or force you to complete steps in a particular order. The steps are listed in the recommended sequence. You can return to each goal as many times as needed.
For each step, the goal navigator provides a description of the task you need to perform on the corresponding page. The goal navigator does not provide detailed information such as how to complete the forms on a page. You can hide the page information or move it to a more convenient position on the page. If you hide the page information, you can display it again by clicking the information icon on the goal navigator panel. | https://docs.vmware.com/en/vRealize-Automation/7.2/com.vmware.vra.concepts.doc/GUID-4EBA07FB-0353-4C16-90C5-551AEACE32CE.html | 2017-07-20T14:54:27 | CC-MAIN-2017-30 | 1500549423222.65 | [] | docs.vmware.com |
Facilitates the keying of grid parameters for a spatial index.
||
|-|
|Applies to: SQL Server ( SQL Server 2012 through current version).|
Syntax
sp_help_spatial_geography_histogram [ @tabname =] 'tabname' [ , [ @colname = ] 'columnname' ] [ , [ @resolution = ] 'resolution' ] [ , [ @sample = ] 'tablesample' ]
Arguments
[ @tabname =] 'tabname'.
[ @colname = ] 'columnname'
Is the name of the spatial column specified. columnname is a sysname, with no default.
[ @resolution = ] 'resolution'
Is the resolution of the bounding box. Valid values are from 10 to 5000. resolution is a tinyint, with no default.
[ @sample = ] 'sample'
Is the percentage of the table that is used. Valid values are from 0 to 100. tablesample is a float. Default value is 100.
Property Value/Return Value
A table value is returned. The following grid describes the column contents of the table.
Permissions
User must be a member of the public role. Requires READ ACCESS permission on the server and the object.
Remarks
SSMS spatial tab shows a graphical representation of the results. You can query the results against the spatial window to get an approximate number of result items.
Note
Objects in the table may cover more than one cell, so the sum of the cells in the table may be larger than the number of actual objects.
The bounding box for the geography type is the entire globe.
Examples
The following example calls sp_help_spatial_geography_histogram on the
Person.Address table in the AdventureWorks2012 database.
EXEC sp_help_spatial_geography_histogram @tabname = Person.Address, @colname = SpatialLocation, @resolution = 64, @sample = 30;
See Also
Spatial Index Stored Procedures (Transact-SQL) | https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-help-spatial-geography-histogram-transact-sql | 2017-07-20T15:10:24 | CC-MAIN-2017-30 | 1500549423222.65 | [array(['../../includes/media/yes.png', 'yes'], dtype=object)
array(['../../includes/media/no.png', 'no'], dtype=object)
array(['../../includes/media/no.png', 'no'], dtype=object)
array(['../../includes/media/no.png', 'no'], dtype=object)] | docs.microsoft.com |
Different selection modes are supported in RadGrid. You can choose between cell and row, single and multiple selection. Same as other features, this one is easily
set, using a single property—selectable.
Below you can see the values accepted by the selectable properties and their meaning:
Value
Meaning
none
Selection is disabled.
cell
A single cell can be selected in the grid.
row
A single row can be selected in the grid.
multiple, cell
Selection of multiple cells is allowed. On touch devices it is performed by holding the grip (highlighted angle
of the first selected cell) and dragging. On devices using mouse and keyboard, the same approach is supported using the mouse; or instead of holding
the grip, you can press and hold the Ctrl key.
multiple, row
Selection of multiple rows is allowed. On touch devices it is performed by holding one of the grips (highlighted
angles of the first selected row) and dragging. On devices using mouse and keyboard, the same approach is supported using the mouse; or instead of holding
the grip, you can press and hold the Ctrl key.
Note that as of Q2 2013, when a cell or row is put in edit mode, selection is automatically disabled and all selected items are cleared. Once the
cell editor is closed selection is turned back on without restoring previously selected items.
To see how the selection modes work, you can run the following code:
<div id="selectionOptions">
<input id="radio1" type="radio" value="cell" name="selection" checked="checked" /><label for="radio1">cell</label>
<input id="radio2" type="radio" value="row" name="selection" /><label for="radio2">row</label>
<input id="radio3" type="radio" value="multiple, cell" name="selection" /><label for="radio3">multiple, cell</label>
<input id="radio4" type="radio" value="multiple, row" name="selection" /><label for="radio4">multiple, row</label>
</div>
<div id="grid1" data-win-control="Telerik.UI.RadGrid" data-win-options="{
dataSource: {
data: [
{Name: 'Lily', ID: 1},
{Name: 'Kate', ID: 2},
{Name: 'Dan', ID: 3},
{Name: 'Mark', ID: 4}
],
},
selectable: 'cell'
}">
</div>
var radioButtons = selectionOptions.getElementsByTagName("input");
for (var i = 0; i < radioButtons.length; i++) {
radioButtons[i].addEventListener("click", function (e) {
var mode = e.target.value;
var grid = grid1.winControl;
grid.selectable = mode;
grid.refresh();
});
}
The result will look like this: | http://docs.telerik.com/help/windows-8-html/grid-selection-configuration.html | 2017-07-20T15:20:42 | CC-MAIN-2017-30 | 1500549423222.65 | [] | docs.telerik.com |
Sessions
Maintaining.
Use the following features to optimize the reliability of sessions, reduce inconvenience, downtime, and loss of productivity; using these features, mobile users can roam quickly and easily between devices.
The Logon interval section describes how to change the default setting.
You can also log a user off of a session, disconnect a session, and configure session prelaunch and linger; see the Manage Delivery Groups article.
Session reliabilitySession reliability:
-.
Auto Client ReconnectAuto Client Reconnect. Enables or disables automatic reconnection by Citrix Receiver Receiver.
- Receiver or the plug-in submits incorrect authentication information, which might occur during an attack or the server determines that too much time has elapsed since it detected the broken connection.
ICA Keep-AliveICA Keep-Alive:..
Session roamingSession roaming.
-Configure session roaming:
-sEffects from other settings.
Connection leasing and session roamingConnection leasing and session roaming.
Logon intervalLogon interval. | https://docs.citrix.com/en-us/xenapp-and-xendesktop/7-15-ltsr/manage-deployment/sessions.html | 2019-05-19T15:40:40 | CC-MAIN-2019-22 | 1558232254889.43 | [] | docs.citrix.com |
You can use Google Forms with Ghost as a free, embedded contact form to collect information from readers.
Whether you just want to give people a way to contact you, or you're running a full blown survey which you need to collect data for - Google Forms is a great way to support contact forms in Ghost. Getting a working form on your site should take just a few minutes once you've finished making your form.
To get started, head over to Google Forms and make a new form.
Create a new form and send
Once you've customised the form to work how you'd like, click on the send button.
Get the embed code from the send window
From within the send window, select the Send via
< > tab, and copy the code
Within the Ghost editor, add an HTML card
You can paste the form code within an HTML card on any page to embed your form
>: | https://docs.ghost.org/integrations/google-forms/ | 2019-05-19T14:51:03 | CC-MAIN-2019-22 | 1558232254889.43 | [array(['https://docs.ghost.io/content/images/2018/09/image-1.png', None],
dtype=object)
array(['https://docs.ghost.io/content/images/2018/09/image-2.png', None],
dtype=object)
array(['https://docs.ghost.io/content/images/2018/09/image-3.png', None],
dtype=object) ] | docs.ghost.org |
Protection features in Azure Information Protection rolling out to existing Office 365 tenants
To help with the initial step in protecting your information, starting July 2018 all Azure Information Protection eligible tenants will have the protection features in Azure Information Protection turned on by default. The protection features in Azure Information Protection were formerly known in Office 365 as Rights Management or Azure RMS. If your organization has an Office E3 service plan or a higher service plan you will now get a head start protecting information through Azure Information Protection when we roll out these features.
Changes beginning July 1, 2018
Starting July 1, 2018, Microsoft will enable the protection capability in Azure Information Protection for all Office 365 tenants who have one of the following subscription plans:
Office 365 Message Encryption is offered as part of Office 365 E3 and E5, Microsoft E3 and E5, Office 365 A1, A3, and A5, and Office 365 G3 and G5. You.
Tenant administrators can check the protection status in the Office 365 administrator portal.
Why are we making this change?
Office 365 Message Encryption leverages the protection capabilities in Azure Information Protection. At the heart of the recent improvements to Office 365 Message Encryption and our broader investments to information protection in Microsoft 365, we are making it easier for organizations to turn on and use our protection capabilities, as historically, encryption technologies have been difficult to set up. By turning on the protection features in Azure Information Protection by default, you can quickly get started to protect your sensitive data.
Does this impact me?
If your Office 365 organization has purchased an eligible Office 365 license, then your tenant will be impacted by this change.
IMPORTANT! If you're using Active Directory Rights Management Services (AD RMS) in your on-premises environment, you must either opt-out of this change immediately or migrate to Azure Information Protection before we roll out this change within the next 30 days. For information on how to opt-out, see "I use AD RMS, how do I opt out?" later in this article. If you prefer to migrate, see Migrating from AD RMS to Azure Information Protection.
Can I use Azure Information Protection with Active Directory Rights Management Services (AD RMS)?
No. This is not a supported deployment scenario. Without taking the additional opt-out steps, some computers might automatically start using the Azure Rights Management service and also connect to your AD RMS cluster. This scenario isn't supported and has unreliable results, so it's important that you opt out of this change within the next 30 days before we roll out these new features. For information on how to opt-out, see "I use AD RMS, how do I opt out?" later in this article. If you prefer to migrate, see Migrating from AD RMS to Azure Information Protection.
How do I know if I'm using AD RMS?
Use these instructions from Preparing the environment for Azure Rights Management when you also have Active Directory Rights Management Services (AD RMS) to check if you have deployed AD RMS:
- Although optional, most AD RMS deployments publish the service connection point (SCP) to Active Directory so that domain computers can discover the AD RMS cluster.
Use ADSI Edit to see whether you have an SCP published in Active Directory: CN=Configuration [server name], CN=Services, CN=RightsManagementServices, CN=SCP
- If you are not using an SCP, Windows computers that connect to an AD RMS cluster must be configured for client-side service discovery or licensing redirection by using the Windows registry: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSIPC\ServiceLocation or HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\MSIPC\ServiceLocation
For more information about these registry configurations, see Enabling client-side service discovery by using the Windows registry and Redirecting licensing server traffic.
I use AD RMS, how do I opt out?
To opt out of the upcoming change, complete these steps:
Using a work or school account that has global administrator permissions in your Office 365 organization, start a Windows PowerShell session and connect to Exchange Online. For instructions, see Connect to Exchange Online PowerShell.
Run the Set-IRMConfiguration cmdlet using the following syntax:
Set-IRMConfiguration -AutomaticServiceUpdateEnabled $false
What can I expect after this change has been made?
Once this is enabled, provided you haven't opted out, you can start using the new version of Office 365 Message Encryption which was announced at Microsoft Ignite 2017 and leverages the encryption and protection capabilities of Azure Information Protection.
For more information about the new enhancements, see Office 365 Message Encryption.
Feedback
Send feedback about: | https://docs.microsoft.com/en-us/office365/securitycompliance/azure-ip-protection-features?redirectSourcePath=%252fen-us%252farticle%252fprotection-features-in-azure-information-protection-rolling-out-to-existing-office-365-tenants-7ad6f58e-65d7-4c82-8e65-0b773666634d | 2019-05-19T15:11:54 | CC-MAIN-2019-22 | 1558232254889.43 | [array(['media/303453c8-e4a5-4875-b49f-e80c3eb7b91e.png',
'Screenshot that shows that rights management in Office 365 is activated.'],
dtype=object)
array(['media/599ca9e7-c05a-429e-ae8d-359f1291a3d8.png',
'Screenshot that shows an OME protected message in Outlook on the web.'],
dtype=object) ] | docs.microsoft.com |
Action disabled: source
Using classes
Renaming a class
- In the Class pane, click on the class name twice.
- Type a new name for the class and press ENTER.
Deleting a class
- In the Class Pane, right click on the class you want to delete.
- Select Delete class.
Note: Before deleting a class, you must delete all the objects from the class. You can delete only an empty class.
Note: What we mean in the actions below refers to hiding/viewing the objects in the class. The class name itself is always visible in the class tree. Also please note that if the helpers are sitting on a particular object belonging to a class that is about to be hidden, that object will NOT get hidden even if other objects of the class would get hidden
Hiding a class
- In the Class Pane, right click on the class that you want to hide.
- Select Hide class.
Viewing only a class
- In the Class pane, right click on the class you want to view.
- Select View only class . All other classes will be hidden.
To select a class and the subclass, select View only class and subclass.
Press F1 inside the application to read context-sensitive help directly in the application itself
← ∈
Last modified: le 2017/10/05 05:52 | http://docs.teamtad.com/doku.php/managing_classes?do=edit | 2019-05-19T15:32:07 | CC-MAIN-2019-22 | 1558232254889.43 | [] | docs.teamtad.com |
What is the maximum duration of audio I can transcribe?
If you are POSTing an audio file to /v1/transcript, there is no limit on the size or duration of audio you can send to the API.
Using /v1/stream you can transcribe up to 15 seconds of speech for an immediate transcript.
What sample rate should the audio be?
If you are POSTing an audio file to /v1/transcript, there is no required format. You can send any sample rate in any format (mp3, wav, flacc, etc) and we'll turn the audio into text.
Using /v1/stream you must send live audio in 8khz sample rate, mono (single channel), with 16bit little-endian formatting.
Can I perform signal processing on the audio before I send it to the API?
You shouldn't perform any signal processing on the audio you send to the API. Doing so will have a negative impact on the accuracy in most cases. While the processed audio may sound cleaner to you, our neural network will be confused by it.
How many phrases or words can I add to a Corpus?
At this time, you can send a maximum of 5000-7000 phrases/words, depending on the length of each phrase/word. If this is too restrictive for you, please send an email to [email protected].
If I set the "closed_domain" flag to True, will the API be able to recognize phrases/sentences not in the examples file?
Yes. The
closed_domain flag only restricts the vocabulary of the API. For example, if
your phrases include: "hello world", and "hi", the API will still be able to
recognize "hi world".
Troubleshooting
If something went wrong, the response status code will be in the
400 range and you should have an
"error": <message> in your response.
Error messages usually fall into three categories:
- Corpus creation errors
- Connection errors
- Audio conversion errors
What do I do if I get an error?
For any response with
"status": "error" please
refer to
"error": <message>.
Here are some common error messages:
How can I get help or learn more?
Send yourself an invite to join our Slack community.
Or send an email to: [email protected] | https://docs.assemblyai.com/faq/ | 2017-12-11T09:52:20 | CC-MAIN-2017-51 | 1512948513330.14 | [] | docs.assemblyai.com |
DiscoveredResource
Object representing the on-premises resource being migrated.
Contents
- ConfigurationId
The configurationId in ADS that uniquely identifies the on-premise resource.
Type: String
Length Constraints: Minimum length of 1.
Required: Yes
- Description
A description that can be free-form text to record additional detail about the discovered resource for clarity or later reference.
Type: String
Length Constraints: Minimum length of 0. Maximum length of 500.
Required: No
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following: | http://docs.aws.amazon.com/migrationhub/latest/ug/API_DiscoveredResource.html | 2017-12-11T09:32:58 | CC-MAIN-2017-51 | 1512948513330.14 | [] | docs.aws.amazon.com |
JMail::useSMTP
From Joomla! Documentation
Revision as of 19::useSMTP
Description
Use SMTP for sending the email.
Description:JMail::useSMTP [Edit Descripton]
public function useSMTP ( $auth=null $host=null $user=null $pass=null $secure=null $port=25 )
See also
JMail::useSMTP source code on BitBucket
Class JMail
Subpackage Mail
- Other versions of JMail::useSMTP
SeeAlso:JMail::useSMTP [Edit See Also]
User contributed notes
<CodeExamplesForm /> | https://docs.joomla.org/index.php?title=API17:JMail::useSMTP&direction=next&oldid=57320 | 2015-04-18T05:13:59 | CC-MAIN-2015-18 | 1429246633799.48 | [] | docs.joomla.org |
Difference between revisions of "JED Entries Trademark Checklist"
From Joomla! Documentation
Revision as of 14:05, 1 November 2012:. | https://docs.joomla.org/index.php?title=JED_Entries_Trademark_Checklist&diff=77314&oldid=29714 | 2015-04-18T05:59:27 | CC-MAIN-2015-18 | 1429246633799.48 | [] | docs.joomla.org |
Difference between revisions of "JDocumentRendererComponent"Component
Description
JDocumentRendererComponent is responsible for rendering the output from a component. It is called whenever a
<jdoc:include statement is encountered in the document template. [Edit Descripton]
Methods
- Defined in libraries/joomla/document/html/renderer/component.php
- Extends JDocumentRenderer
Importing
jimport( 'joomla.document.html.renderer.component' );
See also
JDocumentRendererComponent source code on BitBucket
Subpackage Document
- Other versions of JDocumentRendererComponent
SeeAlso:JDocumentRendererComponent [Edit See Also]
User contributed notes
<CodeExamplesForm /> | https://docs.joomla.org/index.php?title=API17:JDocumentRendererComponent&diff=95623&oldid=55020 | 2015-04-18T06:01:17 | CC-MAIN-2015-18 | 1429246633799.48 | [] | docs.joomla.org |
Difference between revisions of "JPaneSliders:: construct"::__construct
Description
Constructor.
Description:JPaneSliders:: construct [Edit Descripton]
SeeAlso:JPaneSliders:: construct [Edit See Also]
User contributed notes
<CodeExamplesForm /> | https://docs.joomla.org/index.php?title=JPaneSliders::_construct/11.1&diff=prev&oldid=57442 | 2015-04-18T05:47:44 | CC-MAIN-2015-18 | 1429246633799.48 | [] | docs.joomla.org |
Information for "What is the purpose of the index.php file?" Basic information Display titleWhat is the purpose of the index.php file? Default sort keyWhat is the purpose of the index.php file? Page length (in bytes)2,638 Page ID1580Confudler (Talk | contribs) Date of page creation09:58, 2 May 2008 Latest editorMATsxm (Talk | contribs) Date of latest edit19:24, 30 January 2015 Total number of edits6 Total number of distinct authors2 Recent number of edits (within past 30 days)0 Recent number of distinct authors0 Retrieved from ‘’ | https://docs.joomla.org/index.php?title=What_is_the_purpose_of_the_index.php_file%3F&action=info | 2015-04-18T05:00:09 | CC-MAIN-2015-18 | 1429246633799.48 | [] | docs.joomla.org |
Sauce Connect is a secure tunneling app which allows you to execute tests securely when testing behind firewalls via a secure connection between Sauce Labs’ client cloud and your environment.
When Should I Use Sauce Connect?
You should use Sauce Connect whenever you’re testing an app behind a firewall. Sauce Connect is not required to execute scripts on Sauce.
You can also use Sauce Connect:
- as an alternative to whitelisting
- as a means of filtering traffic in your records (e.g. for Google Analytics)
- as a means of monitoring network traffic
- as a way to stabilize network connections (detecting/re-sending dropped packets)
Basic Setup
- Get the latest Sauce Connect:
- Download Sauce Connect v4.3.8 for OS X
SHA1 checksum: 9d9ecb49fbea70186d08aac7de6032546d856379
- Download Sauce Connect v4.3.8 for Windows
SHA1 checksum: d3ad39466b221e7b779e0e69355a22b876a9b5c5
- Download Sauce Connect v4.3.8 for Linux
SHA1 checksum: 0ae5960a9b4b33e5a8e8cad9ec4b610b68eb3520
- Download Sauce Connect v4.3.8 for Linux 32-bit
SHA1 checksum: b9724f63b727f3c49e7367970b6ea5e4a7bb697d
- Open outbound port 443 (or configure Sauce Connect with a proxy that can reach saucelabs.com, using the
--proxyor
--paccommand line options).
- After extracting, go to the install directory and run:
bin/sc -u sauceUsername -k sauceAccessKey
When you see "connected", you are ready to go! We also recommend reading our take on security best practices.
About Sauce Connect
System Requirements
System requirements vary depending on the number of parallel tests you plan to run. Here are some samples based on simultaneous test volume:
For increased reliability and security, use a dedicated server. You may need to up your open file limit if your parallel test count is high (
ulimit -n 8192).
Note: Sauce Connect 4 will not work on Mac OS X versions less than 10.8
How is Sauce Connect Secured?
Though starting up a tunnel using Sauce Connect may take a few seconds, our tunneling method allows for the highest possible security. We spin up a secure tunnel sandbox environment for each tunnel connection in order to provide greater tunnel security and isolation from other customers.
Data transmitted by Sauce Connect is encrypted through industry-standard TLS, using the AES-256 cipher. Sauce Connect also uses a caching web proxy to minimize data transfer (see the command-line option
-B, --no-ssl-bump-domains to disable this).
Within your infrastructure, Sauce Connect needs access to the application under test, but can be firewalled from the rest of your internal network. We recommend running Sauce Connect in a firewall DMZ, on a dedicated machine, and setting up firewall rules to restrict access from that DMZ to your internal network.
To read more about security on Sauce, read our security white paper.
Setup Process
During startup, Sauce Connect issues a series of HTTPS requests to the Sauce Labs REST API. These are outbound connections to saucelabs.com on port 443. Using the REST API, Sauce Connect checks for updates and other running Sauce Connect sessions, and ultimately launches a remote tunnel endpoint VM. Once the VM is started, a tunnel connection is established to a makiXXXXX.miso.saucelabs.com address on port 443, and all traffic between Sauce Labs and Sauce Connect is then multiplexed over this single encrypted TLS connection.
- Sauce Connect makes HTTPS REST API calls to saucelabs.com:443 using the username and access key provided when starting Sauce Connect.
- Sauce Labs creates a dedicated virtual machine which will serve as the endpoint of the tunnel connection created by Sauce Connect.
- Sauce Labs responds with the unique ID of the virtual machine created in step 2.
- Sauce Connect establishes a TLS connection directly to the dedicated virtual machine created in step 2. (makiXXXXX.miso.saucelabs.com).
- All test traffic is multiplexed over the tunnel connection established in step 4.
Teardown Process
Once Sauce Connect is terminated (typically via ctrl-c), a call will be made from Sauce Connect to the REST API with instructions to terminate the tunnel VM. Sauce Connect will continue to poll the REST API until the tunnel VM has been halted and deleted.
Advanced Configuration
Command-line Options
The
sc command line program accepts the following parameters:
Usage: ./sc -u, --user <username> The environment variable SAUCE_USERNAME can also be used. -k, --api-key <api-key> The environment variable SAUCE_ACCESS_KEY can also be used. -B, --no-ssl-bump-domains Comma-separated list of domains. Requests whose host matches one of these will not be SSL re-encrypted. -D, --direct-domains <...> Comma-separated list of domains. Requests whose host matches one of these will be relayed directly through the internet, instead of through the tunnel. -t, --tunnel-domains <...> Inverse of '--direct-domains'. Only requests for domains in this list will be sent through the tunnel. Overrides '--direct-domains'. -v, --verbose Enable verbose debugging. -F, --fast-fail-regexps Comma-separated list of regular expressions. Requests with URLs matching one of these will get dropped instantly and will not go through the tunnel. -i, --tunnel-identifier <id> Assign <id> to this Sauce Connect instance. Future jobs will use this tunnel only when explicitly specified by the 'tunnel- identifier' desired capability in a Selenium client. -l, --logfile <file> -P, --se-port <port> Port on which Sauce Connect's Selenium relay will listen for requests. Selenium commands reaching Sauce Connect on this port will be relayed to Sauce Labs securely and reliably through Sauce Connect's tunnel. Defaults to 4445. -p, --proxy <host:port> Proxy host and port that Sauce Connect should use to connect to the Sauce Labs cloud. -w, --proxy-userpwd <user:pwd> Username and password required to access the proxy configured with -p. --pac <url> Proxy autoconfiguration. Can be a http(s) or local file:// URL. -T, --proxy-tunnel Use the proxy configured with -p for the tunnel connection. -s, --shared-tunnel Let sub-accounts of the tunnel owner use the tunnel if requested. -x, --rest-url <arg> Advanced feature: Connect to Sauce REST API at alternative URL. Use only if directed to do so by Sauce Labs support. -f, --readyfile File that will be touched to signal when tunnel is ready. -a, --auth <host:port:user:pwd> Perform basic authentication when a URL on <host:port> asks for a username and password. This option can be used multiple times. -z, --log-stats <seconds> Log statistics about HTTP traffic every <seconds>. Information includes bytes transmitted, requests made, and responses received. --max-logsize <bytes> Rotate logfile after reaching <bytes> size. Disabled by default. --doctor Perform checks to detect possible misconfiguration or problems. --no-autodetect Disable the autodetection of proxy settings. -h, --help Display this help text.
Proxy Configuration
Automatic
As of Sauce Connect 4.3.1, proxies and PAC settings will be autoconfigured based on the running system's settings.
On Windows, Internet Explorer proxy settings will be checked as well as system-wide proxy settings set via Control Panel.
On Mac OS X, Sauce Connect will use the proxy set in Preferences / Network. We support both the proxy and the PAC settings.
On Linux, Sauce Connect looks for the following variables, in order:
http_proxy,
HTTP_PROXY,
all_proxy, and
ALL_PROXY. They can be in the form or just
host.name:port.
Proxy detection can be disabled via the command line option
--no-autodetect.
Manual
If auto-configuration fails, or the settings need to be overridden, there are a few command-line options that can be used to configure proxies manually:
-p, --proxy <host:port>,
-w, --proxy-userpwd <user:pwd>,
-T, --proxy-tunnel, and
--pac <url>.
Managing Multiple Tunnels
In its default mode of execution, one Sauce Connect instance will suffice all your needs and will require no efforts to make cloud browsers driven by your tests navigate through the tunnel.
After starting Sauce Connect, all traffic from jobs under the same account will use the tunnel automatically and transparently. After the tunnel is stopped, jobs will simply attempt to find your servers through the open internet.
Using Tunnel Identifiers
If you still believe you need multiple tunnels, you will need tunnel identifiers.
Using identified tunnels, you can start multiple instances of Sauce Connect that will not collide with each other and will not make your tests' traffic automatically tunnel through. This allows you to test different localhost servers or access different networks from different tests (a common requirement when running tests on TravisCI.)
To use this feature, simply start Sauce Connect using the
--tunnel-identifier flag (or
-i) and provide your own unique identifier string. Once the tunnel is up and running, any tests that you want going through this tunnel will need to provide the correct identifier using the
tunnelIdentifier desired capability.
On the Same Machine
Please note that in order to run multiple Sauce Connect instances on the same machine, it's necessary to provide additional flags to configure a different Selenium port for each instance. Sauce Connect will use a pidfile and logfile that have the tunnel identifier in their names. Here's an example of how to start a second Sauce Connect instance with a tunnel identifier:
sc --se-port 4446 -i my-tun2
Service Management
Sauce Connect can be monitored more easily using a Service Managment tool like systemd or upstart. These tools help to make the usage of Sauce Connect more fluid and allow for time to wait for Sauce Connect to clean up upon exiting. It's common to want to signal kill the Sauce Connect process and start one instantly after that. This will cause issues as it takes time to shutdown Sauce Connect remotely. These tools help account for that so you don't have to.
Systemd
- cd /usr/local/bin
- wget
- tar -zxvf sc-4.3-linux.tar.gz
- cp sc-4.3-linux/bin/sc .
ls /usr/local/bin/sc —- verify Sauce Connect is in correct location
cd /etc/systemd/system
- create a file 'sc.server' and copy/paste the contents below
- modify username and access key in sc.server to match your own
- sudo systemctl daemon-reload
- sudo systemctl start sc.service
- sudo systemstl status sc.service
- sudo systemstl stop sc.service
[Unit] Description=Sauce Connect After=network.target [Service] Type=simple User=nobody Group=nogroup ExecStart=/usr/local/bin/sc -u <CHANGEME> -k <CHANGEME> -l /tmp/sc_long.log --pidfile /tmp/sc_long.pid --se-port 0 [Install] WantedBy=multi-user.target
Upstart
- cd /usr/local/bin
- wget
- tar -zxvf sc-4.3.8-linux.tar.gz
- cp sc-4.3.8-linux/bin/sc .
ls /usr/local/bin/sc —- verify Sauce Connect is in correct location
cd /etc/init
- create a file 'sc.conf' and copy/paste the contents below
- modify username and access key in sc.conf to match your own
- sudo initctl reload-configuration
- sudo start sc
- sudo status sc
- sudo stop sc
# #This Upstart config expects that Sauce Connect is installed at #/usr/local/bin/sc. Edit that path if it is installed somewhere else. # #Copy this file to /etc/init/sc.conf, and do: # # $ sudo initctl reload-configuration # #Then you can manage SC via the usual upstart tools, e.g.: # #$ sudo start sc #$ sudo restart sc #$ sudo stop sc #$ sudo status sc # start on filesystem and started networking stop on runlevel 06 respawn respawn limit 15 5 #Wait for tunnel shutdown when stopping Sauce Connect. kill timeout 120 #Bump maximum number of open files/sockets. limit nofile 8192 8192 #Make Sauce Connect output go to /var/log/upstart/sc.log. console log env LOGFILE="/tmp/sc_long.log" env PIDFILE="/tmp/sc_long.pid" env EXTRA_ARGS="--se-port 0" env SAUCE_USERNAME="CHANGEME" # XXX env SAUCE_ACCESS_KEY="CHANGEME" # XXX post-start script # Save the pidfile, since Sauce Connect might remove it before the # post-stop script gets a chance to run. n=0 max_tries=30 while [ $n -le $max_tries ]; do if [ -f $PIDFILE ]; then cp $PIDFILE ${PIDFILE}.saved break fi n=$((n+1)) [ $n -ge $max_tries ] && exit 1 sleep 1 done end script post-stop script # Wait for Sauce Connect to shut down its tunnel. n=0 max_tries=30 pid="$(cat ${PIDFILE}.saved)" while [ $n -le $max_tries ]; do kill -0 $pid || break n=$((n+1)) [ $n -ge $max_tries ] && exit 1 sleep 1 done end script setuid nobody setgid nogroup chdir /tmp exec /usr/local/bin/sc -l $LOGFILE --pidfile $PIDFILE $EXTRA_ARGS
FAQs
Can I reuse a tunnel between multiple accounts?
Tunnels started by an account can be reused by its sub-accounts. To reuse a tunnel, start Sauce Connect with the special --shared-tunnel parameter from the main account in your account tree. For example:
sc -u USERNAME -k ACCESS_KEY --shared-tunnel
Once the tunnel is running, provide the special "parentTunnel" desired capability on a per-job basis. The value of this capability should be the username of the parent account that owns the shared Sauce Connect tunnel as a string. Here's an example (this test should can run using Auth credentials for any sub-account of "parentAccount"):
capabilities['parentTunnel'] = "parentAccount"
That's it! We'll take care of the rest by making the jobs that request this capability route all their traffic through the tunnel created using your parent account (parentAccount, following our example).
What firewall rules do I need?
Sauce Connect needs to make outbound connections to saucelabs.com and *.miso.saucelabs.com on port 443 for the REST API and the primary tunnel connection to the Sauce cloud. It can also optionally make these connections through a web proxy; see the
--proxy,
--pac, and
--proxy-tunnel command line options.
I have verbose logging on, but I'm not seeing anything in stdout. What gives?
Output from the
-v flag is sent to the Sauce Connect log file rather than stdout.
How can I periodically restart Sauce Connect?
Sauce Connect handles a lot of traffic for heavy testers. Here is one way to keep it 'fresh' to avoid leakages and freezes. First write a loop that will restart Sauce Connect every time it gets killed or crashes:
while :; do killall sc; sleep 30; sc -u $SAUCE_USERNAME -k $SAUCE_ACCESS_KEY; done
Then, write a cron task that will kill Sauce Connect on a regular basis:
crontab -e 00 00 * * * killall sc
This will kill Sauce Connect every day at 12am, but can be modified to behave differently depending on your requirements.
How can I use Sauce Connect to test graceful degredation?
The
--F, --fast-fail-regexps command line option can be used to drop requests that fit a description altogether. You could use it to simulate non-loading of scripts, styles, or other resources.
Can I access applications on localhost?
When using Sauce Connect, local web apps running on commonly-used ports are available to test at localhost URLs, just as if the Sauce Labs cloud were your local machine. Easy!
However, because an additional proxy is required for localhost URLs, tests may perform better when using a locally-defined domain name (which can be set in your hosts file)) rather than localhost. Using a locally-defined domain name also allows access to applications on any port.
Please Note: On Android devices ports 5555 and 8080 cannot be used with Sauce Connect.
How can I improve performance?
There are a few Sauce Connect specific command line options that can help. These include
-D, --direct-domains,
-t, --tunnel-domains, and
-F, --fast-fail-regexps. These allow for careful curating of which traffic will go through the tunnel and which will go directly to the internet.
Only route needed requests through Sauce Connect
Whitelist Domains
A common use case for this is for users who only need their requests to their internal environment to go through Sauce Connect, with external resources being pulled as usual.
To do this, we could add the following flag:
-t internal-site.com
Blacklist Domains
Let's say instead that we need most things to go through the tunnel, but certain external assets to be retrieved directly (for instance, with a CDN).
For this, we could add the following flag:
-D cdn.external-site.com
Drop Analytics and Ad-based Requests
Some external assets we might not need to access at all, for the sake of speed or just not interfering with user metrics, such as analytics:
-F analytics-provider.com
Why is there a
-p, --proxy and a
-T, --proxy-tunnel option?
Fundamentally, Sauce Connect makes two separate outbound connections for two separate purposes. The first, that
-p, --proxy <host:port> uses, is a lightweight connection to our REST API that simply tells our servers basic information about when Sauce Connect's status (e.g. starting up, ready, stopping).
The second connection is to the actual tunnel VM created for your Sauce Connect instance. Enabling the
-T, --proxy-tunnel flag will cause same proxy specified with
-p, --proxy to be used for this connection as well. We recommend avoiding using a proxy for this connection, since it is already TLS secured and a good deal of data tends to go over this connection. Adding another step in the middle can hinder test performance.
So ideally you only need
-p, --proxy <host:port> (and perhaps
-w, --proxy-userpwd <user:pwd> for credentials), but
-T, --proxy-tunnel is available if your network doesn't allow outgoing connections on port
443. If your tests are slow, you may want to ask your network administrator about making an exception for this connection.
If we have 5 users, can we use 5 instances of Sauce Connect, or do we have to set up one shared instance?
Feel free to use either, even if you only have one Sauce account! If you do decide to use 5 separate instances, you will need to create unique identifiers for each. You can also create sub-accounts that each have their own individual access keys.
Troubleshooting Sauce Connect
Logging
By default, Sauce Connect generates log messages to your local operating system's temporary folder. On Linux / Mac OS X this is usually
/tmp; on Windows, it varies by individual release. You can also specify a specific location for the output using the
-l command line option.
You can enable verbose logging with the
-v flag. Verbose output will be sent to the Sauce Connect log file rather than standard out.
Connectivity Considerations
- Is there a firewall in place between the machine running Sauce Connect and Sauce Labs (*.saucelabs.com:443)? You may need to allow access in your firewall rules, or configure Sauce Connect to use a proxy. Sauce Connect needs to establish outbound connections to saucelabs.com (162.222.73.28) on port 443, and to one of many hosts makiXXXXX.miso.saucelabs.com IPs (162.222.72.0/21), also on port 443. It can make these connections directly, or can be configured to use an HTTP proxy with the
--proxy,
--pacand
--proxy-tunnelcommand line options.
- Is a proxy server required to connect to route traffic from saucelabs.com to an internal site? If so you may need to configure Sauce Connect with the
--proxyor
--paccommand line options.
Checking Network Connectivity to Sauce Labs
Make sure that saucelabs.com is accessible from the machine running Sauce Connect. This can be tested issuing a ping, telnet or cURL command to saucelabs.com from the machine's command line interface. If any of these commands fail please work with your internal network team to resolve them.
ping saucelabs.com
This command should return an IP address of 162.222.73.28
telnet saucelabs.com 443
This command should return a status message of "connected to saucelabs.com"
curl -v
For More Help
If you need additional help, get in touch at [email protected]. To provide our support team with additional information, please add the
-vv and
-l sc.log options to your Sauce Connect command line, reproduce the problem, and attach the resulting log file (called
sc.log) to your support request.
For more advance troubleshooting steps please refer to | https://docs.saucelabs.com/reference/sauce-connect/ | 2015-04-18T04:52:53 | CC-MAIN-2015-18 | 1429246633799.48 | [array(['/images/reference/sauce-connect/sc.37397d91.png',
'How is Sauce Secured'], dtype=object)
array(['/images/reference/sauce-connect/sc-setup-process.109eec43.png',
'SetupProcess'], dtype=object) ] | docs.saucelabs.com |
Difference between revisions of "How to turn off magic quotes gpc for Joomla 3"
From Joomla! Documentation
Latest revision as of 10:56, 27 May 2014
Contents
For MAMP
Steps:
- :
Another solution (for the hosts where PHP is running as FCGI module)
Works for PHP 5.3 and higher
- create a
.user.inifile at your Joomla! root.
- Add this content to the file and save
magic_quotes_gpc = Off | https://docs.joomla.org/index.php?title=How_to_turn_off_magic_quotes_gpc_for_Joomla_3&diff=cur&oldid=104558 | 2015-04-18T06:02:32 | CC-MAIN-2015-18 | 1429246633799.48 | [] | docs.joomla.org |
Revision history of "JCacheController/11.1"
View logs for this page
There is no edit history for this page.
This page has been deleted. The deletion and move log for the page are provided below for reference.
- 14:59, 10 May 2013 JoomlaWikiBot (Talk | contribs) moved page JCacheController/11.1 to API17:JCacheController without leaving a redirect (Robot: Moved page) | https://docs.joomla.org/index.php?title=JCacheController/11.1&action=history | 2015-04-18T05:19:59 | CC-MAIN-2015-18 | 1429246633799.48 | [] | docs.joomla.org |
[How Do I:] Work with Nested Master Pages to Create Standard Content Layouts
by Chris Pels
In this video Chris Pels will show how to use nested master pages to create individual master pages that represent different standard content layouts for a web site. First, see how several major commercial web sites use a standard set of content layouts. Next, see how to nest a master page within another master page, and use the design time support in Visual Studio 2008. Then, learn the considerations for establishing a "page architecture" which represents the major types of content layout used in a sample web site. Once that definition is complete see how to structure the nested master pages so developers can then select a master page, resulting in a standardized and consistent display of content for a web site.
▶ Watch video (30 minutes) | https://docs.microsoft.com/en-us/aspnet/web-forms/videos/how-do-i/how-do-i-work-with-nested-master-pages-to-create-standard-content-layouts | 2018-11-13T05:53:56 | CC-MAIN-2018-47 | 1542039741219.9 | [] | docs.microsoft.com |
This document lists frequently-asked questions developers about Blockstack application development. If you are new to Blockstack, you should read the general questions first.
For more technical FAQs about Blockstack Core nodes, the Stacks blockchain, and other architectural elements, see the entire set of technical FAQs.
If you have a technical question that gets frequently asked on the forum or Slack, feel free to send a pull-request with the question and answer.
- Who should build with the Blockstack Platform?
- I’m a web developer. Can I build on the Blockstack Platform?
- I’m a non-web developer. Can I build on Blockstack Platform?
- How do I get started using Blockstack to build decentralized applications?
- What’s the difference between a web app and a Blockstack app?
- Do I need to learn any new languages or frameworks?
- What is the general architecture of the Blockstack Platform?
- What is a
serverlessapp?
- How does my web app interact with Blockstack?
- What does blockstack.js do?
- How do I use blockstack.js?
- How do I register Blockstack IDs?
- How can I look up names and profiles?
- What kind of scalability and performance can I expect from applications built with Blockstack?
- Is there a limit to the file sizes I can store in a Gaia Storage System
- Can I run a Gaia Storage System commercially?
- Is the platform private or open sourced?
- What programming language can I use to build these apps?
- How is Blockstack different from Ethereum for building decentralized apps?
- Can Blockstack applications interact with Bitcoin? Ethereum? Other blockchains?
- How old is the Blockstack project?
- What is the current development roadmap look like?
- Where are the current core developers based? What are the requirements for being a core developer?
- I heard some companies working on Blockstack have raised venture capital, how does that impact the project?
Who should build with the Blockstack Platform?
Everyone! However, more seriously, if you are building an application in JavaScript that requires sign-in and storage, you should look at using Blockstack.
I’m a web developer. Can I build on the Blockstack Platform?
Yes! Blockstack is geared primarily towards web developers. All of your existing knowledge is immediately applicable to Blockstack. Anything you can do in a web browser, you can do in a Blockstack app.
I’m a non-web developer. Can I build on Blockstack Platform?.
How do I get started using Blockstack to build decentralized applications?!
What’s the difference between a web app and a Blockstack app?.
Do I need to learn any new languages or frameworks?
No. Blockstack applications are built using existing web frameworks and programming. The only new thing you need to learn is either blockstack.js or the Blockstack RESTful API.
What is the general architecture of the Blockstack Platform?.
What is a
serverless app?.
How does my web app interact with Blockstack?
The blockstack.js library gives any web application the ability to interact with Blockstack’s authentication and storage services. In addition, we supply a public RESTful API.
What does blockstack.js do?
This is the reference client implementation for Blockstack. You use it in your web app to do the following:
- Authenticate users
- Load and store user data
- Reuse users’ public data in your application
There are also mobile libraries for iOS and Android.
How do I use blockstack.js?
Our documentation has several examples you can use to get started.
How do I register Blockstack IDs?
You should use the Blockstack Browser.
How can I look up names and profiles?
You can use blockstack.js, or you can use the public Blockstack Core endpoint.
What kind of scalability and performance can I expect from applications built with Blockstack?
Blockstack uses the blockchain only for name registration. Data storage is kept off-chain in the Gaia Storage System. This basic application architecture means any application can perform and scale as they do without a blockchain.
Is there a limit to the file sizes I can store in a Gaia Storage System
The file size limit is 25 MB per file.
Can I run a Gaia Storage System commercially?
Yes, you can. Anyone interested in running a Gaia Storage System can run one and make it available to users.
Is the platform private or open sourced?
The project is open-source, and anyone can contribute! The major contributors are mostly employees of Blockstack PBC. You can see the full list of contributors here:
What programming language can I use to build these apps?.
How is Blockstack different from Ethereum for building decentralized apps?.
Can Blockstack applications interact with Bitcoin? Ethereum? Other blockchains?
Yes! Since Blockstack applications are built like web applications, all you need to do is include the relevant Javascript library into your application.?.
I heard some companies working on Blockstack have raised venture capital, how does that impact the project?. | https://docs.blockstack.org/core/faq_developer.html | 2019-11-12T03:19:55 | CC-MAIN-2019-47 | 1573496664567.4 | [] | docs.blockstack.org |
Surfaces / Slabs¶
Surfaces and Slabs are widely used crystallographic concepts. A general theoretical background is offered in the expandable section below for the readers unfamiliar with the topic. The rest of this documentation page explains how the surface/slab generation is implemented in the Materials Designer interface.
Theoretical background¶
Expand to view ...
Miller indices¶
Crystal surfaces are defined in terms of their Miller indices 1. Miller indices are a set of three integer numbers, conventionally expressed in the form (hkl), which constitutes a convenient shorthand notation to refer to an entire family of lattice planes in a crystal. In particular, a generic set of Miller indices (hkl) denotes the family of planes orthogonal to h\mathbf {b_{1}} +k\mathbf {b_{2}} +\ell \mathbf {b_{3}}, where \mathbf {b_{i}} are the basis of the reciprocal lattice vectors. However, it is important to caution that the so-defined plane is not always orthogonal to the linear combination of direct lattice vectors h\mathbf {a_{1}} +k\mathbf {a_{2}} +\ell \mathbf {a_{3}}, since the reciprocal lattice vectors need not be mutually orthogonal. It is therefore convenient to divide the Miller indices by their minimum common denominator.
Examples¶
Some examples of planes with different Miller index labels in cubic crystals are depicted below for reference and illustration purposes. Since the reciprocal lattice vectors are indeed mutually orthogonal in this particular cubic case, the (hkl) planes can effectively be taken to be always perpendicular to the (hkl) direction in the crystal relative to the conventional Cartesian coordinate system.
Surface vs Slab¶
Surface represents an isolated terminal edge of an infinite crystal, whereas a Slab has a finite size and two edges. In practice, when dealing with periodic boundary conditions, a surface is modeled by a slab that is long enough for the electronic states on the edges to be completely independent of each other.
Accessing the Surface / Slab generator¶
By clicking on the
Surface / slab option in the
Advanced menu, the surface/slab generator can be accessed. The user should expect to encounter the following dialog:
Constructing surfaces and slabs¶
Miller indices (hkl)¶
Start by entering the Miller indices (hkl) of the corresponding crystal plane. This information will section the selected crystal structure across the desired plane.
Thickness in number of layers¶
The desired "thickness in layers" greater than unity can be entered in this field.
Vacuum ratio¶
The vacuum ratio defines the portion of the vertical size (along the surface normal direction, "z") of the unit cell in a slab which is occupied by vacuum, as opposed to the maximum interatomic distance in this direction. The ration is thus measured as a fraction (from zero to one) of the total height of the corresponding unit cell.
Supercell dimensions in x and y¶
The vertical dimension of a slab or surface structure is normally taken to lie along the z direction. This leaves the basal plane of such structures to lie on the x-y plane of the Cartesian coordinate system. The number of times the crystal unit cell is repeated in both of these x and y basal dimensions is defined in the "Supercell dimension" fields.
NOTE: only integer values are supported for the supercell dimensions at the moment
Generating the surface / slab¶
Click "Submit" at the bottom of the generator dialog once all the above information has been entered, and the corresponding surface or slab structure will appear in the 3D graphical viewer of the Materials Designer interface.
Animation¶
In the animation below, we generate a crystalline slab of pure silicon along the (001) plane, composed of three layers of atoms in terms of thickness, and with a 10x10 supercell constituting its base. The final view of the crystal structure is finally adjusted with the help of the interactive features of the 3D viewer, described in this page, to better inspect the overall appearance of the slab with regards to its size, thickness and vacuum ratio relative to the enclosing unit cell.
Structural Metadata¶
Once a slab has been generated, and saved following the instructions outlined in this page, the user can retrieve the information about the settings that were used to generate the slab in the form of "metadata". Open the corresponding entry in the materials collection, and look for lines towards the bottom of the page starting with:
"metadata": ...
By expanding this section, the user will be able to retrieve all the relevant metadata associated with the original generation of the surface / slab.
NOTE: metadata is present for slabs / surfaces, and is used for performing the surface energy calculations. | https://docs.exabyte.io/materials-designer/header-menu/advanced/surface-slab/ | 2019-11-12T03:09:07 | CC-MAIN-2019-47 | 1573496664567.4 | [array(['../../../../images/materials-designer/Miller_Indices_Felix_Kling.png',
'Miller indices Miller indices'], dtype=object)
array(['../../../../images/materials-designer/surface-slab-generator.png',
'Accessing the Surface / Slab generator Accessing the Surface / Slab generator'],
dtype=object) ] | docs.exabyte.io |
Vitally.js
Vitally.js is a small Javascript library that allows you to directly send us data about your users and their product usage.
How to Install
Copy the code below and paste it into your product, right before the closing
</body> tag. You'll need to add your own unique token to the below
Vitally.init call. You can grab your token in the Integrations section of your Account Settings.
"];u<c.length;u++){o(c[u])}}(window,"Vitally"); Vitally.init('YOUR_TOKEN_HERE'); </script>
Once you install the script in your product, a
Vitally API will be attached to the global
window object.
Identifying the logged-in account & user
After you've installed the script, then once the user is logged in, you should identify both the user AND the account the user belongs to.
Vitally.account
Vitally.account allows you to identify the business account (i.e. organization) using your product. It is recommended to call this API first, before
Vitally.user.
Vitally.account({ accountId: 'account-id', traits: { name: "Bob's Burgers", // Add any other traits here you'd like to attach to your accounts in Vitally } });
Vitally.user
Vitally.user allows you to identify the user at the account that is currently using your product.
Vitally.user({ userId: 'user-id', accountId: 'account-id', traits: { name: 'Bob Belcher', email: '[email protected]', // Add any other traits here you'd like to attach to your users in Vitally } });
Sending 'account-id' is optional if you've already called
Vitally.account, but if possible, it is always best to be explicit.
// Only do this id you've called Vitally.account first Vitally.user({ userId: 'user-id', traits: { name: 'Bob Belcher', email: '[email protected]', // Add any other traits here you'd like to attach to your users in Vitally } });
Vitally.track
Vitally.track allows you to track the interactions the user has with your product.
Vitally.track({ event: 'help-option-clicked', userId: 'user-id', properties: { option: 'chat', // Add any other properties here you'd like to attach to the event in Vitall } }); | https://docs.vitally.io/en/articles/50-vitallyjs | 2019-11-12T03:18:55 | CC-MAIN-2019-47 | 1573496664567.4 | [] | docs.vitally.io |
]: In [1]: from dit.pid.distributions import bivariates, trivariates
We can see from inspection that either input (the first two indexes) is independent of the output (the final index), yet the two inputs together determine the output. One could call this “synergistic” information. Next, consider the giant bit distribution:
In [2]: In [4]: gb = bivariates['redundant'] In [3]: In [5]: print(gb) Class: Distribution Alphabet: ('0', '1') for all rvs Base: linear Outcome Class: str Outcome Length: 3 RV Names: None x p(x) 000 1/2 111 1/2 In [4]: Class: Distribution In [5]: Alphabet: ('0', '1') for all rvs ...: Base: linear ...: Outcome Class: str ...: Outcome Length: 3 ...: RV Names: None ...: File "<ipython-input-5-35057d4d19b8>", line 1 Alphabet: ('0', '1') for all rvs ^ SyntaxError: invalid syntax
Here, we see that either input informs us of exactly what the output is. One could call this “redundant” information. Furthermore, consider the coinformation of these distributions:
In [6]: In [6]: from dit.multivariate import coinformation as I [7]: In [9]: I(dit.example_dists.giant_bit(4, 2)) Out[7]: 1.0 In [8]: Out[9]: 1.0 In [9]: In [10]: I(dit.example_dists.n_mod_m(4, 2)) Out[9]: 1.0 In [10]:]: In [12]: d = dit.Distribution(['000', '011', '102', '113'], [1/4]*4) In [13]: In [13]: PID_WB(d) Out[13]: +--------+--------+--------+ | I_min | I_r | pi | +--------+--------+--------+ | {0:1} | 2.0000 | 1.0000 | | {0} | 1.0000 | 0.0000 | | {1} | 1.0000 | 0.0000 | | {0}{1} | 1.0000 | 1.0000 | +--------+--------+--------+ In [14]: ╔════════╤════════╤════════╗ ....: ║ I_min │ I_r │ pi ║ ....: ╟────────┼────────┼────────╢ ....: ║ {0:1} │ 2.0000 │ 1.0000 ║ ....: ║ {0} │ 1.0000 │ 0.0000 ║ ....: ║ {1} │ 1.0000 │ 0.0000 ║ ....: ║ {0}{1} │ 1.0000 │ 1.0000 ║ ....: ╚════════╧════════╧════════╝ ....: | http://docs.dit.io/en/latest/measures/pid.html | 2019-11-12T03:24:56 | CC-MAIN-2019-47 | 1573496664567.4 | [] | docs.dit.io |
Using Volume Shadow Copy (VSS) with DataKeeper/SIOS Protection Suite Volumes
On Windows 2008 R2, VSS Shadow Copy can be enabled for SIOS Protection Suite-protected (shared or replicated) volumes. However, the following guidelines apply:
- VSS snapshot images must not be stored on a SIOS Protection Suite-protected volume. Storing VSS snapshots on a SIOS Protection Suite-protected volume will prevent SIOS Protection Suite from being able to lock the volume and switch it over to another node.
- When a SIOS Protection Suite-protected volume is switched or failed over, any previous snapshots that were taken of the SIOS Protection Suite protected volume are discarded and cannot be reused.
- VSS snapshot scheduling is not copied between the SIOS Protection Suite servers. If snapshots are scheduled to be taken twice a day on the primary server and a switchover occurs, this schedule will not be present on the backup server and will need to be redefined on the backup server.
- There is a slight difference in behavior when switching back to a server where snapshots were previously enabled:
- If the volume is a shared volume, VSS snapshots must be re-enabled.
- If the volume is a replicated volume, VSS snapshots are automatically re-enabled.
Feedback
Thanks for your feedback.
Post your comment on this topic. | http://docs.us.sios.com/dkse/8.6.2/en/topic/volume-shadow-copy-vss?q=adding+a+resource+dependency | 2019-11-12T04:02:31 | CC-MAIN-2019-47 | 1573496664567.4 | [] | docs.us.sios.com |
Once a user authenticates and a DApp obtains authentication, the application interacts with Gaia through the blockstack.js library. There are two simple methods for working with data in Gaia hub: the
putFile() and
getFile() methods. This section goes into greater detail about the methods, how they interact with a hub, and how to use them.
Write-to and Read-from URL Guarantees
Gaia is built on a driver model that supports many storage services. So, with
very few lines of code, you can interact with providers on Amazon S3, Dropbox,
and so forth. The simple
getFile() and
putFile() interfaces are kept simple
because Blockstack assumes and wants to encourage a community of
open-source-data-management libraries.
The performance and simplicity-oriented guarantee of the Gaia specification is
that when an application submits a write-to URL, the application is guaranteed to
be able to read from the URL. Note that, while the
prefix in the write-to url (for example,
myhub.service.org/store) and the read-from URL
are different, the
foo/bar suffixes are the same.
By default, putFile() encrypts information and getFile() decrypts it. Data stored in an encrypted format means only the user that stored it can view it. For applications that want other users to view data, the application should set the encrypt option to false. Correspondingly, the decrypt option on getFile() should also be false.
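As a quick illustration, here is a minimal sketch using blockstack.js; the file name, contents, and the top-level import style are assumptions and may differ across blockstack.js versions:

import { putFile, getFile } from 'blockstack';

// Write publicly readable data by disabling encryption.
const data = JSON.stringify([{ text: 'hello world' }]);
putFile('statuses.json', data, { encrypt: false })
  // Read it back; the decrypt option must match the encrypt setting used on write.
  .then(() => getFile('statuses.json', { decrypt: false }))
  .then((content) => {
    console.log(JSON.parse(content));
  });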
Consistent, identical suffixes allow an application to know exactly where a
written file can be read from, given the read prefix. The Gaia service defines a
hub_info endpoint to obtain that read prefix:
GET /hub_info/
The endpoint returns a JSON object with a
read_url_prefix, for example, if my service returns:
{ ..., "read_url_prefix": "" }
The data can be read with getFile() at this address:
The application is guaranteed that the profile is written with putFile() to this request address:
When you use the putFile() method, it takes the user data and POSTs it to the user's Gaia storage hub. The data is POSTed directly to the hub; the blockchain is not used and no data is stored there. The limit on file upload is currently 25 MB.
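To make the guarantee concrete, here is a rough sketch of how a client could resolve the public read URL for a file it previously wrote. The hub URL, address, and file name below are placeholders, not values from the specification:

// Resolve the hub's read_url_prefix, then build the read URL for a stored file.
const hubUrl = 'https://myhub.service.org';            // placeholder write hub
const address = '1DHvWDj834zPAkwMhpXdYbCYh4PomwQfzz';  // placeholder app address

fetch(hubUrl + '/hub_info/')
  .then((resp) => resp.json())
  .then((info) => fetch(info.read_url_prefix + address + '/statuses.json'))
  .then((resp) => resp.text())
  .then((content) => console.log(content));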
Address-based access-control
Access control in a Gaia storage hub is performed on a per-address basis.
Writes to URLs
/store/<address>/<file> are allowed only if the writer can
demonstrate that they control that address. This is achieved via the
authentication token which is a message signed by the private key associated
with that address. The message itself is a challenge text, returned via the
/hub_info/ endpoint. | https://docs.blockstack.org/storage/write-to-read.html | 2019-11-12T04:25:02 | CC-MAIN-2019-47 | 1573496664567.4 | [] | docs.blockstack.org |
Error message "Cannot open the listening port."
This error occurs when the UDP port (default 5060) cannot be opened by Brekeke SIP server.
Other SIP clients, such as SIP softphones, may be running on the UDP port required by Brekeke SIP Server. To resolve this error, terminate all SIP clients and restart Brekeke SIP Server. | https://docs.brekeke.com/sip/error-message-cannot-open-the-listening-port | 2019-11-12T03:43:19 | CC-MAIN-2019-47 | 1573496664567.4 | [] | docs.brekeke.com |
Returns true if the connection is successful.
The connections determine the topology of the PlayableGraph and how it is evaluated.
Playables can be connected together to form a tree structure. Each Playable has a set of inputs and a set of outputs. These can be viewed as “slots” where other Playables can be attached to.
When a Playable is first created, its input count is reset to 0, meaning that it has no children Playables attached. Outputs behave a little differently—every Playable has a default output created when first created.
You connect Playables together using the method PlayableGraph.Connect, and you can disconnect them from each other using PlayableGraph.Disconnect.
There is no limit set on the amount of inputs a Playable can have.
using UnityEngine;
using UnityEngine.Animations;
using UnityEngine.Playables;

public class GraphCreationSample : MonoBehaviour
{
    PlayableGraph m_Graph;
    public AnimationClip clipA;
    public AnimationClip clipB;

    void Start()
    {
        // Create the PlayableGraph.
        m_Graph = PlayableGraph.Create();

        // Add an AnimationPlayableOutput to the graph.
        var animOutput = AnimationPlayableOutput.Create(m_Graph, "AnimationOutput", GetComponent<Animator>());

        // Add an AnimationMixerPlayable to the graph.
        var mixerPlayable = AnimationMixerPlayable.Create(m_Graph, 2, false);

        // Add two AnimationClipPlayables to the graph.
        var clipPlayableA = AnimationClipPlayable.Create(m_Graph, clipA);
        var clipPlayableB = AnimationClipPlayable.Create(m_Graph, clipB);

        // Create the topology, connect the AnimationClipPlayables to the AnimationMixerPlayable.
        m_Graph.Connect(clipPlayableA, 0, mixerPlayable, 0);
        m_Graph.Connect(clipPlayableB, 0, mixerPlayable, 1);

        // Use the AnimationMixerPlayable as the source for the AnimationPlayableOutput.
        animOutput.SetSourcePlayable(mixerPlayable);

        // Set the weight for both inputs of the mixer.
        mixerPlayable.SetInputWeight(0, 1);
        mixerPlayable.SetInputWeight(1, 1);

        // Play the graph.
        m_Graph.Play();
    }

    private void OnDestroy()
    {
        // Destroy the graph once done with it.
        m_Graph.Destroy();
    }
}
Troubleshooting
You can find in this section common questions related to Dynamic reporting.
For Unique opens and Unique clicks, the count in the aggregate row is not matching the ones in individual rows
This is an expected behavior. We can take the following example to explain this behavior.
An email is sent to profiles P1 and P2.
P1 opens the email twice on the first day and then three times on the second day.
Whereas, P2 opens the email once on the first day and doesn't reopen it in the following days. Here is a visual representation of the profiles' interaction with the sent email:
To understand the overall number of unique opens, we need to sum up the row counts of Unique Opens which gives us the value 3. But since the email was targeted to only 2 profiles, the Open rate should show 150%.
To not obtain percentage higher than 100, the definition of Unique Opens is maintained to be the number of unique broadlogs that were opened. In this case even if P1 opened the email on Day 1 and Day 2, his unique opens will still be 1.
This will result in the following table:
Unique counts are based on an HLL-based sketch, this may cause slight inaccuracies at large counts.
Open counts do not match the Database count
This may be due to the fact that, heuristics are used in Dynamic reporting to track opens even when we can't track the Open action.
For example, if a user has disabled images on their client and click on a link in the email, the Open may not be tracked by the database but the Click will.
Therefore, the Open tracking logs counts may not have the same count in the database.
Such occurrences are added as "an email click implies an email open" .
Since unique counts are based on an HLL-based sketch, minor inconsistencies between the counts can be experienced.
How are counts for recurring/transactional deliveries calculated?
When working with recurring and transactional deliveries, the counts will be attributed to both the parent and child deliveries.
We can take the example of a recurring delivery named R1 set to run every day on day 1 (RC1), day 2 (RC2) and day 3 (RC3).
Let's assume that only a single person opened all the child deliveries multiple times. In this case, the individual recurring child deliveries will show the Open count as 1 for each.
However, since the same person clicked on all the deliveries, the parent recurring delivery will also have Unique open as 1.
After the Adobe Campaign Standard 19.2.1 release, the definition of Unique counts is changed from Number of unique persons interacting with the delivery to Number of unique messages interacted .
Before the Adobe Campaign Standard 19.2.1 release, reports looked like the following:
After the Adobe Campaign Standard 19.2.1 release, reports look like the following:
What is the colors' signification in my reports' table?
Colors displayed on your reports are randomized and cannot be personalized. They represent a progress bar and are displayed to help you better highlight the maximal value reached in your reports.
In the example below, the cell is of the same color since its value is 100%.
If you change the Conditional formatting to custom, when the value reaches the upper limit the cell will get greener. Whereas, if it reaches the lower limit, it will get redder.
For example, here, we set the Upper limit to 500 and **Lower limit** to 0.
Why does the value N/A appear in my reports?
The value N/A can sometimes appear in your dynamic reports. This can be displayed for two reasons:
- The delivery has been deleted and is shown here as N/A to not cause discrepancy in the results.
- When drag and dropping the Transactional Delivery dimension to your reports, the value N/A might appear as a result. This happens because Dynamic report fetches every delivery even if they are not transactional. This can also happen when drag and dropping the Delivery dimension to your report but in this case, the N/A value will represent transactional deliveries. | https://docs.adobe.com/content/help/en/campaign-standard/using/reporting/about-reporting/troubleshooting.html | 2019-11-12T02:52:56 | CC-MAIN-2019-47 | 1573496664567.4 | [array(['/content/dam/help/campaign-standard.en/help/reporting/using/assets/troubleshooting_1.png',
None], dtype=object)
array(['/content/dam/help/campaign-standard.en/help/reporting/using/assets/troubleshooting_2.png',
None], dtype=object)
array(['/content/dam/help/campaign-standard.en/help/reporting/using/assets/troubleshooting_3.png',
None], dtype=object) ] | docs.adobe.com |
{"last_updated":"Fri Nov 1 09:07:59 PST 2019","faqs":[{"category":"general","question":"What is Blockstack?","answer":". Led by some of the world’s foremost experts on distributed systems, Blockstack allows users to own their own data that they can take with them from app to app in the ecosystem, along with their Blockstack ID that eliminates the need for password-based logins. The end result is privacy, security, and freedom."},{"category":"general","question":"What is decentralized computing?","answer":"
Decentralized computing has these attributes:
Blockstack technology is a shift in how people use software, create it, and benefit from the internet. It puts people back in control of the computing systems that manage today’s world."},{"category":"general","question":"What is the Blockstack Ecosystem?","answer":":
Any person or organization working with the Blockstack technology in the open-source ecosystem is considered a part of it. Other than the above entities there are 80+ independent organization and apps built by teams of developers that are part of the Blockstack Ecosystem."},{"category":"general","question":"What is a decentralized internet?","answer":"."},{"category":"general","question":"What can developers achieve with the Blockstack Ecosystem?","answer":"
The Blockstack Ecosystem is working to enable developers to build software that protects users’ digital rights. This new kind of software is known as decentralized applications or DApps.
DApps use blockchain technology. Where Bitcoin is a decentralized value exchange on a blockchain, DApps."},{"category":"general","question":"What problems do Blockstack DApps solve for me as a user?","answer":"
Applications developed with Blockstack’s technology run like the traditional, web applications you know. Unlike traditional, web applications, DApps avoid abusing users by adhering to the following principles:
unlockit to view.."},{"category":"appusers","question":"What is a decentralized application or DApp?","answer":"
Decentralized applications or DApps are a new type of software application built with blockchain technology. Where Bitcoin is a decentralized value exchange on a blockchain, DApps use blockchain technology for more than value exchange; they use a blockchain to exchange data and support application interactions."},{"category":"appusers","question":"How do DApps differ from applications I typically use?","answer":"
DApps (decentralized applications) differ from Web applications in these key ways:
Yes! DApps run in the web browsers (Chrome, Safari, Internet Explorer, and so forth) you know today."},{"category":"appusers","question":"What is an identity or ID or Blockstack identity?","answer":" a DApp, you give the DApp permission to read your data and write to your data store on your behalf. When you log out of an application, it no longer has access to your data or data store — until the next time you log in with your identity."},{"category":"appusers","question":"Do I need to keep my identity secret?","answer":"
No. You can tell people your identity just as you tell them your name. What you need to secure and protect is your secret key."},{"category":"appusers","question":"How do I get an identity? Is it free?","answer":"."},{"category":"appusers","question":"If I forget my identity or my lose my secret key, can Blockstack help me?","answer":"."},{"category":"appusers","question":"Where is my identity kept?","answer":"
When you create an identity, your id and your private key are hashed (encrypted) and registered on Blockstack’s blockchain. The data you create through your identity is encrypted and kept off the blockchain in your data storage."},{"category":"appusers","question":"Can Blockstack delete my Blockstack ID or deny me use of it?","answer":"."},{"category":"appusers","question":"Can I get an identity without the Blockstack in the name, like steve.id?","answer":"."},{"category":"appusers","question":"Can I have more than one identity?","answer":"
Yes, you can create as many identities as you want."},{"category":"appusers","question":"Do identities last forever or do they expire?","answer":"."},{"category":"appusers","question":"Why do DApps ask me for an email in addition to an identity?","answer":"
Your email is not kept by DApps or by Blockstack. It is stored in your browser client’s local web storage. (See the question about data storage for more information about web storage.) When you are logged into a DApp, it can use your email to send you any information you need to operate the DApp. When you log out, your email is no longer available to the DApp."},{"category":"appusers","question":"Where can I find Blockstack DApps that I can use?","answer":"
You can see a list on the App.co site. Alternatively, you can go directly to your Blockstack Browser home page."},{"category":"appusers","question":"What is the Blockstack Browser?","answer":"
The Blockstack Browser is a DApp used to create and manage Blockstack identities. To a user, it looks just like another tab in a standard browser. From the Blockstack Browser tab, you can find DApps to try, update settings related to your identity and storage – and much more.
Developers use the Blockstack Browser to handle login requests from DApps. From a Blockstack DApp, a user chooses the Log In with Blockstack button. Clicking this button sends users to a Blockstack Browser dialog. This dialog asks users to allow the DApp to access their data."},{"category":"appusers","question":"Do Blockstack DApps work with my web browser?","answer":"
Yes! DApps using Blockstack run in the web browsers you know and love (Chrome, Safari, Firefox, and Edge). Blockstack DApps are web applications; they happen to use the blockchain. DApps are just as fast as traditional web applications, often more so. If you use our Web-hosted Blockstack Browser, you can get started using DApps right away."},{"category":"appusers","question":"Is there a downloadable version of the Blockstack Browser?","answer":"
Yes. You can download a desktop version of the Blockstack Browser here."},{"category":"appusers","question":"What is Gaia?","answer":"
The Gaia Storage System is a feature of the Blockstack Platform. Developers or organizations can use the Gaia Storage System to create a data storage provider. Users choose a data storage provider when they create an identity."},{"category":"appusers","question":"Where is the data about me kept or what is a data store?","answer":"."},{"category":"appusers","question":"What kind of data does a Blockstack DApp keep about me?","answer":"
Blockstack does not keep any data about you. When you login into an application, you are asked to provide an email. That email is in your browser’s web storage; it doesn’t leave your device (computer or phone). When you reset the Blockstack Browser or clear your browser’s web storage, the local storage and your email are removed."},{"category":"appusers","question":"What is a data storage provider?","answer":".
Currently, moving your data from one storage provider to another is not supported via the UI. You can do this move with assistance from Blockstack."},{"category":"dappdevs","question":"Who should build with the Blockstack Platform?","answer":"
Everyone! However, more seriously, if you are building an application in JavaScript that requires sign-in and storage, you should look at using Blockstack."},{"category":"dappdevs","question":"I’m a web developer. Can I build on the Blockstack Platform?","answer":"
Yes! Blockstack is geared primarily towards web developers. All of your existing knowledge is immediately applicable to Blockstack. Anything you can do in a web browser, you can do in a Blockstack app."},{"category":"dappdevs","question":"I’m a non-web developer. Can I build on Blockstack Platform?","answer":"."},{"category":"dappdevs","question":"How do I get started using Blockstack to build decentralized applications?","answer":"!"},{"category":"dappdevs","question":"What’s the difference between a web app and a Blockstack app?","answer":"."},{"category":"dappdevs","question":"Do I need to learn any new languages or frameworks?","answer":"
No. Blockstack applications are built using existing web frameworks and programming. The only new thing you need to learn is either blockstack.js or the Blockstack RESTful API."},{"category":"dappdevs","question":"What is the general architecture of the Blockstack Platform?","answer":"."},{"category":"dappdevs","question":"What is a
serverlessapp?","answer":"."},{"category":"dappdevs","question":"How does my web app interact with Blockstack?","answer":"
The blockstack.js library gives any web application the ability to interact with Blockstack’s authentication and storage services. In addition, we supply a public RESTful API."},{"category":"dappdevs","question":"What does blockstack.js do?","answer":"
This is the reference client implementation for Blockstack. You use it in your web app to do the following:
There are also mobile libraries for iOS and Android."},{"category":"dappdevs","question":"How do I use blockstack.js?","answer":"
Our documentation has several examples you can use to get started."},{"category":"dappdevs","question":"How do I register Blockstack IDs?","answer":"
You should use the Blockstack Browser."},{"category":"dappdevs","question":"How can I look up names and profiles?","answer":"
You can use blockstack.js, or you can use the public Blockstack Core endpoint."},{"category":"dappdevs","question":"What kind of scalability and performance can I expect from applications built with Blockstack?","answer":"
Blockstack uses the blockchain only for name registration. Data storage is kept off-chain in the Gaia Storage System. This basic application architecture means any application can perform and scale as they do without a blockchain."},{"category":"dappdevs","question":"Is there a limit to the file sizes I can store in a Gaia Storage System","answer":"
The file size limit is 25 MB per file."},{"category":"dappdevs","question":"Can I run a Gaia Storage System commercially?","answer":"
Yes, you can. Anyone interested in running a Gaia Storage System can run one and make it available to users."},{"category":"dappdevs","question":"Is the platform private or open sourced?","answer":"
The project is open-source, and anyone can contribute! The major contributors are mostly employees of Blockstack PBC. You can see the full list of contributors here:"},{"category":"dappdevs","question":"What programming language can I use to build these apps?","answer":"."},{"category":"dappdevs","question":"How is Blockstack different from Ethereum for building decentralized apps?","answer":"."},{"category":"dappdevs","question":"Can Blockstack applications interact with Bitcoin? Ethereum? Other blockchains?","answer":"
Yes! Since Blockstack applications are built like web applications, all you need to do is include the relevant Javascript library into your application."},{"category":"appminers","question":"What is App Mining?","answer":"
Traditionally began on."},{"category":"appminers","question":"Is App Mining internationally accessible?","answer":"
Yes, App Mining is internationally available. The reviewers today cater to an English-speaking audience. The long term vision for app mining entails developing a plan to make it more internationally accessible. We will continue to update the guidelines and FAQ as the program evolves."},{"category":"appminers","question":"What is the App Mining timeline?","answer":"
On the first of every month (or as listed on the calendar):
Over the next two weeks:
On the last day of ranking at 11:59pm ET: App mining results are sent to Blockstack by app reviewer partners.
On the 15th (or as listed on the calendar): Blockstack team performs App Mining algorithm as referenced here.
On the following weekday:
A week later:
The following calendar shows events each month related to App Mining:"},{"category":"appminers","question":"How much can I earn and how are rewards distributed? ","answer":"
Starting in August 2019, App Mining payouts will include an additional $100K payout in Stacks tokens (STX) on top of the existing $100k in BTC. And starting in November 2019 we plan to ramp the total monthly payout and switch entirely to paying out in Stacks tokens. By May 2020, we plan for the monthly payout to be $1M worth of Stacks tokens. App Mining rewards in STX earned before the hard fork will be accrued, and the tokens will be distributed following the hard fork. We expect the hard fork to occur approximately 30-60 days following the end of the term of the RegA+ cash offering.
For more detailed information, see How payouts are administered in the Blockstack documentation."},{"category":"appminers","question":"What is the Stacks token?","answer":".
Learn more at:"},{"category":"appminers","question":"How is App Mining different from cryptocurrency mining?","answer":"
Traditionally the term mining in cryptocurrency refers to the process of contributing compute resources to the network and earning a distributed of new tokens as a reward. On the Stacks blockchain, developers can
mine by contributing apps to the ecosystem and making applications the community wants...
Yes, it does."},{"category":"appminers","question":"How do I submit my application for App Mining?","answer":"."},{"category":"appminers","question":"When are my submission materials due?","answer":"
Your submission materials are due at 11:59 PM Eastern Time US on the last day of each month. For example, if you are making a submission for March, your materials must be submitted on or before Feb 28 at 11:59 EST."},{"category":"appminers","question":"How often can I submit my application for App Mining?","answer":"
You need only to submit your application once. Each month after your submission, your app is competing in App Mining."},{"category":"appminers","question":"Does my code repository need to be public?","answer":"
Your application code can be public or private."},{"category":"appminers","question":"Is Blockstack Auth difficult to integrate?","answer":"
If you’re already building your app with JavaScript, adding Blockstack authentication is easy! We have docs, tutorials, and thorough API references for you to learn from. Visit the Zero-to-Dapp tutorial for end-to-end training. Or use this short example.
If you’re developing a traditional server-side application, you can still take advantage of Blockstack authentication. For an example, check out our Ruby on Rails gem."},{"category":"appminers","question":"Who are the app reviewers?","answer":".
The app reviewers are TryMyUI, Awario, and the New Internet Labs. See here for more details about each reviewer. Future reviewers could expand to community election. Please see our GitHub repository to raise issues or make suggestions for App Mining."},{"category":"appminers","question":"How are apps ranked?","answer":"
App reviewers have a proprietary methodology that helps them make objective judgments for why one app might be better than another. Each app reviewers determines the data, formula, and personnel they wish to utilize. Reviewers must publish their methodology periodically to ensure transparency.
To learn more see the detailed explanation of our ranking algorithm on our documentation.
Apps are ranked by all app reviewers."},{"category":"appminers","question":"When are the winning payments made?","answer":"
Payouts are made on the 15th of every month."},{"category":"appminers","question":"What are examples of any quantitative metrics that may be shared with app reviewers?","answer":"
Qualitative metrics are metrics that evaluate elements such as engagement, DAU/MAU ratios, etc. from the reviewed apps. Blockstack plans to incorporate metrics based ranking. However, before we do, any mechanism must thoughtfully incorporate the digital privacy rights of Blockstack users, and provide information in a way that cannot be gamed."},{"category":"appminers","question":"Is App Mining Decentralized?","answer":"
Given the pioneering nature of the program, we are being careful and starting in a somewhat centralized fashion that allows for necessary diligence in the early stages, for example, the current pilot phase.."},{"category":"appminers","question":"How is App Mining protected against bribery, collusion, or gaming?","answer":"."},{"category":"appminers","question":"How do I propose and follow changes to App Mining?","answer":"
The App Mining GitHub repo is the best place to propose changes to App Mining."},{"category":"appminers","question":"App Mining Disclaimer","answer":"
The App Mining FAQs contain forward-looking statements, including statements regarding Blockstack PBC’s plans for its App Mining program.."},{"category":"coredevs","question":"What is Blockstack Core?","answer":"
Blockstack Core is the reference implementation of the Blockstack protocol described in our white paper. Today, it consists of these components:
The next version of the Stacks Blockchain is under active development in the Rust programming language and employs our Tunable Proof-of-Work consensus algorithm and a unique scaling solution that enable individual apps to create their own app chains."},{"category":"coredevs","question":"How will the next version of Blockstack Core change?","answer":"
The next version of Blockstack core will incorporate smart contacts and do away with the virtual chain. This next version is expected toward the end of the year and will contain these components:."},{"category":"coredevs","question":"Can anyone register names on Blockstack?","answer":"
Anyone can register a name or identity on Blockstack — you just need to use a Blockstack client to submit a registration request to a Blockstack core node. All of that software is Open-Source and can be run without any centralized parties."},{"category":"coredevs","question":"What is a Blockstack Subdomain?","answer":".
Blockstack subdomains can be obtained without spending Bitcoin by asking a subdomain registrar to create one for you."},{"category":"coredevs","question":"Is there a Blockstack name explorer?","answer":"
Yes! It’s at"},{"category":"coredevs","question":"Why should I trust the information, like name ownership or public key mappings, read from Blockstack?","answer":"."},{"category":"coredevs","question":"Do you have a testnet or sandbox to experiment with Blockstack?","answer":"."},{"category":"coredevs","question":"Which coin fuels the Blockstack blockchain?","answer":"."},{"category":"coredevs","question":"How is the Blockstack network upgraded over time? What parties need to agree on an upgrade?","answer":"."},{"category":"coredevs","question":"Who gets the registration fees for name registrations?","answer":"."},{"category":"coredevs","question":"Do I need to run a full Blockstack node to use Blockstack?","answer":"
No. We maintain a fleet of Blockstack Core nodes that your Blockstack applications can connect to by default."},{"category":"coredevs","question":"Can I run my own Blockstack node?","answer":"."},{"category":"coredevs","question":"What is the capacity per block for registrations using Blockstack?","answer":"
Initial registrations can be done at an order of hundreds per block, and once an identity is registered, you can do
unlimited updates to the data because that is off-chain.
Blockstack applications do not currently have access to the user’s wallet. Users are expected to register Blockstack IDs themselves.
However, if you feel particularly ambitious, you can do one of the following:
Set up a
blockstack api endpoint (see the project README) and write a program to automatically register names. Also, see the API documentation for registering names on this endpoint.
Write a
node.js program that uses blockstack.js to register names. This is currently in development.
Yes! Once you deploy your own subdomain registrar, you can have your web app send it requests to register subdomains on your Blockstack ID. You can also create a program that drives subdomain registration on your Blockstack ID."},{"category":"coredevs","question":"What language is the Blockstack software written in?","answer":"
The current Stacks chain is implemented in Python 2. The next version of the Stacks chain will be written in Rust.
Our libraries and many of our developer tools are written in Javascript (and we’re working on porting much of our Javascript to TypeScript)."},{"category":"coredevs","question":"What if the current companies and developers working on Blockstack disappear, would the network keep running?","answer":"."},{"category":"opensource","question":"How old is the Blockstack project?","answer":"
Work on the project started in late 2013. First public commits on the code are from Jan 2014. The first registrar for Blockstack was launched in March 2014, and the project has been growing since then."},{"category":"opensource","question":"What is the current development roadmap look like?","answer":"
See this page for the current development roadmap."},{"category":"opensource","question":"Where are the current core developers based? What are the requirements for being a core developer?","answer":"."},{"category":"opensource","question":"I heard some companies working on Blockstack have raised venture capital, how does that impact the project?","answer":"."},{"category":"miscquest","question":"What’s the difference between Onename and Blockstack?","answer":"."},{"category":"miscquest","question":"Does Blockstack use a DHT (Distributed Hash Table)?","answer":"
It does not, as of November 2016. It uses a much more reliable system called the Atlas Network. Details here:"},{"category":"miscquest","question":"How can I transfer my Stacks Token out of my Stacks Wallet?","answer":"."},{"category":"wallet","question":"What is a seed phrase?","answer":"
When you create a wallet address, you also create a seed phrase. With one significant exception, a seed phrase is similar to a banking pin in that it gives you access to your wallet and your token allocation. Unlike a pin, if you lose your seed phrase, you can never access your wallet or your token allocation ever again.
Warning: Losing a seed phrase is final and destructive. Blockstack does not keep a copy of your seed. Blockstack cannot give you a new seed, get your access to your wallet, or return your tokens if you lose your seed phrase.
Keep your seed phrase secret. Just as with a banking pin, anyone that knows or steals your seed phrase can access your allocation.
You should write your seed phrase down and store the paper you write on in at least two secure locations. A safe or lockbox is a good location. You can also store it online in an encrypted password manager such as 1Password. You should never simply store a seed phrase in Apple Cloud or Dropbox."},{"category":"wallet","question":"I have lost my seed phrase, what can I do?","answer":"
Your seed phrase is a 24-word combination that was given to you during the setup of your Stacks Wallet. Unfortunately, as noted during the Stacks Wallet setup, there is no way we can recover your 24-word seed phrase for you."},{"category":"wallet","question":"How do I keep my tokens secure?","answer":"
The safety of your Stacks tokens is extremely important to us. To help ensure that you complete the process of receiving your tokens correctly and securely, please read the following guidelines:
Website Safety: When inputting data on a website or verifying the presence of data on a website, it is essential that you confirm that the URL is correct and that your browser is using HTTPS.
Email Safety: Blockstack will never ask for your personal identifying information over email, or call you directly. When we ask verifying questions, we will only do so when you call us to confirm your identity. We will never ask you for money or your Seed Phrase (private key).
If you have large token holdings, make sure you take advantage of custodial services. A wallet does not provide the security of a custodial service."},{"category":"wallet","question":"Where can I access the Stacks Wallet?","answer":"
The Stacks Wallet is available for download at wallet.blockstack.org."},{"category":"wallet","question":"What is a public Stacks Wallet address?","answer":"
During the initial grant process, investors submitted a public Stacks Wallet address to Blockstack. This is a string of letters and numbers starting with an ‘SP’ or SM’; for example,
SP017AUV5YRM7HT3TSQXJF7FCCYXETAB276BQ6XY is a wallet address.
If you purchased Stacks tokens through stackstoken.com using CoinList, you can find your address at CoinList. If you submitted your Stacks address directly to Blockstack, you can either use the
Restore from Seed Phrase feature on the Stacks Wallet or contact us at [email protected] for help.
Currently, the only software wallet that supports Stacks is the Blockstack Wallet software."},{"category":"wallet","question":"Can I use the older version 1 Blockstack Wallet?","answer":"
Version 1 of the Blockstack Wallet software was a read-only wallet. To view Stacks balances, send or receive Stacks you need to use the latest version 3 wallet. You can use the seed phrase you created with the old wallet with the new version."},{"category":"wallet","question":"Can I send Bitcoin to my Stacks Wallet?","answer":"
No, you cannot send Bitcoin to a Stacks address. You can only add Bitcoin as fuel to the wallet. Please follow the instructions for adding "gas"."},{"category":"wallet","question":"How do I get help with my wallet from a person?","answer":"
For questions or help regarding the Stacks token, you can contact us at [email protected]."},{"category":"tokens","question":"How do I check the status of my previously purchased Stacks tokens?","answer":"
You may check the status of previously purchased Stacks tokens at the Blockstack Explorer. Additional wallet-related information is available here."},{"category":"tokens","question":"How do I check my STX balance?","answer":"."},{"category":"tokens","question":"Is there an exchange where I can buy and sell Stacks tokens?","answer":"
Stacks tokens will be available for secondary trading on Binance's global exchange or Hashkey Pro's institutional exchange.
Only non-US persons located outside of the US are eligible to buy or sell STX on these exchanges, subject to your country's laws."},{"category":"tokens","question":"When will Stacks tokens be available on an exchange for US residents?","answer":"."},{"category":"tokens","question":"If I am already a Stacks token holder, how does the listing on Binance affect me?","answer":"."},{"category":"tokens","question":"Can I sell or transfer Stacks tokens directly to someone else?","answer":"."},{"category":"tokens","question":"How can I use Stacks within the Blockstack Ecosystem?","answer":"."},{"category":"tokens","question":"What types of transfer or time locks are Stacks tokens subject to?","answer":"."},{"category":"tokens","question":"What types of transfer or time locks apply to the current and future sales of Stacks tokens?","answer":"."},{"category":"tokens","question":"What is the current circulating supply of Stacks?","answer":"."},{"category":"tokens","question":"How will mining affect the supply of Stacks tokens?","answer":"."},{"category":"tokens","question":"How much money has Blockstack raised?","answer":"."},{"category":"tokens","question":"What has been the recent traction on the network?",."},{"category":"tokens","question":"How were tokens distributed to early investors?","answer":"."}]} | https://docs.blockstack.org/faqs/allfaqs.json | 2019-11-12T02:47:25 | CC-MAIN-2019-47 | 1573496664567.4 | [] | docs.blockstack.org |
Simulate mode¶
In this mode, Hoverfly uses its simulation data in order to simulate external APIs. Each time Hoverfly receives a request, rather than forwarding it on to the real API, it will respond instead. No network traffic will ever reach the real external API.
The simulation can be produced automatically by running Hoverfly in Capture mode, or created manually. See Simulations for more information.
'../../../_images/simulate.mermaid.png'], dtype=object)] | docs.hoverfly.io |
Streamlio Cloud Preview provides an easy way to try out and learn about Streamlio Cloud, the Streamlio-managed streaming and messaging service powered by Apache Pulsar. Streamlio Cloud Preview provides access to all of the core functionality of Streamlio Cloud and Apache Pulsar without any setup or configuration required.
This short guide will help you understand what is provided in Streamlio Cloud Preview and how to make use of it. Included in this guide:
Tips on how to get started with Streamlio Cloud Preview
A guide to using Streamlio Cloud Preview including the management interface, connectors, and monitoring capabilities
For more detailed information about Streamlio Cloud and Apache Pulsar, see the Streamlio Cloud User Guide and Apache Pulsar documentation. If you are interested in trying out Streamlio Cloud Preview, you can register for an account online at cloud.streamlio.com/signup.
Streamlio Cloud Preview is a sandbox deployment of the Streamlio Cloud service. It supports the core functionality of Streamlio Cloud and Apache Pulsar for experimentation and learning about Streamlio Cloud and Apache Pulsar.
There are a few important differences between Streamlio Cloud Preview and Streamlio Cloud including the following (see the sections further below for more details):
Not for production. We do not provide official customer support for Streamlio Cloud Preview--it is provided "as is" for learning and education purposes only.
Not for heavy workloads. Streamlio Cloud Preview has resource limits applied. As a result, Streamlio Cloud Preview is not appropriate for performance testing, large scale workloads, production or staging environments, etc. See below for more details on specific limits.
Management interfaces. Some Pulsar CLI functionality will work with Streamlio Cloud Preview, however Streamlio Cloud Preview does not support all functions of the Apache Pulsar nor Streamlio CLI interfaces.
We are definitely interested in hearing about your experience with Streamlio Cloud Preview. If you are interested in our production-ready offering, please take a look at Streamlio Cloud for more information.
Streamlio Cloud Preview supports the core functionality of Streamlio Cloud and Apache Pulsar such as the following:
Creation of topics and subscriptions
Ingestion of data to Pulsar topics
Output of data from a Pulsar topic to an external sink
Deployment of Pulsar Functions to process data in topics
Creation of custom producers and consumers using the Apache Pulsar client libraries
Streamlio Cloud Preview does not provide the following Streamlio Cloud and/or Apache Pulsar functionality:
Ability to create and use multiple namespaces
User-managed scaling of brokers, bookies, and function workers
Tiered storage
Cross-cluster replication
SQL querying of data in topics
Create and manage users
Streamlio Cloud Preview has the following limits:
Namespaces: your Streamlio Cloud Preview account has access to one preconfigured namespace
Topics: you can create up to 10 topics in Streamlio Cloud Preview
Pulsar Functions: up to 2 Pulsar Functions can be deployed
Sources: you can create up to two data sources
Sinks: up to 2 data sinks can be created
Data retention: individual messages are retained for up to 24 hours, after which they are automatically deleted
Inactive accounts: accounts that have not been accessed for more than 30 consecutive days will be reset and all objects created in those accounts (including topics, functions, sources, sinks, etc.) will be deleted
Copyright 2019 Streamlio, Inc. Apache, Apache BookKeeper, Apache Pulsar and associated open source project names are trademarks of the Apache Software Foundation.
Complete the following guide to upgrade from TUNE Android SDK 4.x to 4.8 and above:
- If you are already using In-App Marketing, skip this step and continue to step 2. Otherwise complete the following 3 steps to integrate TUNE into your app:
- Add a custom Application class or update your Application class so that it either extends TuneApplication or that you add our activity lifecycle callbacks to it.
- Update your AndroidManifest.xml Application tag to reference your custom Application class.
<application android:name=".YourCustomApplicationClass" ... >
- Tune.init needs to be called inside your Application's onCreate method (a minimal sketch follows this list).
- If you support API 14 or below, update all of your Activities to extend TuneActivity OR add TUNE start/resume/stop, etc. calls. For more information about updating activities, see our Getting Started article on using API 14 or below as your minimum SDK.
- Remove all calls to tune.measureSession and tune.setReferralSource. TUNE calls these automatically when your Activities resume. measureSession and setReferralSource are deprecated and will be removed in the 5.0 version of the TUNE Android SDK.
- Any calls to tune.checkForDeferredDeeplink or tune.setDeeplinkListener should be changed to tune.registerDeeplinkListener as of 4.8.0. checkForDeferredDeeplink and setDeeplinkListener are deprecated and will be removed in the TUNE Android SDK 5.0.
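For reference, a minimal sketch of the custom Application class from step 1 might look like the following. The advertiser ID and conversion key are placeholders, and the exact Tune.init signature and import paths are assumptions that should be checked against the 4.8+ SDK reference:

import android.app.Application;
import com.tune.Tune;

// If you prefer, extend TuneApplication instead (as described in step 1) so the
// activity lifecycle callbacks are registered for you; otherwise register them here too.
public class MyApplication extends Application {
    @Override
    public void onCreate() {
        super.onCreate();
        // Placeholder credentials -- replace with your own advertiser ID and conversion key.
        Tune.init(this, "your_advertiser_id", "your_conversion_key");
    }
}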
Thank you for upgrading! You are now ready to use the TUNE Android SDK v4.8.0 and above! | https://tune.docs.branch.io/sdk/migrating-to-android-4-8-0-and-above/ | 2019-11-12T03:07:26 | CC-MAIN-2019-47 | 1573496664567.4 | [] | tune.docs.branch.io |
App Store Camera Usage Description
When submitting your app to the app stores, you are required to have a description of how you use the camera and photo library.
There is already a default usage message that says: "This app allows you to upload photos."
If you need to change that for any reason, you can update the following field:
| https://docs.apppresser.com/article/451-app-store-camera-usage-description | 2019-11-12T04:27:56 | CC-MAIN-2019-47 | 1573496664567.4 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/543577d6e4b01359b0ede64c/images/5c05700a04286304a71cee89/file-bMCiEPQV2w.png',
None], dtype=object) ] | docs.apppresser.com |
Connect(); Special Issue 2018
Volume 33 Number 13
Scott Hunter
.
Read the articles in the December 2018 issue, including content on Azure Kubernetes Services, Azure IoT Central, Blazor Custom Components, and more.
Mads Kristensen
Visual Studio 2019 introduces exciting improvements and new features aimed at optimizing developer productivity and team collaboration. Whether you’re using Visual Studio for the first time or have been using it for years, you’ll benefit from features that improve all aspects of the development lifecycle.
Julie Lerman.
James McCaffrey.
Krishna Anumalasetty
With automated Machine Learning, Microsoft is working towards its quest of making AI more accessible for every developer and data scientist. This article explores building an energy demand forecasting solution using automated ML.
Micheal Learned.
Kevin Farlee
Learn about Azure SQL Database Hyperscale, a revolutionary new architecture that has the unique benefit of providing full compatibility with previous generations of SQL engines.
Michael Crump
Be more productive with Azure App Services with these handy tips that solve common developer scenarios and can shave hours off your coding tasks.
David Ortinau
The new Xamarin.Forms Shell introduced at the Connect(); conference in December acts as a container for your application. Shell takes care of the basic UI features every application needs so that you can focus on the core work of your application.
Michael Desmond
The Connect(); conference is part of a larger conversation between Microsoft and its developer community. This special issue of MSDN Magazine explores the vision Microsoft has articulated and how it will impact developers. | https://docs.microsoft.com/en-us/archive/msdn-magazine/2018/connect/connect-;-2018 | 2019-11-12T04:24:18 | CC-MAIN-2019-47 | 1573496664567.4 | [] | docs.microsoft.com |
To complete this walkthrough, you need:
- An x86- or x64-based computer running Windows XP, Windows Vista, or a Windows Server 2003 operating system.
- Windows OPK or Windows AIK.
- A CD or DVD burner to create portable media.
- Image-burning software.
- .msi package. The setup program will automatically install tools and documentation on your local computer under C:\Program Files\Windows AIK\.
Default Installation Directories
By default, the | https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-vista/cc766148%28v%3Dws.10%29 | 2019-11-12T02:47:56 | CC-MAIN-2019-47 | 1573496664567.4 | [] | docs.microsoft.com |
Searching documents from a custom search index
There are three ways you can search for a Solr document in Martini:
using one-liners, using a
SolrClient object, or using the Solr Search API. the one-liner is the easiest way to update an existing document.
One-liners from
SolrMethods in particular can help. For example:.
Using the Solr Search API
Alternatively, you can use the extended Solr Search API to query documents. This will involve the use of Solr's SearchHandlers via the Solr-derived endpoint. For example: | https://docs.torocloud.com/martini/latest/custom-search-index/searching/ | 2019-11-12T04:27:14 | CC-MAIN-2019-47 | 1573496664567.4 | [] | docs.torocloud.com |
If you like the work we are doing with InaSAFE and would like to contribute, we will welcome your involvement. We spend a lot of time and effort trying to make a robust, user-friendly and useful software platform, but there is no substitute for having interested users participating and sharing their needs, problems and successes! Here are a few simple ways you can get involved:
The web site will always contain the latest information on how to use InaSAFE. We encourage anyone who wants to get involved with the project to first read the content available on the site to familiarise themselves with the content. The website is available at:
If you need help in solving problems with InaSAFE take a look at Getting Help.
As a User/Trainer/Developer of InaSAFE you are encouraged to add yourself to the Usermap page available at:
Use the Add me to map! button on the left side and point the cross to the place where you live. Fill in your user data and click on Done!. Here you may also download the coordinates of all entered InaSAFE users as a CSV file.
We maintain an issue tracker here:
On this page you can browse and search existing issues and create new issues. The issue tracker is a great place to let us know about specific bugs you encounter or tell us about new features you would like to see in the software. Information about how to correctly file an issue is available in the Submit an Issue section.
Internet Relay Chat (IRC) is a chat room environment where you can talk (by typing messages) to other InaSAFE users and developers to discuss ideas and get help. You can use your own IRC client and join #inasafe on the irc.freenode.net network. We also have a direct link on. Click on Chat live! at the top of the page and join us in the channel. Alternatively, you can use your web browser to join the chat room using the link below:
On the form that appears, choose a user name, enter #inasafe in the Channels: box and complete the rest of the details in the form. After logging in wait a few moments and you will be taken to the #inasafe channel.
Note
Other people in the room may not be actively watching the channel, so just ask your question, leave the chat window open and check back every now and then until you see other chat room members become active.
We must emphasise that InaSAFE is free and open source. That means anyone (or any organisation) can freely modify, adapt and improve the software. We welcome any contributions to InaSAFE.
The easiest way to do this is to fork the InaSAFE code base on GitHub and then send us a pull request.
We also welcome small improvements, translations or other fixes via the issue management system mentioned above.
Note
We have strict requirements that all code submitted to InaSAFE is compliant with high coding_standards and is continually tested by a comprehensive regression testing system. We have this requirement in place to ensure a good experience for our users and to ensure that users can have confidence in the results produced by InaSAFE. | http://docs.inasafe.org/en/user-docs/getting_involved.html | 2019-11-12T02:59:30 | CC-MAIN-2019-47 | 1573496664567.4 | [] | docs.inasafe.org |
Welcome to Django 1.7! These release notes cover the new features, as well as some backwards incompatible changes you'll want to be aware of when upgrading from Django 1.6 or older versions. Django 1.7 requires Python 2.7 or above.
If you are upgrading from South, see our Upgrading from South documentation, and third-party app authors should read the South 1.0 release notes for details on how to support South and Django migrations simultaneously.

The app-loading refactor is one of the more visible changes in this release; here is a short list of its effects:
Applications can run code at startup, before Django does anything else, with the ready() method of their configuration.
Application labels are assigned correctly to models even when they're defined outside of models.py. You don't have to set app_label explicitly any more.
It is possible to omit models.py entirely if an application doesn't have any models.
Applications can be relabeled with the label attribute of application configurations, to work around label conflicts.
The name of applications can be customized in the admin with the verbose_name of application configurations.
The admin automatically calls autodiscover() when Django starts. You can consequently remove this line from your URLconf.
The new QuerySet.as_manager() class method creates a Manager exposing custom QuerySet methods:
Previously, the custom QuerySet and its custom methods were lost after the first call to values() or values_list(), and a custom Manager was still necessary to return the custom QuerySet class; all methods that were desired on the Manager had to be duplicated on it as well.
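A minimal sketch of the new pattern (the model and method names here are illustrative, not taken from the release notes):

from django.db import models

class FoodQuerySet(models.QuerySet):
    def pizzas(self):
        return self.filter(kind='pizza')

class Food(models.Model):
    kind = models.CharField(max_length=50)
    # Exposes pizzas() on the manager and keeps it available after chaining.
    objects = FoodQuerySet.as_manager()

# Usage: Food.objects.pizzas().values_list('kind', flat=True)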
It is now possible to write custom lookups and transforms for the ORM.
Custom lookups work just like Django's built-in lookups.
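For example, a custom "not equal" lookup can be registered roughly like this (an adapted sketch; the exact as_sql() parameter names vary between Django versions):

from django.db.models import Field, Lookup

class NotEqual(Lookup):
    lookup_name = 'ne'

    def as_sql(self, qn, connection):
        # Compile both sides of the comparison and combine their parameters.
        lhs, lhs_params = self.process_lhs(qn, connection)
        rhs, rhs_params = self.process_rhs(qn, connection)
        params = lhs_params + rhs_params
        return '%s <> %s' % (lhs, rhs), params

Field.register_lookup(NotEqual)

# Usage: Author.objects.filter(name__ne='Jack')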
Form error handling

Form.add_error()

Previously there were two main patterns for handling errors in forms:
Raising a ValidationError from within certain functions (e.g. Field.clean(), Form.clean_<fieldname>(), or Form.clean() for non-field errors).
Fiddling with Form._errors when targeting a specific field in Form.clean(), or when adding errors from outside of a "clean" method (e.g. directly from a view).

The new Form.add_error() method provides a supported way to do both.
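A short sketch of add_error() in use (the form and validation rule are made up for illustration):

from django import forms

class ContactForm(forms.Form):
    subject = forms.CharField(max_length=100)
    message = forms.CharField(widget=forms.Textarea)

    def clean(self):
        cleaned_data = super(ContactForm, self).clean()
        if 'spam' in cleaned_data.get('message', ''):
            # Attach the error to a specific field without touching Form._errors.
            self.add_error('message', 'No spam, please.')
        return cleaned_data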
django.contrib.admin

You can now implement site_header, site_title, and index_title attributes on a custom AdminSite in order to easily change the admin site's page title and header text. No more needing to override templates!
Buttons in django.contrib.admin now use the border-radius CSS property for rounded corners rather than GIF background images.
Some admin templates now have app-<app_name> and model-<model_name> classes in their <body> tag to allow customizing the CSS per app or per model.
Admin changelist cells now include a field-<field_name> class in the HTML to enable style customizations.
The admin's search fields can now be customized per-request thanks to the new django.contrib.admin.ModelAdmin.get_search_fields() method.
The ModelAdmin.get_fields() method may be overridden to customize the value of ModelAdmin.fields.
In addition to the existing admin.site.register syntax, you can use the new register() decorator to register a ModelAdmin (see the example after this list).
You may now set ModelAdmin.list_display_links = None to disable links on the change list page grid.
You can now specify ModelAdmin.view_on_site to control whether or not to display the "View on site" link.
You can now use a descending sort order for a ModelAdmin.list_display value by prefixing the admin_order_field value with a hyphen.
The ModelAdmin.get_changeform_initial_data() method may be overridden to define custom behavior for setting initial change form data.
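The register() decorator mentioned above can be used like this (the model and options are illustrative):

from django.contrib import admin
from .models import Author  # illustrative model

@admin.register(Author)
class AuthorAdmin(admin.ModelAdmin):
    # Equivalent to admin.site.register(Author, AuthorAdmin).
    search_fields = ['name']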
django.contrib.auth

Extra **kwargs are now passed through to the underlying send_mail() call.
The permission_required() decorator can take a list of permissions as well as a single permission.
You can override the new AuthenticationForm.confirm_login_allowed() method to more easily customize the login policy.
django.contrib.auth.views.password_reset() takes an optional html_email_template_name parameter used to send a multipart HTML email for password resets.
The AbstractBaseUser.get_session_auth_hash() method was added and, if your AUTH_USER_MODEL inherits from AbstractBaseUser, changing a user's password now invalidates old sessions if the SessionAuthenticationMiddleware is enabled. See Session invalidation on password change for more details, including upgrade considerations when enabling this new middleware.
django.contrib.formtools

The keyword arguments passed to WizardView.done() now include a form_dict to allow easier access to forms by their step name.
django.contrib.gis

Prepared geometries now support the crosses, disjoint, overlaps, touches and within predicates, if GEOS 3.3 or later is installed.
django.contrib.messages

The backends for django.contrib.messages that use cookies will now follow the SESSION_COOKIE_SECURE and SESSION_COOKIE_HTTPONLY settings.
The "django.contrib.sessions.backends.cached_db" session backend now respects SESSION_CACHE_ALIAS. In previous versions, it always used the default cache.
django.contrib.sitemaps

The sitemap framework now uses lastmod to set a Last-Modified header in the response. This makes it possible for the ConditionalGetMiddleware to handle conditional GET requests for sitemaps which set lastmod.
django.contrib.sites

The new django.contrib.sites.middleware.CurrentSiteMiddleware allows setting the current site on each request.
Access to the caches configured in CACHES is now available via django.core.cache.caches. This dict-like object provides a different instance per thread. It supersedes django.core.cache.get_cache(), which is now deprecated.
django.core.cache.caches now yields different instances per thread.
Defining the TIMEOUT argument of the CACHES setting as None will set the cache keys as "non-expiring" by default. Previously, it was only possible to pass timeout=None to the cache backend's set() method.
The new CSRF_COOKIE_AGE setting facilitates the use of session-based CSRF cookies.
send_mail() now accepts an html_message parameter for sending a multipart text/plain and text/html email.
The SMTP email backend now accepts a timeout parameter.
The new UploadedFile.content_type_extra attribute contains extra parameters passed to the content-type header on a file upload.
The new FILE_UPLOAD_DIRECTORY_PERMISSIONS setting controls the file system permissions of directories created during file upload, like FILE_UPLOAD_PERMISSIONS does for the files themselves.
The FileField.upload_to attribute is now optional. If it is omitted or given None or an empty string, a subdirectory won't be used for storing the uploaded files.
The clean() method on a form no longer needs to return self.cleaned_data. If it does return a changed dictionary then that will still be used.
It is now possible to make the TypedChoiceField coerce method return an arbitrary value.
SelectDateWidget.months can be used to customize the wording of the months displayed in the select widget.
The min_num and validate_min parameters were added to formset_factory() to allow validating a minimum number of submitted forms.
The metaclasses used by Form and ModelForm have been reworked to support more inheritance scenarios. The previous limitation that prevented inheriting from both Form and ModelForm simultaneously has been removed, as long as ModelForm appears first in the MRO.
It is now possible to remove a field from a Form when subclassing by setting its name to None.
The new django.middleware.locale.LocaleMiddleware.response_redirect_class attribute allows you to customize the redirects issued by the middleware.
The blocktrans tag now supports a trimmed option.
When you run makemessages from the root directory of your project, any extracted strings will now be automatically distributed to the proper app or project message file. See Localization: how to create language files for details.
The makemessages command now always adds the --previous command line flag to the msgmerge command, keeping previously translated strings in po files for fuzzy strings.
The language cookie can now be customized with the new settings LANGUAGE_COOKIE_AGE, LANGUAGE_COOKIE_DOMAIN and LANGUAGE_COOKIE_PATH.
The new
--no-color option for
django-admin disables the
colorization of management command output.
The new
dumpdata --natural-foreign and
dumpdata
--natural-primary options,:
compilemessages.
favicon.icoth.
QuerySet.update_or_create()method was added.
default_permissionsmodel
Metaoption allows you to customize (or disable) creation of the default add, change, and delete permissions.
OneToOneFieldfor Multi-table inheritance are now discovered in abstract classes.
OneToOneFieldby setting its
related_nameto
'+'or ending it with
'+'.
F expressionssupport the power operator (
**).
remove()and
clear()methods of the related managers created by
ForeignKeyand
GenericForeignKeynow accept the
bulkkeyword argument to control whether or not to perform operations in bulk (i.e. using
QuerySet.update()). Defaults to
True.
Noneas a query value for the
iexactlookup.
limit_choices_towhen defining a
ForeignKeyor
ManyToManyField.
only()and
defer()on the result of
QuerySet.values()now raises an error (before that, it would either result in a database error or incorrect data).
index_together(rather than a list of lists) when specifying a single set of fields.
ManyToManyField.through_fieldsargument.
internal_type. Previously model field validation didn’t prevent values out of their associated column data type range from being saved resulting in an integrity error.
order_by()a relation
_idfield by using its attribute name.
enterargument was added to the
setting_changedsignal.
strof the
'app_label.ModelName'form – just like related fields – to lazily reference their senders.
Context.push()method now returns a context manager which automatically calls
pop()upon exiting the
withstatement. Additionally,
push()now accepts parameters that are passed to the
dictconstructor used to build the new context level.
Context.flatten()method returns a
Context‘s stack as one flat dictionary.
Contextobjects can now be compared for equality (internally, this uses
Context.flatten()so the internal structure of each
Context‘s stack doesn’t matter as long as their flattened version is identical).
widthratiotemplate tag now accepts an
"as"parameter to capture the result in a variable.
includetemplate tag will now also accept anything with a
render()method (such as a
Template) as an argument. String arguments will be looked up using
get_template()as always.
includetemplates recursively.
TEMPLATE_DEBUGis
True. This allows template origins to be inspected and logged outside of the
django.templateinfrastructure.
TypeErrorexceptions are no longer silenced when raised during the rendering of a template.
dirsparameter which is a list or tuple to override
TEMPLATE_DIRS:
django.template.loader.get_template()
django.template.loader.select_template()
django.shortcuts.render()
django.shortcuts.render_to_response()
timefilter now accepts timezone-related format specifiers
'e',
'O',
'T'and
'Z'and is able to digest time-zone-aware
datetimeinstances performing the expected rendering.
cachetag will now try to use the cache called “template_fragments” if it exists and fall back to using the default cache otherwise. It also now accepts an optional
usingkeyword argument to control which cache it uses.
truncatechars_htmlfilter truncates a string to be no longer than the specified number of characters, taking HTML into account.
HttpRequest.schemeattribute specifies the scheme of the request (
httpor
httpsnormally).
redirect()now supports relative URLs.
JsonResponsesubclass of
HttpResponsehelps easily create JSON-encoded responses.
DiscoverRunnerhas two new attributes,
test_suiteand
test_runner, which facilitate overriding the way tests are collected and run.
fetch_redirect_responseargument was added to
assertRedirects(). Since the test client can’t fetch externals URLs, this allows you to use
assertRedirectswith redirects that aren’t part of your Django app.
assertRedirects().
secureargument was added to all the request methods of
Client. If
True, the request will be made through HTTPS.
assertNumQueries()now prints out the list of executed queries if the assertion fails.
WSGIRequestinstance generated by the test handler is now attached to the
django.test.Response.wsgi_requestattribute.
TEST.
strip_tags()accuracy (but it still cannot guarantee an HTML-safe result, as stated in the documentation)...
Django 1.7 loads application configurations and models as soon as it starts. While this behavior is more straightforward and is believed to be more robust, regressions cannot be ruled out. See Troubleshooting for solutions to some problems you may encounter.:
INSTALLED_APPSor have an explicit
app_label.
Django will enforce these requirements as of version 1.9, after a deprecation period.
Subclasses of
AppCommand must now implement a
handle_app_config() method instead of
handle_app(). This method receives an
AppConfig
instance instead of a models module.raises
LookupErrorinstead of returning
Nonewhen no model is found.
only_installedargument of
get_modeland
get_modelsno longer exists, nor does the
seed_cacheargument of
get_model.constructor and internal storage¶
The behavior of the
ValidationError constructor has changed when it
receives a container of errors as an argument (e.g. a
list or an
ErrorList):
ValidationErrorbefore adding them to its internal storage.
ValidationErrorinstance)
LocMemCacheregarding.
Noneto.
RuntimeWarning rather than raising
CommandError.:
dumpdatato save your data.
migrateto create the updated schema.
loaddatato import the fixtures you exported in (1).
SessionAuthenticationMiddleware to
the default project template (pre-1.7.2 only), a database must be created
before accessing a page using
runserver.
The addition of the
schemes argument to
URLValidator will appear
as a backwards-incompatible change if you were previously using a custom
regular expression to validate schemes. Any scheme not listed in
schemes
will fail validation, even if the regular expression matches the given URL..attributeand
GenericRelationnow live in
fields.
BaseGenericInlineFormSetand
generic_inlineformset_factory()now live in
forms.
GenericInlineModelAdmin,
GenericStackedInlineand
GenericTabularInlinenowmodulesmethodmethod¶
The
BaseMemcachedCache._get_memcache_timeout() method has been renamed to
get_backend_timeout(). Despite being a private API, it will go through the
normal deprecation.
The
--natural and
-n options for
dumpdata have been
deprecated. Use
dumpdata --natural-foreign instead.
Similarly, the
use_natural_keys argument for
serializers.serialize()
has been deprecated. Use
use_natural_foreign_keys instead.
GETargumentsclass¶
MergeDict exists primarily to support merging
GET
arguments into a
REQUEST property on
WSGIRequest. To merge
dictionaries, use
dict.update() instead. The class
MergeDict is
deprecated and will be removed in Django 1.9.
zh-cn,
zh-twizefunction.
Google has retired support for the Geo Sitemaps format. Hence Django support for Geo Sitemaps is deprecated and will be removed in Django 1.8.setting¶
The
ADMIN_FOR feature, part of the admindocs, has been removed. You can
remove the setting from your configuration at your convenience.
SplitDateTimeWidgetwithvalidutils via the
runfcgi management command will be removed in
Django 1.9. Please deploy your project using WSGI. APIs
django.db.models.sql.where.WhereNode.make_atom() and
django.db.models.sql.where.Constraint are deprecated in favor of the new
custom lookups API..
HttpResponse,
SimpleTemplateResponse,
TemplateResponse,
render_to_response(),
index(), and
mimetypeargument
HttpResponseimmediately consumes its content if it’s an iterator.
AUTH_PROFILE_MODULEsetting, and the
get_profile()method on the User model are removed.
cleanupmanagement command is removed.
daily_cleanup.pyscript is removed.
select_related()no longer has a
depthkeyword argument.
get_warnings_state()/
restore_warnings_state()functions from
django.test.utilsand the
save_warnings_state()/
restore_warnings_state()django.test.*TestCase are removed.
check_for_test_cookiemethod in
AuthenticationFormis removed.
django.contrib.auth.views.password_reset_confirm()that supports base36 encoded user IDs (
django.contrib.auth.views.password_reset_confirm_uidb36) is removed.
django.utils.encoding.StrAndUnicodemix-in is removed. | https://django.readthedocs.io/en/1.9.x/releases/1.7.html | 2019-11-12T03:31:17 | CC-MAIN-2019-47 | 1573496664567.4 | [] | django.readthedocs.io |
How BMC Remedy Email Engine works
This section contains information about:
This topic presents a sample scenario that demonstrates how Email Engine interacts with the BMC Remedy AR System and your mail server. The following figure presents a sample environment for an Email Engine implementation, including the flow of activity.
How Email Engine interacts with the AR System server
(Click the image to expand it.)
In the XYZ Company, Shelly needs a list of the latest issues stored in the Help Desk (HD) Incident form. She wants the results of this query to be returned in an easy-to-read email. Also, Shelly wants to make sure that her coworkers, Katie and Mark, will be copied with the results of this query. All of the steps that Email Engine and the users must take to make this happen follow.
- The local administrator installs Email Engine, configuring Incoming and Outgoing mailboxes to work with the company mail server.
After Email Engine is started, it contacts the AR System server. It then reads all the entries in the AR System Email Mailbox Configuration form and creates Incoming and Outgoing mailboxes based on that information.
- After the administrator notifies the user base that Email Engine is running, Shelly composes an email instructing the Email Engine to perform a query of the HD Incident form. She uses specifically formatted instructions to be read and understood by the Email Engine. She sends this message to an email account on the company mail server that Email Engine polls for incoming.
- After waiting for a prescribed polling period, Email Engine logs in to the company mail server by using the email account information gathered during step 1. Because the mailbox information tells the Email Engine that this email account is to be treated as an Incoming Mailbox, the Email Engine reads the most recent emails from this account, by using one of several email protocols (POP3, IMAP4, MBOX, or MAPI), including the email that Shelly sent.
- Email Engine interprets the instructions and translates them into API calls to the AR System server, attempting to fulfill her query request.
- The AR System server responds to Email Engine API calls with the appropriate query information for the HD Incident form.
- Email Engine uses the formatting instructions in the Outgoing Mailbox to construct an email message to the company mail server. Email Engine then transmits the message with instructions to send the message to Shelly, Mark, and Katie, by using the outgoing email protocol (SMTP or MAPI).
- Shelly, Mark, and Katie log in to the mail server, and they find the email constructed by the Email Engine, which contains a neatly formatted list of the most recent requests.
This example illustrates the relationship between the Email Engine and other systems in a simplified environment. Your environment might differ from the one presented here. For example, the Email Engine might reside on the same system as the AR System server. Alternatively, you might configure the Incoming Mailbox and Outgoing Mailbox to use the same email account on your mail server, and so on. Many of the configuration options available are explained in the upcoming sections. Also, as you proceed through this section, you will learn about the other Email Engine features for processing email. | https://docs.bmc.com/docs/ars81/how-bmc-remedy-email-engine-works-225970198.html | 2019-11-12T05:00:01 | CC-MAIN-2019-47 | 1573496664567.4 | [] | docs.bmc.com |
This page contains information about the V-Ray Marble Simple Texture in V-Ray for Modo.
Page Contents
Overview
The V-Ray Marble Simple Texture generates a procedural marble pattern with two colors or texture maps and two other parameters. For a marble texture with finer control, use the V-Ray Marble Texture.
UI Path: ||Shading viewport|| > Shader Tree > Add Layer button > V-Ray Textures Procedural > V-Ray Marble Simple
Parameters
Color 1 – Controls the color of the filler, this is usually the main color of the marble.
Color 2 – Controls the color of the marbles veins.
Size – Controls the scale of the procedural texture. For more details, please see the Size example below.
Vein Width – Controls how wide the veins in the marble will be. For more details, please see the Vein Width example below.
Example: Size
Size: 0.5
Size: 1.0
Size: 5.0
Example: Vein Width
Vein Width: 0.01
Vein Width: 0.02
Vein Width: 0.05 | https://docs.chaosgroup.com/display/VRAYMODO/V-Ray+Marble+Simple+Texture | 2019-11-12T04:16:45 | CC-MAIN-2019-47 | 1573496664567.4 | [] | docs.chaosgroup.com |
Этот документ доступен на русском языке.
Useful resources:
This article describes the basic chatbot publishing procedure on Chatbotlab platform, as well as the process of connecting a chatbot to various channels via Azure portal (Bot Framework), using an example of Telegram and Facebook Messenger.
- Microsoft account.
- A registered bank card with a balance of at least $1 USD. (This amount will be blocked on your account by Azure service for some time only for verification of the card).
Channels for the publication of a chatbot on Azure portal are provided free, using the F0 plan. More detailed information about plans you can find here .
If you have any difficulties with registration, please write to our technical support.
Available publishing channels work through the Azure (Bot Framework) service. To work with the service you will need a Microsoft account. You can create it on the registration page.
To use Azure portal services, you need to add a subscription to your account. To do this, log into the Microsoft Azure page using your login and password, and then go to Subscriptions section in the upper left corner of page, and click on add subscription.
On the next page you will be asked to choose one of the subscriptions. We recommend you to choose, first of all, the Pay-As-You-Go option of subscription, because it is free and involves payment only for additional services. To activate a subscription, you will need to fill out all the forms with data that system will require. After activating a subscription, you can start working with Azure portal by clicking on Portal in upper right corner of the Microsoft Azure page.
In order to register a chatbot on Azure portal, you need to use the Bot Channels Registration service.
Click on the New button, which is on the left at Azure portal panel. In left part of the window that appears, select AI + Cognitive Services line, and then click on See all opposite the name of list - Featured - in the right part of window to expand the full list of services. Find Bot Service section and click on Bot Channels Registration.
After that, appears a window with information about service. At the bottom of window, click the Create button.
In opened Bot Service window, you need to provide required information about a chatbot:
- Enter chatbot’s Bot name: "ACService". Name must be between 4 and 36 characters.
- Next, select your subscription in Subscription field. In our case, this is Pay-As-You-Go.
- The Resource groups field is filled in automatically with chatbot name: "ACService". You can change a group name or select an existing one.
Resource groups - container with related resources for the Azure solution.
- Select a geographic region closest to chatbot creation in Location field.
- Next, you must select Pricing tier. We choose recommended and free price category - F0.
- Then fill in the Messaging endpoint field. The URL address of Messaging endpoint can be found in a chatbot profile on our website.
Application Insights - free service for monitoring and analyzing chatbot work.
- After providing all necessary information about chatbot, click the Create button at the bottom of Bot Service window.
After a while, in the upper right corner of portal you will see a notification about successful deployment of the chatbot - Deployment succeeded.
During a publication of chatbot on the ChatbotLab platform, you will need the Microsoft App ID and Microsoft App password credentials, which will need to be copied and pasted into the Application ID and Application password fields, respectively, in BotFramework.com channel settings.
In order to obtain these credentials:
- Click on Resource groups in the left panel of the Azure portal, select needed "ACService" resource in the resource list, and then click on the similar name of our chatbot.
- In the left panel of Bot Channels Registration service window, in Bot management section select the Settings menu. In Bot profile, we find Microsoft App ID line in Configuration section. Copy and save the Application ID shown below it.
- Then click on Manage link near Microsoft App ID and on the opened page, in Application Secrets section, click the Generate New Password button. Copy and save the Application password in a safe place.
If you have an error when generating a new password, you can find out the existing password as follows:
- Go into Resource groups, select the required resource and click on Deployments line in the left panel of resource window in Settings section.
- Then in the list, find a name of desired chatbot and click on it.
- In the next window you can find APPSECRET line in Inputs section, which is the Application password.
Now you can use these credentials to publish a chatbot on the ChatbotLab platform.
Go to My chatbots from Chatbot tab at the top of website and find chatbot in the list. Click on the More button and select Profile to go to the chatbot profile.
In the chatbot profile, Common section contains the URL address of Messaging endpoint. It must be copied and pasted into appropriate field during a registration of chatbot on the Azure portal.
In Channels section, click on Add to add an available BotFramework.com channel to the list of connected channels. After that, click on Edit to get into channel configurator.
At configurator of the BotFramework.com channel, you must enter the credentials - Application ID and *Application password* - of your chatbot, copied on the Azure portal.
Next, in Channels section of the chatbot profile, switch status Published of the channel BotFramework.com to On state.
Then click on Publish to publish our chatbot.
Received notifications that the channel and chatbot were published. Now chatbot can be used by others.
In Common section of your chatbot's profile, you can always check its publication status by looking at Current status.
If you no longer want others to have access to your chatbot, you can unpublish chatbot in its profile. Just click on Unpublish.
Before you connect your chatbot to various Azure portal channels, you can test its work with integrated web-chat.
To do this, go to our chatbot on the Azure portal: 'Resource groups > ACService resource > ACService chatbot'. Then, on the left panel of chatbot window select Test in Web Chat menu in Bot management section. Next, will be opened a window with a web-chat where you can check whether the publication on the ChatbotLab platform is correct and communicate with your chatbot.
Connecting Telegram channel:
Following the simple steps, we create “ACService_Bot” chatbot in Telegram messenger using the BotFather. After that we get a token. Copy it.
Open our chatbot on the Azure portal: 'Resource groups > ACService resource > ACService chatbot'. Then in the left panel of chatbot window select Channels menu in Bot management section. Select Telegram in More channels section. In channel settings, insert the token received from Telegram messenger in Access token field. Click Save.
Telegram channel is connected!
First of all we need to create new Facebook App. Enter the name of our App: "ACService" and click on Create App ID.
On Product Setup page of the app find Messenger line and activate Facebook Messenger in application by clicking on Get Started .
Then, need to set up Webhooks for the Messenger:
On the Azure portal, go to channel menu of our chatbot: 'Resource group > ACService resource > ACService chatbot> Channels (in Bot management section on the left panel of chatbot window)'. Select Facebook Messenger in More channels section, and in channel settings, copy values in the Callback URL and Verify Token fields.
Paste these values into the appropriate fields of New Page Subscription in Facebook App. And select in Subscription Fields the following items: messages, message_deliveries, messaging_postbacks, and messaging_optins. Click on Verify and Save.
Next, generate Page Access Token by selecting target page from the list. Copy received token.
Fill in the fields of Facebook Messenger credentials on the configuration page of Facebook Messenger channel.
- Paste copied Page Access Token in the appropriate field.
- Facebook App ID and Facebook App Secret you can find on Dashboard tab on the Facebook App page.
- Facebook Page ID can be found in More info section of “About” of the target Facebook Page.
After all required fields are filled in, click on Save.
Facebook Messenger channel is connected!
Let's test the work of connected chatbot channels in Telegram and Facebook Messenger.
Telegram:
- Go to the chatbot’s Channels menu on the Azure portal: 'Resource group > ACService resource > ACService chatbot> Channels (in Bot management section on the left panel of chatbot window)'. In Connect to channels section click on name of the connected channel - Telegram - and go to our chatbot “ACService_Bot” in Telegram messenger.
- Click on Start, welcome the chatbot and try to make a request for a service.
Telegram channel works!
Facebook Messenger:
- In chatbot’s Channels menu on the Azure portal, click on name of the connected Facebook Messenger channel in Connect to channels section and go to the page of conversation with our chatbot “Air Conditioning Service” in Messenger.
- Click on Start, welcome the chatbot and try to make a request for a service, answering chatbot questions.
Facebook Messenger channel works!
In this way, you can run the same chatbot once created on our website simultaneously on several platforms. | https://docs.chatbotlab.io/docs/publishing-a-chatbot | 2019-11-12T03:14:23 | CC-MAIN-2019-47 | 1573496664567.4 | [] | docs.chatbotlab.io |
Single Sign-On (SSO)¶
Single Sign-On (or SSO) allows you to manage your organization’s entire membership via a third party provider.
Preface¶
Before you get around to actually turning on SSO, you’ll want to keep in mind that once it’s activated, all existing users will need to link their account before they are able to continue using Sentry. Because of that we recommend coordinating with your team during off-peak hours. That said, it’s super quick to link accounts, so we don’t consider it a true hurdle.
Note
SSO is not available on certain grandfathered plans.
Getting Started¶
With that out of the way, head on over to your organization home. You’ll see an “Auth” link in the sidebar. Start by hitting that, and then continue to the “Configure” link next to provider you wish to configure.
Additionally we’ll automatically send each pre-existing member an email with instructions on linking their account. This will happen automatically once SSO is successfully configured. Even if they dont click the link, the next time they try to hit any page within the organization we’ll require them to link their account (with the same auth flow you just went through).
Default Membership¶
Every member who creates a new account via SSO will be given global organization access with a member role. This means that they can access events from any team, but they won’t be able to create new projects or administer current ones.
Security¶
Our SSO implementation prioritizes security. We aggressively monitor linked accounts and will disable them within any reasonable sign that the account’s access may have been revoked. Generally this will be transparent to you, but if the provider is functioning in an unexpected way you may experience more frequent re-authorization requests.
Providers¶
Google Business App¶
Enabling the Google integration will ask you to authenticate against a Google
Apps account. Once done, membership will be restricted to only members of the
given Apps domain (i.e.
sentry.io).
GitHub Organizations¶
The GitHub integration will authenticate against all organizations, and once complete prompt you for the organization which you wish to restrict access by.
Currently GitHub Enterprise is not supported. If your company needs support for GE, let us know.
SAML2 Identity Provider¶
Sentry provides SAML2 based authentication which may be configured manually using the generic SAML2 provider, or a specific provider which provides defaults specific to that identity provider.
Sentry’s SAML endpints are as follows, where the
{organization_slug} is
substituted for your organization slug:
Note
SAML2 SSO requires an Enterprise Plan.
OneLogin¶
In your OneLogin dashboard locate the Sentry app in the app catalog and add it to your organization.
As part of OneLogin SSO configuration, you must to provide the OneLogin identity provider issuer URL to Sentry. This URL is specific to your OneLogin account and can be found under the ‘SSO’ tab on the Sentry OneLogin application configuration page.
You may refer to the OneLogin documentation for more detailed setup instructions.
Okta¶
In your Okta admin dashboard locate the Sentry app in the Okta Application Network and add it to your organization.
As part of the Okta SSO configuration, you must provide the Okta Identity Provider metadata to Sentry. This URL can be located under the Sign-On Methods SAML2 settings panel, look for the ‘Identity Provider metadata’ link which can may right click and copy link address.
You may refer to the Okta documentation for more detailed setup instructions.
Auth0¶
In your Auth0 dashboard locate the Sentry app under the SSO Integrations page and add it to your organization.
As part of the Auth0 SSO configuration, you must provide the Auth0 Identity Provider metadata to Sentry. This URL is available under the Tutorial tab of the Sentry SSO integration.
Rippling¶
In your Rippling admin dashboard locate the Sentry app in the list of suggested apps and select it.
When prompted with the Rippling Metadata URL, copy this into the Sentry Rippling provider configuration. You will have to complete the Rippling application configuration before completing the sentry provider configuration. | https://docs.sentry.io/learn/sso/ | 2017-12-11T07:35:34 | CC-MAIN-2017-51 | 1512948512584.10 | [] | docs.sentry.io |
Welcome to the Menpo documentation!'): image.crop_to_landmarks_inplace() images.append(image)
Where
import_images yields a generator to keep memory usage low.
Although the above is a very simple example, we believe that being able to easily manipulate and couple landmarks with images and meshes, is an important problem for building powerful models in areas such as facial point localisation.
To get started, check out the User Guide for instructions on installation and some of the core concepts within Menpo. | http://docs.menpo.org/en/v0.4.4/ | 2017-12-11T07:25:36 | CC-MAIN-2017-51 | 1512948512584.10 | [] | docs.menpo.org |
Deprecated CLR Hosting Functions
This section describes the unmanaged global static functions that earlier versions of the hosting API used.
With the exception of the infrastructure functions (
_Cor* functions), which are used only by the .NET Framework, these functions have been deprecated in the .NET Framework 4.
Activation functions
ClrCreateManagedInstance Function
Deprecated. Creates an instance of the specified managed type.
CoInitializeCor Function
Obsolete. To initialize the common language runtime (CLR), use either CorBindToRuntimeEx or CorBindToCurrentRuntime.
CoInitializeEE Function
Deprecated. Ensures that the CLR execution engine is loaded into a process. Use the ICLRRuntimeHost::Start method instead.
CorBindToCurrentRuntime Function
Deprecated. Loads the common language runtime (CLR) into a process by using version information stored in an XML file.
CorBindToRuntime Function
Deprecated. Enables unmanaged hosts to load the CLR into a process.
CorBindToRuntimeByCfg Function
Deprecated. Loads the CLR into a process by using version information that is read from an XML file.
CorBindToRuntimeEx Function
Deprecated. Enables unmanaged hosts to load the CLR into a process, and allows you to set flags to specify the behavior of the CLR.
CorBindToRuntimeHost Function
Deprecated. Enables hosts to load a specified version of the CLR into a process.
GetCORRequiredVersion Function
Deprecated. Gets the required CLR version number.
GetCORSystemDirectory Function
Deprecated. Returns the installation directory of the CLR that is loaded into the process.
GetRealProcAddress Function
Deprecated. Gets the address of the specified function that is exported from the latest installed version of the CLR.
GetRequestedRuntimeInfo Function
Deprecated. Gets version and directory information about the CLR requested by an application.
CLR version functions
The functions in this section return a CLR version; they do not activate the CLR.
GetCORVersion Function
Deprecated. Returns the version number of the CLR that is running in the current process.
GetFileVersion Function
Deprecated. Gets the CLR version information of the specified file, using the specified buffer.
GetRequestedRuntimeVersion Function
Deprecated. Gets the version number of the CLR requested by the specified application. If that version is not installed, gets the most recent version that is installed before the requested version.
GetRequestedRuntimeVersionForCLSID Function
Deprecated. Gets the appropriate CLR version information for the class with the specified CLSID.
GetVersionFromProcess Function
Deprecated. Gets the version number of the CLR that is associated with the specified process handle.
LockClrVersion Function
Deprecated. Allows the host to determine which version of the CLR will be used within the process before explicitly initializing the CLR.
Hosting functions
CallFunctionShim Function
Deprecated. Makes a call to the function that has the specified name and parameters in the specified library.
CoEEShutDownCOM Function
Deprecated. Unloads a COM assembly from the process.
CorExitProcess Function
Deprecated. Shuts down the current unmanaged process.
CorLaunchApplication Function
Deprecated. Starts the application at the specified network path, using the specified manifests and other application data.
CorMarkThreadInThreadPool Function
Deprecated. Marks the currently executing thread-pool thread for the execution of managed code. Starting with the .NET Framework version 2.0, this function has no effect. It is not required, and can be removed from your code.
CoUninitializeCor Function
Obsolete. The CLR cannot be unloaded from a process.
CoUninitializeEE Function
Obsolete.
CreateDebuggingInterfaceFromVersion Function
Deprecated. Creates an ICorDebug object based on the specified version information.
CreateICeeFileGen Function
Deprecated. Creates an ICeeFileGen object.
DestroyICeeFileGen Function
Deprecated. Destroys an ICeeFileGen object.
FExecuteInAppDomainCallback Function Pointer
Deprecated. Points to a function that the CLR calls to execute managed code.
FLockClrVersionCallback Function Pointer
Deprecated. Points to a function that the CLR calls to notify the host that initialization has either started or completed.
GetCLRIdentityManager Function
Deprecated. Gets a pointer to an interface that allows the CLR to manage identities.
LoadLibraryShim Function
Deprecated. Loads a specified version of a .NET Framework DLL.
LoadStringRC Function
Deprecated. Translates an HRESULT value into an error message by using the default culture of the current thread.
LoadStringRCEx Function
Deprecated. Translates an HRESULT value to an appropriate error message for the specified culture.
LPOVERLAPPED_COMPLETION_ROUTINE Function Pointer
Deprecated. Points to a function that notifies the host when an overlapped (that is, asynchronous) I/O to a device has completed.
LPTHREAD_START_ROUTINE Function Pointer
Deprecated. Points to a function that notifies the host that a thread has started to execute.
RunDll32ShimW Function
Deprecated. Executes the specified command.
WAITORTIMERCALLBACK Function Pointer
Deprecated. Points to a function that notifies the host that a wait handle has either been signaled or timed out.
Infrastructure functions
The functions in this section are for use by the .NET Framework only.
_CorDllMain Function
Initializes the CLR, locates the managed entry point in the DLL assembly's CLR header, and begins execution.
_CorExeMain Function
Initializes the CLR, locates the managed entry point in the executable assembly's CLR header, and begins execution.
_CorExeMain2 Function
Executes the entry point in the specified memory-mapped code. This function is called by the operating system loader.
_CorImageUnloading Function
Notifies the loader when the managed module images are unloaded.
_CorValidateImage Function
Validates managed module images, and notifies the operating system loader after they have been loaded.
See Also
.NET Framework 4 Hosting Global Static Functions | https://docs.microsoft.com/en-us/dotnet/framework/unmanaged-api/hosting/deprecated-clr-hosting-functions | 2017-12-11T08:14:18 | CC-MAIN-2017-51 | 1512948512584.10 | [] | docs.microsoft.com |
:
Callback function for a button created by cv::createButton.
Callback function for mouse events. see cv::setMouseCallback.
Callback function defined to be called every frame. See cv::setOpenGlDrawCallback.
Callback function for Trackbar see cv::createTrackbar.
Mouse Event Flags see cv::MouseCallback.
Mouse Events see cv::MouseCallback.
Qt "button" type.
Qt font style.
Qt font weight.
Flags for cv::namedWindow.
Flags for cv::setWindowProperty / cv::getWindowProperty.
Creates.
[Qt Backend Only] winname can be empty (or NULL) if the trackbar should be attached to the control panel.
Clicking the label of each trackbar enables editing the trackbar values manually.
Destroys all of the HighGUI windows.
The function destroyAllWindows destroys all of the opened HighGUI windows.
Destroys the specified window.
The function destroyWindow destroys the window with the given name.
Gets the mouse-wheel motion delta, when handling mouse-wheel events cv::EVENT_MOUSEWHEEL and cv::EVENT_MOUSEHWHEEL.
For regular mice with a scroll-wheel, delta will be a multiple of 120. The value 120 corresponds to a one notch rotation of the wheel or the threshold for action to be taken and one such action should occur for each delta. Some high-precision mice with higher-resolution freely-rotating wheels may generate smaller values.
For cv::EVENT_MOUSEWHEEL positive and negative values mean forward and backward scrolling, respectively. For cv::EVENT_MOUSEHWHEEL, where available, positive and negative values mean right and left scrolling, respectively.
With the C API, the macro CV_GET_WHEEL_DELTA(flags) can be used alternatively.
Mouse-wheel events are currently supported only on Windows.
Returns the trackbar position.
The function returns the current position of the specified trackbar.
[Qt Backend Only] winname can be empty (or NULL) if the trackbar is attached to the control panel.
Provides parameters of a window.
The function getWindowProperty returns properties of a window. window was created with OpenGL support, cv::imshow also support ogl::Buffer , ogl::Texture2D and cuda::GpuMat as input.
If the window was not created before this function, it is assumed creating a window with cv::WINDOW_AUTOSIZE.
If you need to show an image that is bigger than the screen resolution, you will need to call namedWindow("", WINDOW_NORMAL) before the imshow.
[Windows Backend Only] Pressing Ctrl+C will copy the image to the clipboard.
[Windows Backend Only] Pressing Ctrl+S will show a dialog to save the image.
Moves window to the specified position.
Creates a window.
The function namedWindow creates a window that can be used as a placeholder for images and trackbars. Created windows are referred to by their names.
If a window with the same name already exists, the function does nothing.
You can call cv::destroyWindow or cv::destroyAllWindows to close the window and de-allocate any associated memory usage. For a simple program, you do not really have to call these functions because all the resources and windows of the application are closed automatically by the operating system upon exit.
Qt backend supports additional flags:
Resizes window to the specified size.
This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.
Selects ROI on the given image. Function creates a window and allows user to select a ROI using mouse. Controls: use
space or
enter to finish selection, use key
c to cancel selection (function will return the zero cv::Rect).
This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.
Selects ROIs on the given image. Function creates a window and allows user to select a ROIs using mouse. Controls: use
space or
enter to finish current selection and start a new one, use
esc to terminate multiple ROI selection process.
Sets mouse handler for the specified window.
Sets the trackbar maximum position.
The function sets the maximum position of the specified trackbar in the specified window.
[Qt Backend Only] winname can be empty (or NULL) if the trackbar is attached to the control panel.
Sets the trackbar minimum position.
The function sets the minimum position of the specified trackbar in the specified window.
[Qt Backend Only] winname can be empty (or NULL) if the trackbar is attached to the control panel.
Sets the trackbar position.
The function sets the position of the specified trackbar in the specified window.
[Qt Backend Only] winname can be empty (or NULL) if the trackbar is attached to the control panel.
Changes parameters of a window dynamically.
The function setWindowProperty enables changing properties of a window.
Updates window title.
Waits for a pressed key.
The function waitKey waits for a key event infinitely (when \(\texttt{delay}\leq.
This function is the only method in HighGUI that can fetch and handle events, so it needs to be called periodically for normal event processing unless HighGUI is used within an environment that takes care of event processing.
The function only works if there is at least one HighGUI window created and the window is active. If there are several HighGUI windows, any of them can be active. | https://docs.opencv.org/trunk/d7/dfc/group__highgui.html | 2017-12-11T07:43:06 | CC-MAIN-2017-51 | 1512948512584.10 | [] | docs.opencv.org |
Migrating from SDK2 to SDK3 API
The 3.0 API breaks the existing 2.0 APIs in order to provide a number of improvements. Collections and Scopes are introduced. The Document class and structure has been completely removed from the API, and the returned value is now
Result. Retry behaviour is more proactive, and lazy bootstrapping moves all error handling to a single place. Individual behaviour changes across services are explained here.
Fundamentals
The Couchbase SDK team takes semantic versioning seriously, which means that API should not be broken in incompatible ways while staying on a certain major release. This has the benefit that most of the time upgrading the SDK should not cause much trouble, even when switching between minor versions (not just bugfix releases). The downside though is that significant improvements to the APIs are very often not possible, save as pure additions — which eventually lead to overloaded methods.
To support new server releases and prepare the SDK for years to come, we have decided to increase the major version of each SDK and as a result take the opportunity to break APIs where we had to. As a result, migration from the previous major version to the new major version will take some time and effort — an effort to be counterbalanced by improvements to coding time, through the simpler API, and performance. The new API is built on years of hands-on experience with the current SDK as well as with a focus on simplicity, correctness, and performance.
Before this guide dives into the language-specific technical component of the migration, it is important to understand the high level changes first. As a migration guide, this document assumes you are familiar with the previous generation of the SDK and does not re-introducing SDK 2.0 concepts. We recommend familiarizing yourself with the new SDK first by reading at least the getting started guide, and browsing through the other chapters a little.
Terminology
The concept of a
Cluster and a
Bucket remain the same, but a fundamental new layer is introduced into the API:
Collections and their
Scopes.
Collections are logical data containers inside a Couchbase bucket that let you group similar data just like a Table does in a relational database — although documents inside a collection do not need to have the same structure.
Scopes allow the grouping of collections into a namespace, which is very usfeul when you have multilpe tenants acessing the same bucket.
Couchbase Server is including support for collections as a developer preview in version 6.5 — in a future release, it is hoped that collections will become a first class concept of the programming model.
To prepare for this, the SDKs include the feature from SDK 3.0.
In the previous SDK generation, particularly with the
KeyValue API, the focus has been on the codified concept of a
Document.
Documents were read and written and had a certain structure, including the
id/
key, content, expiry (
ttl), and so forth.
While the server still operates on the logical concept of documents, we found that this model in practice didn’t work so well for client code in certain edge cases.
As a result we have removed the
Document class/structure completely from the API.
The new API follows a clear scheme: each command takes required arguments explicitly, and an option block for all optional values.
The returned value is always of type
Result.
This avoids method overloading bloat in certain languages, and has the added benefit of making it easy to grasp APIs evenly across services.
As an example here is a KeyValue document fetch:
$getResult = $collection->get("key", (new GetOptionsl())->timeout(3000000));
Compare this to a N1QL query:
$queryResult = $cluster->query("select 1=1", (new QueryOptions())->timeout(3000000));
Since documents also fundamentally handled the serialization aspects of content, two new concepts are introduced: the
Serializer and the
Transcoder.
Out of the box the SDKs ship with a JSON serializer which handles the encoding and decoding of JSON.
You’ll find the serializer exposes the options for methods like N1QL queries and KeyValue subdocument operations,.
The KV API extends the concept of the serializer to the
Transcoder.
Since you can also store non-JSON data inside a document, the
Transcoder allows the writing of binary data as well.
It handles the object/entity encoding and decoding, and if it happens to deal with JSON makes uses of the configured
Serializer internally.
See the Serialization and Transcoding section below for details.
What to look out for
The SDKs are more proactive in retrying with certain errors and in certain situations, within the timeout budget given by the user — as an example, temporary failures or locked documents are now being retried by default — making it even easier to program against certain error cases.
This behavior is customizable in a
RetryStrategy, which can be overridden on a per operation basis for maximum flexibility if you need it.
Note, most of the bootstrap sequence is now lazy (happening behind the scenes). For example, opening a bucket is not raising an error anymore, but it will only show up once you perform an actual operation. The reason behind this is to spare the application developer the work of having to do error handling in more places than needed. A bucket can go down 2ms after you opened it, so you have to handle request failures anyway. By delaying the error into the operation result itself, there is only one place to do the error handling. There will still be situations why you want to check if the resource you are accessing is available before continuing the bootstrap; for this, we have the diagnostics and ping commands at each level which allow you to perform those checks eagerly.
Language Specifics
Now that you are familiar with the general theme of the migration, the next sections dive deep into the specifics. First, installation and configuration are covered, then we talk about exception handling, and then each service (i.e. Key/Value, Query,…) is covered separately.
Installation and Configuration
As with 2.x release, the primary source of artifacts is the release notes page, where we publish links to pre-built binaries, as well as to source tarballs.
SDK 3.x supports PHP interpreters from 7.2: | https://docs.couchbase.com/php-sdk/3.0/project-docs/migrating-sdk-code-to-3.n.html | 2020-09-19T00:01:11 | CC-MAIN-2020-40 | 1600400189264.5 | [] | docs.couchbase.com |
Troubleshooting AD Replication error 1818: The remote procedure call was cancelled
This article describes an issue where Active Directory Replications fail with error 1818: The remote procedure call was cancelled (RPC_S_CALL_CANCELLED).
Original product version: Windows Server 2019, Windows Server 2016, Windows Server 2012 R2
Original KB number: 2694215
Notice
Home users: This article is only intended for technical support agents and IT professionals. If you're looking for help with a problem, ask the Microsoft Community.
Symptoms
This article describes the symptoms, cause, and resolution steps when Active Directory replication fails with error 1818: The remote procedure call was cancelled (RPC_S_CALL_CANCELLED).
Possible formats for the error include:
The following events get logged
. Repadmin /showreps displays the following error message
DC=Contoso,DC=com
<Sitename>\<DCname> via RPC DC
DC object GUID: b8b5a0e4-92d5-4a88-a601-61971d7033af Last attempt @ 2009-11-25 10:56:55 failed, result 1818 (0x71a): Can't retrieve message string 1818
(0x71a), error 1815. 823 consecutive failure(s). Last success @ (never).
Repadmin /showreps from Domain Controller Name shows that it's failing to pull Domain NC from <SiteName> but can pull all other NCs
===================
<Sitename>\<DC name>
DC Options: IS_GC
Site Options: IS_INTER_SITE_AUTO_TOPOLOGY_DISABLED
DC object GUID: d46a672b-f6be-431e-81c6-23b1f284f8c9
DC invocationID: 5d74f6b0-08f1-408a-b4d5-4759adafe219
==== INBOUND NEIGHBORS =====================================
DC=Contoso,DC=com
<Sitename>\<DCname> via RPC DC
DC object GUID: b8b5a0e4-92d5-4a88-a601-61971d7033af
Last attempt @ 2011-11-11 10:45:49 failed, result 1818 (0x71a):
Can't retrieve message string 1818 (0x71a), error 1815. 123 consecutive
failure(s). Last success @ (never).
DCPromo may fail while promoting a new domain controller and you'll see the following error on the DCPROMO GUI
Active Directory Installation wizard.
The Operation Failed because: Active Directory could not replicate the directory partition CN=Configuration....from the remote domain controller "server name" " The remote procedure call was cancelled "
The following entries will be logged in the DCPROMO logs
06/29/2010 22:31:36 [INFO] EVENTLOG (Informational): NTDS General / Service Control : 1004
Active Directory Domain Services was shut down successfully.
**
06/29/2010 22:31:38 [INFO] NtdsInstall for <FQDN fo the domain> returned 1818
06/29/2010 22:31:38 [INFO] DsRolepInstallDs returned 1818
06/29/2010 22:31:38 [ERROR] Failed to install to Directory Service (1818)
06/29/2010 22:31:38 [ERROR] DsRolepFinishSysVolPropagation (Abort Promote) failed with 8001
06/29/2010 22:31:38 [INFO] Starting service NETLOGON
06/29/2010 22:31:38 [INFO] Configuring service NETLOGON to 2 returned 0
06/29/2010 22:31:38 [INFO] The attempted domain controller operation has completed
06/29/2010 22:31:38 [INFO] DsRolepSetOperationDone returned 0
While trying to rehost a partition on the Global catalog
repadmin /rehost fail with DsReplicaAdd failed with status
1818 (0x71a)>
DsReplicaAdd fails with status 1818 (0x71a)
Cause
The issue occurs when the destination DC performing inbound replication doesn't receive replication changes within the number of seconds specified in the "RPC Replication Timeout" registry key. You might experience this issue most frequently in one of the following situations:
- You promote a new domain controller into the forest by using the Active Directory Installation Wizard (Dcpromo.exe).
- Existing domain controllers replicate from source domain controllers that are connected over slow network links.
The default value for the RPC Replication Timeout (mins) registry setting on Windows 2000-based computers is 45 minutes. The default value for the RPC Replication Timeout (mins) registry setting on Windows Server 2003-based computers is 5 minutes. When you upgrade the operating system from Windows 2000 to Windows Server 2003, the value for the RPC Replication Timeout (mins) registry setting is changed from 45 minutes to 5 minutes. If a destination domain controller that is performing RPC-based replication doesn't receive the requested replication package within the time that the RPC Replication Timeout (mins) registry setting specifies, the destination domain controller ends the RPC connection with the non-responsive source domain controller and logs a Warning event.
Some specific root causes for Active Directory logging 1818 \ 0x71a \ RPC_S_CALL_CANCELLED include:
- An old Network Interface Card driver installed on Domain Controllers could cause failure of a few network features like Scalable Networking Pack (SNP)
- Low bandwidth or network packets drops between source and destination domain controllers.
- The networking device between source and destination device dropping packets. Note: A speed and duplex mismatch between the NIC and switch on a domain controller could cause dropped frames, resets, duplicate acknowledgments, and retransmitted frames.
Resolution
Increase replication time-out adding the key RPC Replication Timeout (mins) on HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters
o Start Registry Editor.
o Locate the following registry subkey: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters
o Right-click Parameters, point to New, and then click DWORD Value.
o Type RPC Replication Timeout (mins), and then press ENTER to name the new value.
o Right-click RPC Replication Timeout (mins), and then click Modify.
o In the Value data box, type the number of minutes that you want to use for the RPC timeout for Active Directory replication, and then click OK.
On a Windows Server 2003-based computer that is part of a Windows 2000 environment or that was upgraded from Windows 2000 Server, you may want to set this value to 45 minutes. This is value may depend on your network configuration and should be adjusted accordingly.
Note
You must restart the computer to activate any changes that are made to RPC Replication Timeout (mins)
Update the network adapter drivers
To determine whether an updated network adapter driver is available, contact the network adapter manufacturer or the original equipment manager (OEM) for the computer. The driver must meet Network Driver Interface Specification (NDIS) 5.2 or a later version of this specification.
a. To manually disable RSS and TCP Offload yourself, follow these steps:
· Click Start, click Run, type regedit, and then click OK.
· Locate the following registry subkey: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
· Right-click EnableTCPChimney, and then click Modify.
· In the Value data box, type 0, and then click OK.
· Right-click EnableRSS, and then click Modify.
· In the Value data box, type 0, and then click OK.
· Right-click EnableTCPA, and then click Modify.
· In the Value data box, type 0, and then click OK.
· Exit Registry Editor,
Note
You must restart the computer to activate any changes that are made to EnableTCPChimney.
Enable PMTU Black Hole Detection on the Windows-based hosts that will be communicating over a WAN connection. Follow these steps:
o Start Registry Editor (Regedit.exe).
o Locate the following key in the registry: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\tcpip\parameters
o On the Edit menu, click Add Value, and then add the following registry value:
Value Name: EnablePMTUBHDetect Data Type: REG_DWORD Value: 1 Restart the machine
Check the network binding order:
To configure the network binding order:
a. Quit any programs that are running.
b. Right-click Network Neighborhood, and then click Properties.
c. Click the Bindings tab. In the Show Bindings For box, click All Services.
d. Double-click each listed service to expand it.
e. Under each service, double-click each protocol to expand it.
f. Under each protocol, there's a number of network adapter icons. Click the icon for your network adapter, and then click Move Up until the network adapter is at the top of the list. Leave the "Remote Access WAN Wrapper" entries in any order under the network adapter(s).
Note
If you have more than one network adapter, place the internal adapter (with Internet Protocol [IP] address 10.0.0.2 by default on a Small Business Server network) at the top of the binding order, with the external adapter(s) directly below the internal adapter.
The final order should appear similar to: [1] Network adapter one [2] Network adapter two (if present) [3] Remote Access WAN Wrapper . . . [n] Remote Access WAN Wrapper
g. Repeat step 6 for each service in the dialog box.
h. After you've verified the settings for each service, click All Protocols in the Show Bindings For box. The entry for "Remote Access WAN Wrapper" doesn't have a network adapter listed. Skip this item. Repeat steps 4 through 6 for each remaining protocol.
i. After you've verified that the bindings are set correctly for all services and protocols, click OK. This initializes the rebinding of the services. When this is complete, you're prompted to restart the computer. Click Yes.
More information
Active Directory changes do not replicate in Windows Server 2003 | https://docs.microsoft.com/en-us/troubleshoot/windows-server/identity/replication-error-1818 | 2020-09-18T23:35:12 | CC-MAIN-2020-40 | 1600400189264.5 | [] | docs.microsoft.com |
Secret Store Plugin Development
This guide describes how to develop a custom secret store plugin for use by Barbican.
Barbican supports two storage modes for secrets: a secret store mode (detailed on this page), and a cryptographic mode. The secret store mode offloads both encryption/decryption and encrypted secret storage to the plugin implementation. Barbican includes plugin interfaces to a Red Hat Dogtag service and to a Key Management Interoperability Protocol (KMIP) compliant security appliance.
Since the secret store mode defers the storage of encrypted secrets to plugins, Barbican core does not need to store encrypted secrets into its data store, unlike the cryptographic mode. To accommodate the discrepancy between the two secret storage modes, a secret store to cryptographic plugin adapter has been included in Barbican core, as detailed in The Cryptographic Plugin Adapter section below.
secret_store Module
The barbican.plugin.interface.secret_store module contains the classes needed to implement a custom plugin. These classes include the SecretStoreBase abstract base class which custom plugins should inherit from, as well as several Data Transfer Object (DTO) classes used to transfer data between Barbican and the plugin.
Data Transfer Objects
The DTO classes are used to wrap data that is passed from Barbican to the plugin as well as data that is returned from the plugin back to Barbican. They provide a level of isolation between the plugins and Barbican’s internal data models.
- class barbican.plugin.interface.secret_store.SecretDTO(type, secret, key_spec, content_type, transport_key=None)
This object is a secret data transfer object (DTO).
This object encapsulates a key and attributes about the key. The attributes include a KeySpec that contains the algorithm and bit length. The attributes also include information on the encoding of the key.
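
For illustration, here is a minimal sketch of how a plugin might build a SecretDTO, for example when returning a symmetric key from its get_secret implementation. The KeySpec helper comes from the same module, and the constant names used below (SecretType.SYMMETRIC, KeyAlgorithm.AES) as well as the KeySpec keyword arguments are assumptions for the sketch rather than part of this reference:

```python
from barbican.plugin.interface import secret_store as ss

# Hypothetical: the encoded secret bytes retrieved from the external store.
encoded_key_material = b'...'

# Describe the algorithm and bit length of the key being wrapped.
key_spec = ss.KeySpec(alg=ss.KeyAlgorithm.AES, bit_length=256)

secret_dto = ss.SecretDTO(
    type=ss.SecretType.SYMMETRIC,            # assumed constant name
    secret=encoded_key_material,
    key_spec=key_spec,
    content_type='application/octet-stream',
)
```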
Secret Parameter Objects
The secret parameter classes encapsulate information about secrets to be stored within Barbican and/or its plugins.
- class barbican.plugin.interface.secret_store.SecretType
Constant to define the symmetric key type.
Used by getSecret to retrieve a symmetric key.
- class barbican.plugin.interface.secret_store.KeyAlgorithm
Constant for the Diffie Hellman algorithm.
Plugin Base Class
Barbican secret store plugins should implement the abstract base class SecretStoreBase. Concrete implementations of this class should be exposed to Barbican using stevedore mechanisms explained in the configuration portion of this guide.
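For orientation, a minimal concrete plugin might look like the sketch below. It keeps secrets in a process-local dictionary purely for illustration; a real plugin would hand them to an external keystore. Everything except the method names documented below (the class name, module layout, and storage approach) is an assumption.

import uuid

from barbican.plugin.interface import secret_store


class InMemorySecretStore(secret_store.SecretStoreBase):
    """Illustrative only: keeps secrets in memory instead of a real keystore."""

    _secrets = {}

    def get_plugin_name(self):
        return "In-memory secret store (example)"

    def store_secret_supports(self, key_spec):
        # A real plugin would inspect key_spec; this sketch accepts anything.
        return True

    def store_secret(self, secret_dto):
        secret_id = str(uuid.uuid4())
        self._secrets[secret_id] = secret_dto
        # Barbican persists this dictionary and hands it back on retrieval.
        return {"secret_id": secret_id}

    def get_secret(self, secret_type, secret_metadata):
        return self._secrets[secret_metadata["secret_id"]]

    def delete_secret(self, secret_metadata):
        self._secrets.pop(secret_metadata["secret_id"], None)

    def generate_supports(self, key_spec):
        return False

    def generate_symmetric_key(self, key_spec):
        raise NotImplementedError("key generation is not supported by this sketch")

    def generate_asymmetric_key(self, key_spec):
        raise NotImplementedError("key generation is not supported by this sketch")

The individual methods, and when Barbican calls them, are documented below.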
- class barbican.plugin.interface.secret_store.SecretStoreBase
- abstract delete_secret(secret_metadata)
Deletes a secret from the secret store.
Deletes a secret from a secret store. It can no longer be referenced after this call.
- Parameters
secret_metadata – secret_metadata
- abstract generate_asymmetric_key(key_spec)
Generate a new asymmetric key pair and store it.
Generates a new asymmetric key pair and stores it in the secret store. An object of type AsymmetricKeyMetadataDTO will be returned containing attributes of metadata for newly created key pairs. The metadata is stored by Barbican and passed into other methods to aid the plugins. This can be useful for plugins that generate a unique ID in the external data store and use it to retrieve the key pairs in the future.
- Parameters
key_spec – KeySpec that contains details on the type of key to generate
- Returns
An object of type AsymmetricKeyMetadataDTO containing metadata about the key pair.
- abstract generate_supports(key_spec)
Returns a boolean indicating if the secret type is supported.
This checks if the algorithm and bit length are supported by the generate methods. This is useful to call before calling generate_symmetric_key or generate_asymetric_key to see if the key type is supported before trying to generate it.
- Parameters
key_spec – KeySpec that contains details on the algorithm and bit length
- Returns
boolean indicating if the algorithm is supported
- abstract generate_symmetric_key(key_spec)
Generate a new symmetric key and store it.
Generates a new symmetric key and stores it in the secret store. A dictionary is returned that contains metadata about the newly created symmetric key. The dictionary of metadata is stored by Barbican and passed into other methods to aid the plugins. This can be useful for plugins that generate a unique ID in the external data store and use it to retrieve the key in the future. The returned dictionary may be empty if the SecretStore does not require it.
- Parameters
key_spec – KeySpec that contains details on the type of key to generate
- Returns
an optional dictionary containing metadata about the key
- abstract get_plugin_name()
Gets user friendly plugin name.
This plugin name is expected to be read from config file. There will be a default defined for plugin name which can be customized in specific deployment if needed.
This name needs to be unique across a deployment.
- abstract get_secret(secret_type, secret_metadata)
Retrieves a secret from the secret store.
Retrieves a secret from the secret store and returns a SecretDTO that contains the secret.
The secret_metadata parameter is the metadata returned from one of the generate or store methods. This data is used by the plugins to retrieve the key.
The secret_type parameter may be useful for secret stores to know the expected format of the secret. For instance if the type is SecretDTO.PRIVATE then a PKCS8 structure is returned. This way secret stores do not need to manage the secret type on their own.
- Parameters
secret_type – secret type
secret_metadata – secret metadata
- Returns
SecretDTO that contains secret
get_transport_key()
Gets a transport key.
Returns the current valid transport key associated with this plugin. The transport key is expected to be a base64 encoded x509 certificate containing a public key. Admins are responsible for deleting old keys from the database using the DELETE method on the TransportKey resource.
By default, returns None. Plugins that support transport key wrapping should override this method.
is_transport_key_current(transport_key)
Determines if the provided transport key is the current valid key
Returns true if the transport key is the current valid transport key. If the key is not valid, then barbican core will request a new transport key from the plugin.
Returns False by default. Plugins that support transport key wrapping should override this method.
- abstract store_secret(secret_dto)
Stores a key.
The SecretDTO contains the bytes of the secret and properties of the secret. The SecretStore retrieves the secret bytes, stores them, and returns a dictionary of metadata about the secret. This can be useful for plugins that generate a unique ID in the external data store and use it to retrieve the secret in the future. The returned dictionary may be empty if the SecretStore does not require it.
- Parameters
secret_dto – SecretDTO for secret
- Returns
an optional dictionary containing metadata about the secret
- abstract store_secret_supports(key_spec)
Returns a boolean indicating if the secret can be stored.
Checks if the secret store can store the secret, give the attributes of the secret in the KeySpec. For example, some plugins may need to know the attributes in order to store the secret, but other plugins may be able to store the secret as a blob if no attributes are given.
- Parameters
key_spec – KeySpec for the secret
- Returns
a boolean indicating if the secret can be stored
Barbican Core Plugin Sequence
The sequence in which Barbican invokes methods on SecretStoreBase depends on the requested action, as detailed next. Note that these actions are invoked via the barbican.plugin.resources module, which in turn is invoked via Barbican's API and Worker processes.
For secret storage actions, Barbican core calls the following methods:
get_transport_key() - If a transport key is requested to upload secrets for storage, this method asks the plugin to provide the transport key.
store_secret_supports() - Asks the plugin if it can support storing a secret based on the KeySpec parameter information as described above.
store_secret() - Asks the plugin to perform encryption of an unencrypted secret payload as provided in the SecretDTO above, and then to store that secret. The plugin then returns a dictionary of information about that secret (typically a unique reference to that stored secret that only makes sense to the plugin). Barbican core will then persist this dictionary as a JSON attribute within its data store, and also hand it back to the plugin for secret retrievals later. The name of the plugin used to perform this storage is also persisted by Barbican core, to ensure we retrieve this secret only with this plugin.
For secret retrievals, Barbican core will select the same plugin as was used to store the secret, and then invoke its get_secret() method to return the unencrypted secret.
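Continuing the in-memory sketch above, the storage-then-retrieval flow driven by Barbican core looks roughly like this; the persistence of the metadata dictionary into Barbican's database is elided, and the objects reuse the earlier illustrative examples.

plugin = InMemorySecretStore()

if plugin.store_secret_supports(key_spec):
    secret_metadata = plugin.store_secret(secret_dto)
    # Barbican core persists secret_metadata (and the plugin name) here.

# Later, for a retrieval, the same plugin is selected and handed the
# previously persisted metadata.
restored_dto = plugin.get_secret(secret_store.SecretType.SYMMETRIC, secret_metadata)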
For symmetric key generation, Barbican core calls the following methods:
generate_supports() - Asks the plugin if it can support generating a symmetric key based on the KeySpec parameter information as described above.
generate_symmetric_key() - Asks the plugin to both generate and store a symmetric key based on the KeySpec parameter information. The plugin can then return a dictionary of information for the stored secret similar to the storage process above, which Barbican core will persist for later retrieval of this generated secret.
For asymmetric key generation, Barbican core calls the following methods:
generate_supports() - Asks the plugin if it can support generating an asymmetric key based on the KeySpec parameter information as described above.
generate_asymmetric_key() - Asks the plugin to both generate and store an asymmetric key based on the KeySpec parameter information. The plugin can then return an AsymmetricKeyMetadataDTO object as described above, which contains secret metadata for each of the three secrets generated and stored by this plugin: private key, public key and an optional passphrase. Barbican core will then persist information for these secrets, and also create a container to group them.
The Cryptographic Plugin Adapter
Barbican core includes a specialized secret store plugin used to adapt to cryptographic plugins, called StoreCryptoAdapterPlugin. This plugin functions as a secret store plugin, but it directs secret-related operations to cryptographic plugins for encryption, decryption, and generation operations. Because cryptographic plugins do not store encrypted secrets, this adapter plugin provides this storage capability via Barbican's data store.
This adapter plugin also uses stevedore to access and utilize cryptographic plugins that can support secret operations.
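In practice, exposing a plugin through stevedore usually means declaring an entry point in the plugin package's packaging metadata. The namespace string below is the one Barbican has historically used for secret store plugins; confirm it against the setup.cfg of the Barbican release you target, since it is an assumption here, as are the package and module names.

# setup.py for a hypothetical plugin package.
from setuptools import find_packages, setup

setup(
    name="barbican-inmemory-store",
    version="0.1.0",
    packages=find_packages(),
    entry_points={
        "barbican.secretstore.plugin": [
            "in_memory = my_plugin.store:InMemorySecretStore",
        ],
    },
)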
TAN(X) computes the tangent of X.
Fortran 77 and later, for a complex argument Fortran 2008 or later
Elemental function
RESULT = TAN(X)
The return value has the same type and kind as X. The argument X is interpreted as an angle in radians.
program test_tan
  real(8) :: x = 0.165_8
  x = tan(x)
end program test_tan
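A complex argument (Fortran 2008 behavior, as noted under the standard above) works the same way; this variant is a sketch, not part of the original manual:

program test_tan_complex
  complex(8) :: z = (1.0_8, 0.5_8)
  z = tan(z)
end program test_tan_complex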
Inverse function: ATAN
Degrees function: TAND
© Free Software Foundation
Licensed under the GNU Free Documentation License, Version 1.3. | https://docs.w3cub.com/gnu_fortran~7/tan/ | 2020-09-18T23:52:12 | CC-MAIN-2020-40 | 1600400189264.5 | [] | docs.w3cub.com |
Multiple Accounts
TrackJS users can start new accounts and belong to multiple accounts.
Creating Accounts
To create a new account owned by your current user, go to the Switch Accounts page from the upper-right user menu. The last option in the list is to create a new account.
New accounts created this way have no trial period. You must choose a subscription for the account immediately upon creating the account.
Joining Accounts
To join an existing account, an Owner of the account must send you an invitation from their Team Management page. You will receive an email with a link to join the account. Once joined, the new account will be visible in your Switch Accounts listing. | https://docs.trackjs.com/user-accounts/multiple/ | 2020-09-18T23:01:24 | CC-MAIN-2020-40 | 1600400189264.5 | [] | docs.trackjs.com |
Module: Commands
Overview
An extension can register a Commands Module by defining a getCommandsModule method. The Commands Module allows us to register one or more commands scoped to specific contexts. Commands have several unique characteristics that make them tremendously powerful:
- Multiple implementations for the same command can be defined
- Only the correct command's implementation will be run, dependent on the application's "context"
- Commands can be called from extensions, modules, and the consuming application
Here is a simple example commands module:
export default {
  id: 'example-commands-module',

  /**
   * @param {object} params
   * @param {ServicesManager} params.servicesManager
   * @param {CommandsManager} params.commandsManager
   */
  getCommandsModule({ servicesManager, commandsManager }) {
    return {
      definitions: {
        sayHello: {
          commandFn: ({ words }) => {
            console.log(words);
          },
          options: { words: 'Hello!' },
        },
      },
      defaultContext: 'VIEWER',
    };
  },
};
Each definition returned by the Commands Module is registered to the ExtensionManager's CommandsManager.
Command Definitions
The command definition consists of a named command (myCommandName below) and a commandFn. The command name is used to call the command, and the commandFn is the "command" that is actioned.
myCommandName: {
  commandFn: ({ viewports, other, options }) => { },
  storeContexts: ['viewports'],
  options: { words: 'Just kidding! Goodbye!' },
  context: 'ACTIVE_VIEWPORT::CORNERSTONE',
}
Command Behavior
I have many similar commands. How can I share their commandFn and make it reusable?
This is where storeContexts and options come in. We use these in our setToolActive command. storeContexts helps us identify our activeViewport, and options allow us to pass in the name of a tool we would like to set as active.
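A sketch of what such a shared definition could look like, as shown below: the property names mirror the definition shown earlier, while the command body and the shape of the viewports state are illustrative assumptions rather than OHIF's actual implementation.

setToolActive: {
  // One shared commandFn: the tool to activate arrives via `options`,
  // and the active viewport comes from the injected `viewports` state.
  commandFn: ({ viewports, toolName }) => {
    const { activeViewportIndex } = viewports;
    console.log(`Activating ${toolName} on viewport ${activeViewportIndex}`);
  },
  storeContexts: ['viewports'],
  options: { toolName: 'Zoom' },
  context: 'ACTIVE_VIEWPORT::CORNERSTONE',
},

Reusing the same commandFn with different options (for example { toolName: 'Pan' }) is then just another entry in definitions.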
If there are multiple valid commands for the application's active contexts
- What happens: all commands are run
- When to use: A clearData command that cleans up state for multiple extensions
If no commands are valid for the application's active contexts
- What happens: a warning is printed to the console
- When to use: a hotkey (like "invert") that doesn't make sense for the current viewport (PDF or HTML)
CommandsManager
The CommandsManager is a class defined in the @ohif/core project. A single instance of it should be defined in the consuming application, and it should be used when constructing the ExtensionManager.
Instantiating
When we instantiate the CommandsManager, we need to pass it two methods:
- getAppState - Should return the application's state when called
- getActiveContexts - Should return the application's active contexts when called
These methods are used internally to help determine which commands are currently valid, and how to provide them with any state they may need at the time they are called.
const commandsManager = new CommandsManager({
  getAppState,
  getActiveContexts,
});
Public API
If you would like to run a command in the consuming app or an extension, you can use one of the following methods:
// Returns all commands for a given context
commandsManager.getContext('string');

// Attempts to run a command
commandsManager.runCommand('speak', { command: 'hello' });

// Run command, but override the active contexts
commandsManager.runCommand('speak', { command: 'hello' }, ['VIEWER']);
The ExtensionManager handles registering commands and creating contexts, so most consumers won't need these methods. If you find yourself using these, ask yourself "why can't I register these commands via an extension?"
// Used by the `ExtensionManager` to register new commands
commandsManager.registerCommand('context', 'name', commandDefinition);

// Creates a new context; clears the context if it already exists
commandsManager.createContext('string');
Contexts
It is up to the consuming application to define what contexts are possible, and which ones are currently active. As extensions depend heavily on these, we will likely publish guidance around creating contexts, and ways to override extension-defined contexts, in the near future. If you would like to discuss potential changes to how contexts work, please don't hesitate to create a new GitHub issue.
Some additional information on Contexts can be found here. | https://docs.ohif.org/extensions/modules/commands.html | 2020-09-18T23:58:31 | CC-MAIN-2020-40 | 1600400189264.5 | [] | docs.ohif.org |
Enable Multi-Factor Authentication
By default, when a user signs in to the user portal, they sign in with their email address and password (the first factor). This is the default authentication mechanism used in AWS SSO. But when multi-factor authentication (MFA) is enabled, users enter an MFA code (the second factor) that is generated by an application on their phone. Users must use this MFA code to be authenticated to the user portal. These factors together provide additional security by preventing access to your AWS organization unless users supply valid user credentials and a valid MFA code.
Topics
Considerations Before Using MFA in AWS SSO
Before you enable MFA, consider the following information:
All users must have access to a physical device that can have applications installed on it, like a smartphone or tablet. Such a device is required before users can sign in using MFA. Therefore, you will need either to provide a device to each user or send them instructions on how they can register their own personal devices. For more information, see Authenticator Applications on User Devices.
Do not use the option Require Them to Provide a One-Time Password Sent by Email if your users must sign in to the user portal to access their email. For example, your users might use Office 365 on the user portal to read their email. In this case, users would not be able to retrieve the verification code and would be unable to sign in to the user portal. For more information, see Require Them to Provide a One-Time Password Sent by Email.
If you are already using RADIUS MFA that you configured with AWS Directory Service, then you do not need to enable MFA within AWS SSO. MFA is an alternative to RADIUS MFA for Microsoft Active Directory users of AWS SSO. For more information, see RADIUS MFA.
MFA in AWS SSO is not supported for use by external identity providers.
Authentication Methods
Authentication methods help you determine the level of security that you want to enforce across all your users during sign-in. MFA has the following methods available:
Context-aware
Always-on
Disabled
You can configure AWS SSO to use a connected directory and choose either the Context-aware or Always-on option. In these cases, your users must sign in to the user portal using the down-level logon name format (DOMAIN\UserName). This restriction does not apply when you are using an AWS SSO store. With an AWS SSO store, users can sign in using either their down-level logon name format or their UPN logon name format (for example, [email protected]).
Context-Aware
Context-aware is the default setting when you first configure AWS SSO. In this mode, AWS SSO analyzes the sign-in context (browser, location, and devices) for each user. AWS SSO then determines whether the user is signing in with a previously trusted context. If a user is signing in from an unknown IP address or is using an unknown device, SSO prompts the user for multi-factor authentication. The user is prompted for an MFA code in addition to their email address and password credentials.
This mode provides additional protection for users who frequently sign in from their offices. This mode is also easier for those users because they do not need to complete MFA on every sign-in. SSO prompts users with MFA once and permits them to trust their device. Once a user indicates that they want to trust a device, AWS SSO considers future sign-ins to be “trusted.” AWS SSO does not challenge the user for an MFA code when they use that trusted device. Users are only required to provide additional verification when their sign-in context changes. Such changes include signing in from a new device, a new browser, or an unknown IP address.
Changing from Disabled mode to Context-aware mode overrides existing RADIUS MFA settings that are configured in AWS Directory Service for sign-in to AWS SSO for this directory. For more information, see RADIUS MFA.
Always-On
In this mode, AWS SSO requires that users who have registered an MFA device provide an MFA code on every sign-in. You should use this mode if you have organizational or compliance policies that require your users to complete MFA every time they sign in to the user portal. For example, PCI DSS strongly recommends MFA during every sign-in to access applications that support high-risk payment transactions.
Disabled
While in this mode, no MFA authentication method is enabled. Users continue to sign in using their user name, password and/or RADIUS MFA as normal.
MFA Device Enforcement
The following options can be used to determine whether your users must have a registered MFA device when signing in to the user portal. These options also determine the method by which your users will receive their MFA code.
Allow Them to Sign In
Allow them to sign in is the default setting when you first configure AWS SSO MFA. Use this option to indicate that MFA devices are not required in order for your users to sign in to the user portal. Users who chose to register MFA devices will still be prompted for MFA codes.
Block Their Sign-In
Use the Block Their Sign-In option when you want to enforce MFA use by every user before they can sign in to AWS.
If your authentication method is set to Context-aware a user might select the This is a trusted device check box on the sign-in page. In that case, that user will not be prompted for an MFA code even if you have the Block their sign in setting enabled. If you want these users to be prompted, change your authentication method to Always On.
Require Them to Provide a One-Time Password Sent by Email
Use this option when you want to have verification codes sent to users by email. Because email is not bound to a specific device, this option does not meet the bar for industry-standard multi-factor authentication. But it does improve security over having a password alone. Email verification will only be requested if a user has not registered an MFA device. If the Context-aware authentication method has been enabled, the user will have the opportunity to mark the device on which they receive the email as trusted. Afterward they will not be required to verify an email code on future logins from that device, browser, and IP address combination.
If you are using Active Directory as your SSO-enabled identity source, the email address used will always be based on the email attribute configured in Active Directory.
RADIUS MFA
You can use either Remote Authentication Dial-In User Service (RADIUS) MFA or MFA in AWS SSO for user sign-ins to the user portal, but not both. MFA in AWS SSO is an alternative to RADIUS MFA in cases where you want AWS native two-factor authentication for access to the portal.
When you enable MFA in AWS SSO, your users need an MFA code to sign in to the AWS SSO user portal. If you had previously used RADIUS MFA, enabling MFA in AWS SSO effectively overrides RADIUS MFA for users who sign in to the user portal. However, RADIUS MFA continues to challenge users when they sign in to all other applications that work with AWS Directory Service, such as Amazon WorkDocs.
If your MFA is Disabled on the AWS SSO console and you have configured RADIUS MFA with AWS Directory Service, RADIUS MFA governs user portal sign-in. This means that AWS SSO falls back to RADIUS MFA configuration if MFA is disabled.
Authenticator Applications on User Devices
Your users can use their internet accessible devices, such as a smartphone or tablet, as an MFA device. To do this, users must install an AWS supported mobile app that generates a six-digit authentication code.
Because these apps can run on unsecured mobile devices, MFA might not provide the same level of security as U2F devices or hardware MFA devices. You can enable only two MFA devices per user.
For a list of MFA apps that you can use on smartphones or tablets, see Multi-Factor Authentication | https://docs.aws.amazon.com/ko_kr/singlesignon/latest/userguide/enable-mfa.html | 2020-09-19T00:36:21 | CC-MAIN-2020-40 | 1600400189264.5 | [] | docs.aws.amazon.com |
Everest Forms Documentation
With the installation of the Everest Forms (Free Version), you can find 11 form fields in the form builder. These form fields are more than enough to create any contact form for your website.
To get you familiar with the form fields, we have made a list of the fields with their descriptions.
The First Name and Last Name fields allow you to retrieve the first and last names of your users respectively.
This form field allows you to retrieve any one-line text information from the user as per your requirements. This field comes in very handy when you need to add a custom field to your form which isn’t already on the form builder.
After you insert this field to your form, go to the field options. Here, you can change the label, description, and other advanced settings from the Field Options.
In the Advanced Options, you can see settings such as Placeholder Text, Hide Label, Limit Length, Default Value, CSS Class, and Input Mask.
This form field allows you to retrieve any type of information (usually a paragraph) from the user. You can use this field to get descriptive and lengthy responses from your users.
Like any other form field, you can click to edit the form label, description, and more form options. To change the Field Options, click on the Paragraph field. You can then see the various options such as Label, Meta Key, Description, and Required. There are also the Advanced Options (the same as for the Single Line Text field) where you can set the Placeholder Text, Hide Label, Limit Length, Default Value, CSS Class, and Input Mask.
This field allows you to add multiple options for the user to choose from. The options are presented in a drop-down menu, and users can select only one option.
After you insert the field, go to the Field Options. Here, you can change the label, description, and add as many options as you require to your dropdown form field.
This field allows you to add multiple choices that are displayed by radio buttons.
In the Field Options, you can add as many choices as you like but, the user can only choose one.
Now, Everest Forms 1.6.0 allows you to use Images in your choices. This means you can easily upload images to your Multiple Choice field as options to display in your forms.
For this, go to the Field Options and you can see the Use Image Choices options there.
After you check that option, you can easily upload the images for each of your choices. Just click on the Upload Image button to select the required image from your computer or your media library.
Also, you can even change or remove the selected images easily in the field options. To remove the picture, click on the Remove button and click on the Change Image button to select a new image that is required.
This field allows you to add multiple choices that are displayed by checkboxes.
In the Field Options, you can add as many choices as you require, and the user can check(select) more than one option.
Just like the Multiple Choice field, you can also use Images for the options in the Checkbox field. The process is the same as the Multiple Choice field. Go to Field Options and check the Use Image Choices options. Then, you can easily upload the required images as your choices for the Checkbox field.
Now you can add multiple options for these fields at once. You can add the options in Bulk if you have a lot of them. The Bulk Choice Add feature will work for Dropdown Field, Multiple Choice Field, and the Checkboxes fields. Since the process is similar for all three fields, I will show you an example using the feature in the Checkboxes Field.
First, drag the field into the form and then click on it to see its Field Options. In the Field Options, you will see the Bulk Add link.
Clicking on it reveals the Bulk Add input section. Now, you have three ways to add the options:
1. Copy and paste the options from any file where you have listed them. Make sure that each option is on a separate line; separating them with commas won't work.
2. Add the choices from the provided Presets. Click on Presets to see a list of categories with predefined options, and add them directly from there.
3. Type the options manually, one per line: type one option, hit Enter, then type the next, and so on.
Finally, click on the Add New Choices button, and the choices will be added and shown in the field options. That's all about adding bulk choices.
This field allows you to receive any numerical information in your forms. You can use this field for accepting users’ contact numbers or any other numerical values.
From the Field Options, you can change the field label according to your requirements. Furthermore, you can change additional settings for this field in the Advanced Options.
This field allows users to enter their email addresses, and it checks whether the user has entered a valid email address on the frontend.
In the Field Options, you can see the Label, Description, Required, and Enable Email Confirmation options, which you can change as per your requirements.
This field allows users to enter their company website URL or personal URL.
This field allows users to enter the requested date and time on your form.
In the Field Options, you can change the Label, Format, and Description. Also, there is the Advanced Options where you can change more options for the Date / Time field.
Now, you can choose the dates that you want to disable in the form. Users will not be able to select the disabled dates on the frontend.
To disable dates, click on the field and a calendar will appear where you can see the month, year and dates for the month. Here, simply select the dates that you want to disable.
You can also switch to a different month and year and disable dates there as well.
So, you can see that the disabled dates are not displayed in the frontend of the form and cannot be selected by the users.
There are more form fields that Everest Forms offers. But these form fields are inaccessible in the free version.
Therefore, you need to purchase a pro plan of Everest Forms to unlock the pro form fields.
I am not sure which license (Product ID) I am using. How can I retrieve it?
For version 3.4 or later, you can confirm your license information at "About Brekeke XXX" (such as Brekeke SIP Server or Brekeke PBX) from the settings menu in the upper-left corner.
For versions v3.3 or earlier, the information is located at the bottom-left corner of the [Restart/Shutdown] page under the product ID display. Please note that it shows only the last 10 digits of your product ID.
After getting the last 10 digits of the product ID, we suggest searching your email archive for this partial license information to find the full product ID. If you still can't find the product ID, try "I lost my license" or contact us via email at license [at] brekeke.com.
See also:
I lost my license (Product ID) | https://docs.brekeke.com/lic/i-am-not-sure-which-license-product-id-i-am-using-how-can-i-retrieve-it | 2020-09-19T00:29:40 | CC-MAIN-2020-40 | 1600400189264.5 | [] | docs.brekeke.com |
# cat /etc/ansible/ansible.cfg
A node host will access the network to install any RPM dependencies, such as atomic-openshift-*, iptables, and CRI-O or Docker. Pre-installing these dependencies creates a more efficient install, because the RPMs are only accessed when necessary, instead of a number of times per host during the install.
This is also useful for machines that cannot access the registry for security purposes.
The OKD install method uses Ansible. Ansible is useful for running parallel operations, meaning a fast and efficient installation. However, these can be improved upon with additional tuning options. See the Configuring Ansible section for a list of available Ansible configuration options.
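This tuning usually lives in /etc/ansible/ansible.cfg. The options below are standard Ansible settings that are commonly raised for large installs; they are illustrative defaults, not the specific values recommended by the Configuring Ansible section, so consult that section for the supported list.

[defaults]
# Run tasks against more hosts in parallel.
forks = 20
# Avoid interactive SSH host-key prompts on first contact.
host_key_checking = False

[ssh_connection]
# Reduce the number of SSH operations required per task.
pipelining = True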
Network subnets can be changed post-install, but with difficulty. It is much easier to consider the network subnet size prior to installation, because underestimating the size can create problems with growing clusters.
See the Network Optimization topic for recommended network subnetting practices. | https://docs.okd.io/3.11/scaling_performance/install_practices.html | 2020-09-19T00:18:10 | CC-MAIN-2020-40 | 1600400189264.5 | [] | docs.okd.io |
Deleting Errors
You can delete errors from an individual error occurrence, or from a rollup report for any of the error properties. For example, if you "Delete All" from a URL report, you will delete all errors from that URL, regardless of message, browser, user, or other properties.
Errors are deleted asynchronously, and it might take up to 60 seconds for all of the errors matching your request to be removed. Deleting errors does not prevent new errors with the same property from being captured.
The delete action removes the errors from our storage system. Deleted errors are not recoverable.
When To Use
You should delete errors that are not longer valuable for you to record. It is particularly useful if you have just deployed fixes to your system and you expect the errors to be resolved.
Alternatively, you can also track error resolution by providing a version for your application and filtering on it in the UI.
You may also want to delete errors that are caused by a noisy individual user, such as a user browsing your site with a buggy browser extension. | https://docs.trackjs.com/data-management/delete/ | 2020-09-18T22:35:01 | CC-MAIN-2020-40 | 1600400189264.5 | [] | docs.trackjs.com |
Creating
You can create widgets that appear in the Content Edit Page by extending the abstract class ContentEditWidget. That abstract class includes three abstract methods that you must override, all of which only support displaying data. For information about creating content edit widgets that can update data, see Creating Updating Content Edit Widgets.
Step 1: Declare a Custom Content Edit Widget
package widgets;

import com.psddev.cms.tool.ContentEditWidget;

public class CurrentTimesWidget extends ContentEditWidget {

}
Step 2: Display the Widget’s Header
You display the custom widget’s header by implementing the method
getHeading.
public String getHeading(ToolPageContext page, Object content) { /* Return string appearing in widget's title bar. */ }
Step 3: Display the Widget’s Body
You display the custom widget's body by implementing the method display.
public void display(ToolPageContext page, Object content, ContentEditWidgetPlacement placement) throws IOException {
    /* Use page.write methods to display the widget's body. */
}
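As a sketch of a display body: the element name and CSS class here are arbitrary, and writeStart/writeHtml/writeEnd are the ToolPageContext write helpers typically used, so verify them against your Brightspot version.

@Override
public void display(ToolPageContext page, Object content, ContentEditWidgetPlacement placement) throws IOException {
    // Open a container element, write some text, and close the element.
    page.writeStart("div", "class", "exampleWidget");
        page.writeHtml("Hello from a custom content edit widget!");
    page.writeEnd();
}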
Step 4: Display Custom Widget in Content Edit Page
The following snippet shows an entire class for displaying the custom widget Current Time Zones in the content edit page. The custom widget lists several cities and their current time.
In the previous snippet—
- Line 9 declares the class
CurrentTimesWidget. Objects instantiated from this class appear as Current Times Widget in the Dashboard.
- Lines 13–16 instantiate a HashMap of time zones.
- Lines 20–27 write a table header in the widget’s body.
- Lines 29–39 loop through each time-zone record. For each record, call the method
displayTimeto retrieve and then print the time zone’s current time.
- Line 44 positions the widget under the content edit form. For details about positioning a widget on the content edit form, see Position.
- Line 48 displays the widget’s heading.
Based on the previous snippet, the custom widget Current Times appears at the bottom of the content edit page.
By default, Brightspot displays custom widgets on the content edit page for all content types. You can suppress a custom widget for individual content types; for details, see Hiding.
See also: | http://docs.brightspot.com/cms/developers-guide/widgets/creating.html | 2018-04-19T11:48:16 | CC-MAIN-2018-17 | 1524125936914.5 | [array(['../../../_images/custom-widget-content-edit-form.png',
'../../../_images/custom-widget-content-edit-form.png'],
dtype=object) ] | docs.brightspot.com |
Some AppDNA algorithms analyze application DNA against one or more operating system (OS) images.
When relevant, these algorithms interrogate the OS image DNA that has been loaded into the AppDNA database. For example, the Internet Explorer report checks the registry entries in the Windows OS image to see whether relevant ActiveX components are registered.
The analysis shows the effects of changes when applications are migrated between platforms. AppDNA provides a set of default OS images for each relevant OS family. You can also import your own custom OS images.
Best practices
Algorithms that test applications for dependencies on features provided by the OS are referred to as OS image-dependent algorithms. These algorithms check a variety of OS image information, including:
Most of the OS image-dependent algorithms simply check the OS images in the target OS family. When you analyze your applications for a report that contains an OS image-dependent algorithm, AppDNA checks the information in every OS image in the relevant OS family that has been imported into AppDNA.
The algorithm results might differ for each OS image. Therefore when you view the results in one of the report views, the algorithm results and the application's overall RAG status may change depending on which OS image you select.
When you import an OS image into AppDNA, you specify whether it is a legacy or target OS image and its relationships with the other OS images that have been loaded into AppDNA. For example, suppose you are working on a migration from Windows XP to Windows 8 and your organization has standard laptop images for Windows XP and Windows 8. When you import them into AppDNA, you would define:
AppDNA then calculates and stores information about APIs, features, GPOs, and other settings that are in the legacy image but not in the target image. This is referred to as the OS image delta.
The OS image delta algorithms detect applications that rely on features in the OS image delta and are likely to fail on the target platform. When you analyze your applications for a report that contains an OS image delta algorithm, AppDNA checks the OS image delta for every pair of relevant OS images (Windows XP and Windows 8 in the example) that have been configured as legacy and target OS images for each other. Therefore when you view the results in one of the report views, the results may change depending on which legacy and target OS images you select. Typically you would set up your main (base or "gold") OS image for an OS family as the default OS image for that OS family.
Some of the OS image delta algorithms also check the application portfolio for applications that supply the missing features. The algorithm portfolio in this context is all of the applications that have been imported into AppDNA when the analysis is run. For example, suppose Windows XP supplies a particular DLL that Windows 8 does not supply by default. This means that applications that rely on that DLL will not work by default on Windows 8. However, sometimes the DLL might be installed automatically with another application.
Typically the OS image delta algorithms come in pairs:
Because the results for both algorithms in the pair depend on which other applications have been imported, the results may change if you re-analyze your applications after you have imported more applications.
By importing your own images, AppDNA can base its analysis on the images you actually use in your environment rather than the default images. You can optionally import more than one image for each OS family. This is useful when your organization has two (or more) corporate builds of the OS – one for laptops and one for desktops, for example.
After you import one of your own OS images, you specify its relationships with the other images that have been imported. For example, suppose you are working on a migration from Windows XP to Windows 8.1 and your organization has standard laptop and desktop images for both of those OSs. You would import the four images and configure them to define the Windows XP laptop image as the legacy image for the Windows 8.1 laptop image, and the Windows XP desktop image as the legacy image for the Windows 8.1 desktop image. The following diagram represents these relationships.
Then when you analyze your applications for the Windows 8.1 report, AppDNA compares the changes between the Windows XP and Windows 8.1 laptop images and between the Windows XP and Windows 8.1 desktop images. To view the reports, you choose whether you want to view the report for the laptop images or the desktop images.
You also define the default OS image or pair of OS images for each report that performs OS image analysis. You do this in OS Image Configuration Settings.
You can define more than one legacy OS. Specify legacy operating systems in the Configure Modules Wizard. | https://docs.citrix.com/es-es/dna/7-8/analyzing-apps/operating-systems.html | 2018-04-19T11:48:46 | CC-MAIN-2018-17 | 1524125936914.5 | [] | docs.citrix.com |
Applied IValueConverter on the DataMemberBinding
Why isn’t this converter used when performing filtering?
When performing filtering operations over the RadGridView, IValueConverters are used for presentation purposes only. They play no part in the filtering mechanism and filtering would always be performed on the raw data values. You should be careful when using converters in order to avoid duplication of the content in the list of distinct values to filter on.
The GridViewColumn has a property called FilterMemberPath. You can use this property to tell the column to filter on a property different from the one it displays in its cells. In case the Type of the bound property cannot be automatically discovered by the data engine, you can “help” the column by setting the FilterMemberType property.
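For example, the column below displays a converted value but filters on a plain property. PriceDisplay is an assumed view-model property that exposes the already-converted value, and PriceConverter is an assumed resource key, so adapt the names to your own model:

<telerik:GridViewDataColumn Header="Price"
                            DataMemberBinding="{Binding Price, Converter={StaticResource PriceConverter}}"
                            FilterMemberPath="PriceDisplay" />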
You can also check the FilterMemberPath documentation.
Grouping, on the other hand, would respect the converted values and duplicated groups would not be created. | https://docs.telerik.com/devtools/wpf/controls/radgridview/filtering/faq/ivalueconverter-and-filtering | 2018-04-19T12:03:10 | CC-MAIN-2018-17 | 1524125936914.5 | [] | docs.telerik.com |
sp_createstats (Transact-SQL)
Creates single-column statistics for all eligible columns for all user tables and internal tables in the current database. The new statistic has the same name as the column where it is created.
Transact-SQL Syntax Conventions
Syntax
sp_createstats [ [ @indexonly = ] 'indexonly' ] [ , [ @fullscan = ] 'fullscan' ] [ , [ @norecompute = ] 'norecompute' ]
Arguments
- [ @indexonly = ] 'indexonly' - Limits statistics creation to columns that already participate in an index.
- [ @fullscan = ] 'fullscan' - Computes the statistics by scanning all rows in each table rather than a sample.
- [ @norecompute = ] 'norecompute' - Disables subsequent automatic recomputation of the created statistics.
Return Code Values
0 (success) or 1 (failure)
Result Sets
None
Remarks.
Permissions
Requires membership in the db_owner fixed database role.
Examples
The following example creates statistics for all eligible columns for all user tables in the current database.
EXEC sp_createstats;
The following example creates statistics for only the columns that are participating in an index.
EXEC sp_createstats 'indexonly';
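The remaining arguments can be combined in the same way; for example, to also request a full scan and disable automatic recomputation (not an example from the original page, but consistent with the Arguments section above):

EXEC sp_createstats @indexonly = 'indexonly', @fullscan = 'fullscan', @norecompute = 'norecompute';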
See Also
Reference
Database Engine Stored Procedures (Transact-SQL)
CREATE STATISTICS (Transact-SQL)
DBCC SHOW_STATISTICS (Transact-SQL)
DROP STATISTICS (Transact-SQL)
System Stored Procedures (Transact-SQL)
UPDATE STATISTICS (Transact-SQL)
Uninstallation
Important
Back up your code and database before doing this.
Uninstall an extension completely
- Delete the following files and folders:
- app/code/Mageplaza/EXTENSION_NAME
- Run the following command line:
- php bin/magento setup:upgrade
Delete database tables (optional):
Open a MySQL management tool such as phpMyAdmin.
Open your database > find the tables with the prefix mageplaza_EXTENSION_NAME.
Just delete all the database tables related to mageplaza_EXTENSION_NAME.
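For example, in phpMyAdmin's SQL tab (or the mysql client) you could list the matching tables first and then drop them one by one; the table name in the DROP statement is a placeholder, so substitute the actual tables you find:

-- List the extension's tables, then drop each one that is reported.
SHOW TABLES LIKE 'mageplaza\_%';
DROP TABLE IF EXISTS mageplaza_extension_name_example_table;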
Learn how to send Redis data collected by collectd to Wavefront.
Redis is an in-memory data structure store, often used as a database, cache and message broker. Wavefront supports an in-product integration that gets data from Redis using Telegraf. If you want to use collectd instead, follow the instructions on this page.
We recommend the collectd Redis Python plugin; see the collectd Redis plugin documentation. There are two types of Redis nodes that can be monitored with collectd: masters and slaves.
Installation
- To monitor a Redis master node, on your collectd host, copy the example configuration into /etc/collectd/managed_config/.
- To monitor a Redis slave node, on your collectd host, copy the example configuration into /etc/collectd/managed_config/.
- Edit the settings in the file for your Redis servers.
- Restart collectd. | https://docs.wavefront.com/integrations_collectd_redis.html | 2018-04-19T11:41:53 | CC-MAIN-2018-17 | 1524125936914.5 | [] | docs.wavefront.com |
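The referenced example configuration is a collectd Python-plugin block along these general lines. The module path, module name, and option names vary between versions of the plugin, so treat everything below as an assumption and prefer the example file shipped with the integration:

<LoadPlugin python>
  Globals true
</LoadPlugin>

<Plugin python>
  ModulePath "/usr/share/collectd/python"
  Import "redis_info"

  <Module redis_info>
    Host "localhost"
    Port 6379
    Verbose false
  </Module>
</Plugin>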
Event names
AppSignal instrumentation creates events for pieces of code that are instrumented. These events are used to calculate the code execution duration and Garbage Collection of separate libraries and code blocks.
Instrumenting allows AppSignal to detect problem areas in code. It makes it possible to see if an application's database queries are slow or if a single API call is causing the most slow down in a request.
To instrument code accurately every event name follows a very specific naming method. Picking a good name can help a lot with how AppSignal processes and displays the incoming data. Naming events can be difficult, but hopefully this short explanation of how an event name is used will help you with picking a good one.
For more about instrumentation read more in our (Custom) instrumentation guide.
Event groups
Every event created by AppSignal instrumentation has a name. In this name the parent group of an event is also present. This group name allows AppSignal to group together events from database queries and view rendering. Using this grouping we can create overviews that show execution times per group as well as single events.
For example, our Rails integration creates these groups: active_record, action_view, action_controller, etc.
Event naming
An event name is a string consisting of alphanumeric characters, underscores, and periods; spaces and dashes are not accepted. If a name matches the regex ([a-zA-Z0-9_.]+), it is accepted.
Let's start with a simple event name such as sql.active_record. The action of an event name is everything before the last period (.) in the key, and the group is everything after this period; here the action is sql and the group is active_record. The group of an event is the code library it belongs to or the kind of action it is, such as a database query or HTTP request.
This also works with keys that contain multiple periods: only the last period separates the action from the group. We use this naming scheme for the Ruby method instrumentation ourselves.
When a name with just one part is encountered, the event will automatically be grouped under the other group.
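In the Ruby integration, this naming is what you pass to Appsignal.instrument; the event name and block body below are only an illustration.

# "fetch" is the action, "github_api" is the group the event is reported under.
Appsignal.instrument("fetch.github_api") do
  # code to measure, e.g. an HTTP request
end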
Examples
Some examples of keys that are used by AppSignal integrations:
- ActiveRecord: sql.active_record
- Redis: query.redis
- Elasticsearch: search.elasticsearch
- ActionView: render_template.action_view and render_partial.action_view
- Ruby's Net::HTTP: request.net_http
- Sidekiq: perform_job.sidekiq
- Ruby method instrumentation: method_name.ClassName.other and method_name.class_method.NestedClassName.ParentModule.other