DistCp data copy matrix: HDP1/HDP2 to HDP2
To copy data from HDP1 and HDP2 clusters to HDP2 clusters using DistCp, you must configure and make changes to the settings of the source and destination clusters.
The following table provides a summary of configuration, settings and results when using DistCp to copy data from HDP1 and HDP2 clusters to HDP2 clusters.
For the above table:
The term "secure" means that Kerberos security is set up.
HDP 2.x means HDP 2.0 or later.
hsftp is available in both HDP-1.x and HDP-2.x. It adds https support to hftp.
The Azure Blob Storage interface for Hadoop supports two kinds of blobs, block blobs and page blobs.
Block blobs, which are used by default, are suitable for most big-data use cases such as input data for Hive, Pig, analytical map-reduce jobs, and so on.
Page blobs can be up to 1TB in size, larger than the maximum 200GB size for block blobs. Their primary use case is in the context of HBase write-ahead logs. This is because page blobs can be written any number of times, whereas block blobs can only be appended up to 50,000 times, at which point you run out of blocks and your writes fail. This wouldn't work for HBase logs, so page blob support was introduced to overcome this limitation.
In order to have the files that you create be page blobs, you must set the configuration variable fs.azure.page.blob.dir in core-site.xml to a comma-separated list of folder names. For example:
<property>
  <name>fs.azure.page.blob.dir</name>
  <value>/hbase/WALs,/hbase/oldWALs,/data/mypageblobfiles</value>
</property>
To make all files page blobs, you can simply set this to /.
You can set two additional configuration properties related to page blobs. You can also set them in core-site.xml:
The configuration option fs.azure.page.blob.size defines the default initial size for a page blob. The parameter value is an integer specifying the number of bytes. It must be 128MB or greater, but no more than 1TB.
The configuration option fs.azure.page.blob.extension.size defines the page blob extension size. This determines the amount by which to extend a page blob when it becomes full. The parameter value must be 128MB or greater, specified as an integer number of bytes.
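As a sketch only, the two properties can be set together in core-site.xml in the same way as the fs.azure.page.blob.dir example above; the byte values shown are illustrative assumptions, not recommendations from this guide:

<property>
  <name>fs.azure.page.blob.size</name>
  <!-- assumed example: 1 GB initial page blob size, in bytes (must be >= 128MB and <= 1TB) -->
  <value>1073741824</value>
</property>
<property>
  <name>fs.azure.page.blob.extension.size</name>
  <!-- assumed example: extend a full page blob by 256MB, in bytes (must be >= 128MB) -->
  <value>268435456</value>
</property>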
sale.update
- Removed the embedded register_sale entity from payments
This data has always been present at the top level of the sale object and was therefore redundant. We're removing it to reduce the size of the payload and improve the performance of the payload generation.
- Removed the outlet_id attribute from the embedded user
This attribute has been replaced by an array of IDs and is now obsolete.
- Removed market_id from the sale payload
This was always an internal attribute without any meaning to external applications.
product.update
- Removed tax_id and tax from price_book_entries
The presence of those attributes was misleading and could result in incorrect assumptions. In situations where different outlets had different default taxes assigned to them, using the tax value and tax_id from the product webhook could result in assuming the wrong value for some outlets. Instead of using those attributes, the id for the tax for a specific outlet should be taken from the taxes array and the value should be calculated based on the rate specific to that tax.
inventory.update
- The attributed_cost on the embedded product object will always be null
Attributed cost is an outlet-specific attribute and should, therefore, be taken from the top-level inventory record instead of the embedded product object.
EDD REST API - Introduction
Since v1.5, Easy Digital Downloads includes a complete RESTful API that allows store data to be retrieved remotely in either JSON or XML format. The API includes methods for retrieving info about store products, store customers, store sales, and store earnings.
The API is accessed via the edd-api endpoint of your store, like so:
NOTE: If you are getting a 404 error when visiting that link above, you may need to re-save your permalinks. Do this by going to Dashboard → Settings → Permalinks → Save.
In order to access the API, you will need to provide a valid public API key and also a valid token. An API key and token can be generated for any user by going to Downloads → Tools → API Keys:
The secret key is used for internal authentication and should never be used directly to access the API.
Individual users may go to their own profile and find their own key:
Once you have an API key, you can begin utilizing the EDD API. Both the API key and the token need to be appended to the URL as query parameters, like so:<API key here>&token=<token here>
Paging Parameters
By default, the EDD API will return 10 results per page for the customers, sales, and products queries.
If a query has 20 results, the first ten will be displayed by default, but then the second 10 can be accessed by adding &page=2 to the query string, like so:
You can change the number of results returned by using the number parameter. This example will return 25 results per page:
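The URL examples from the original page are not reproduced here, so the following is only a hedged sketch of how the pieces fit together. It assumes a products endpoint at /edd-api/products and the key and token query parameters described above; the domain, key, and token are placeholders:

import requests

# Placeholders -- substitute your store's domain and the key/token generated
# under Downloads > Tools > API Keys.
BASE_URL = "https://example.com/edd-api/products"  # assumed endpoint name

params = {
    "key": "<API key here>",
    "token": "<token here>",
    "number": 25,   # results per page (use -1 to return all results, no paging)
    "page": 2,      # which page of results to fetch
}

response = requests.get(BASE_URL, params=params)
response.raise_for_status()
print(response.json())  # the API can return JSON or XML; JSON is assumed here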
If you want to retrieve all results (no paging), set number to -1.
This option is only available if you have not selected the 3.11 Table Compile Options check box.
Select this option to force the generation and compilation of the OAM. Only one OAM of the same name can be in the system.
Compiling a Visual LANSA table:
Compiling a non-Visual LANSA table:
Also See
3.11.3 Rebuild indexes and views
cupy.ElementwiseKernel¶
class cupy.ElementwiseKernel(in_params, out_params, operation, name='kernel', reduce_dims=True, preamble='', no_return=False, return_tuple=False, **kwargs)
User-defined elementwise kernel.
This class can be used to define an elementwise kernel with or without broadcasting.
__call__()
Compiles and invokes the elementwise kernel.
The compilation runs only if the kernel is not cached. Note that kernels with different argument dtypes or dimensions are not compatible. This means that a single ElementwiseKernel object may be compiled into multiple kernel binaries.
Attributes
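As a brief illustration (not part of the original reference entry), an elementwise kernel can be defined and invoked like this; the kernel name and operation are arbitrary examples:

import cupy

# Squared difference of two arrays, computed elementwise on the GPU.
squared_diff = cupy.ElementwiseKernel(
    'float32 x, float32 y',    # in_params
    'float32 z',               # out_params
    'z = (x - y) * (x - y)',   # operation: a C-like per-element expression
    'squared_diff')            # kernel name

x = cupy.arange(10, dtype=cupy.float32)
y = x * 2
z = squared_diff(x, y)         # the first call compiles and caches the kernel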
Occurs when the specified objects are about to be deleted.
Namespace: DevExpress.ExpressApp
Assembly: DevExpress.ExpressApp.v18.2.dll
event EventHandler<ObjectsManipulatingEventArgs> ObjectDeleting
Event ObjectDeleting As EventHandler(Of ObjectsManipulatingEventArgs)
The ObjectDeleting event handler receives an argument of the ObjectsManipulatingEventArgs type. The following properties provide information specific to this event.
Raise this event before objects are marked as deleted from a dataset (see BaseObjectSpace.Delete). Use the handler's ObjectsManipulatingEventArgs.Objects parameter to get the object(s) to be deleted.
When implementing the IObjectSpace interface in the BaseObjectSpace class' descendant, you don't have to declare this event. It's declared within the BaseObjectSpace class. In addition, the BaseObjectSpace.OnObjectDeleting method raises this event. So, you should only invoke the OnObjectDeleting method before objects are marked as deleted.
Using the MOM Connector Framework Class Library
The MOM Connector Framework (MCF) is a .NET Framework class library that supports the development of custom connectors between MOM and other management products. The MCF can be used as either a local class library or a Web Service.
The advantages of using the MCF to build a MOM connector include:
- Support for connector applications running on non-Windows platforms through the MCF Web Service.
- A consistent, reusable framework for developing connector applications.
- Increased performance compared to other methods of querying alert data.
- Access to an alert's Product Knowledge content.
For more information about using the MCF, see the following topics:
The online catalogue
The online catalogue is a searchable list of items you stock, driven by data in mSupply. The information can easily be updated and items added to or removed from the catalogue. By giving users access to the online catalogue you can, for example, replace the expensive printing of hard copy catalogues.
Setup
1) Tell mSupply which store to run the catalogue from
Choose File > Preferences… from the menus, and on the Web server tab select the store in the Default store for web interface drop down list. If the mSupply web server isn't already running you'll also need to click on the Start web server button. More information about these options can be found here.
2) Tell mSupply which items are to be included in the catalogue
To do this, from the menus choose Item > Show items…, click on the Find button and double click on an item you want to appear in the catalogue. This will open up the Item's details window. Click on the Misc tab on the left hand side and the screen will look like this:
In the Price list section:
- Check the On price list checkbox (any item with this checked will appear in the catalogue)
- Enter the pack size of this item that is going to appear in the catalogue in the Catalogue pack size textbox
- Enter the price for this pack size in the Catalogue price textbox. This is optional and can be left at 0 if you don't want the price included in your catalogue.
Repeat these steps for each item you want to appear in your catalogue. As with many repetitive tasks in mSupply, the OK & Next and OK & Previous buttons are your friends here.
And that's it. Setup is complete and you are now ready for users to view your catalogue.
Operation
Once the mSupply web server is running users access the catalogue using a browser. The address to visit is
where example.com is the domain of your web server.
This is what the user will see:
The catalogue can be searched by either item name (the top section) or the categories that items belong to (bottom section).
Searching by item name
To search by item name enter something in the top textbox and select the comparator in the drop down list next to it. These are the options you can choose from:
Then click on the top Search button and mSupply will search for items with names matching the options you have entered. When the search is complete the item detail screen (shown below) will be displayed and you can browse the items found.
Searching by category
The category used to search for items in the catalogue is item category 1. This category is hierarchical and has 3 levels. For more details about this category, including setting it up and assigning it to items, see here. Note that in the catalogue, Top level corresponds to level 1 of category 1, Mid level to level 2 and Bottom level to level 3.
To search by category, click on one of the 3 Search buttons in the lower section of the search screen. When you do that, mSupply will search for items belonging to the category of the level of category 1 you selected in the corresponding drop down list. If the All option is selected then mSupply will search for items belonging to all the corresponding categories at that level of category 1.
When you select an option other than All in the Top level category drop down list, the options in the Mid level drop down list are changed to be all the children of the top level category you selected. And when you select a Mid level category, the options in the Bottom level drop down list change to be the children of that mid level category.
The Bottom level category drop down list has an additional “None” option. Using this option will search for all items which are not assigned to a category 1 category.
The Item detail screen
The screen looks like this:
When you've finished browsing the items displayed you can click on the Search button on the top left hand side to return to the search screen, where you can perform another search if required.
Why is my Azure subscription disabled and how do I reactivate it?
You might have your Azure subscription disabled because your credit has expired, you reached your spending limit, you have an overdue bill, you hit your credit card limit, or the subscription was canceled by the Account Administrator. See which issue applies to you and follow the steps in this article to get your subscription reactivated.
Note
If you have a Free Trial subscription and you remove the spending limit, your subscription converts to Pay-As-You-Go at the end of the Free Trial. You keep your remaining credit for the full 30 days after you created the subscription. You also have access to free services for 12 months.
To monitor and manage billing activity for Azure, see Prevent unexpected costs with Azure billing and cost management.
Your bill is past due
To resolve past due balance, see Resolve past due balance for your Azure subscription after getting an email from Azure.
The bill exceeds your credit card limit
To resolve this issue, switch to a different credit card. Or if you're representing a business, you can switch to pay by invoice.
The subscription was accidentally canceled and you want to reactivate
If you're the Account Administrator and accidentally canceled a Pay-As-You-Go subscription, you can reactivate it in the Account Center.
- Select the canceled subscription.
Click Reactivate.
For other subscription types, contact support to have your subscription reactivated.
Need help? Contact us.
If you have questions or need help, create a support request.
This is a comprehensive list of all possible event fields appearing on MOREAL; some fields may not be populated depending on context, vendor, or available information.
Also, wherever Possible Values are listed as “n/a” that means that the field is usually a free-form text field or there are no preset values.
MOREAL Event Field documentation
Managing outstanding purchase order lines (pipeline stock)
Stock that you have ordered and waiting to be delivered is called your pipeline stock. Sometimes it's helpful and important to monitor this stock so mSupply has functions to help you do that.
Pipeline stock is represented by all the outstanding purchase order lines i.e. purchase order lines on confirmed purchase orders which haven't been fully received into mSupply yet.
To see all your pipeline stock simply choose Supplier > Show outstanding PO lines… from the menus or click on the Outstanding icon in the Purchase orders section of the Suppliers tab on the navigator. This window will open:
All lines in black are those where the expected delivery date (EDD) is after the current date i.e. where the Days to EDD column (which contains the difference between the EDD for a line and the current date) has a number greater than 0. These are items which are not yet overdue but, if the Days to EDD column contains a small number, you might want to call the supplier to check on the delivery.
All lines in red are those where the expected delivery date is on or before the current date i.e. where the Days to EDD column contains 0. These items are overdue and it probably means that the supplier could do with a call to find out what's happening with your delivery.
The Adjusted quantity column shows the actual number of items you ordered (number of packs x the packsize).
The Qty received column shows the number of items (number of packs x the packsize) you have already received into mSupply.
The Qty Outstanding column shows the remaining number of items on the order waiting to be received (Adjusted quantity - Qty received).
Double clicking on any line will open the Purchase Order with that line highlighted.
You can print the list of purchase order lines currently displayed in this window at any time by clicking on the Print button - as usual, you will be given the option of printing or exporting to Excel.
Update EDD button
If you speak with the supplier about a delivery or receive information from elsewhere about when goods are going to be delivered you can update the expected delivery date for those lines.
To do this, simply select the lines in the table that are affected then click on the Update EDD button. In the window that opens, enter the new EDD for the lines and click on the Update button. The EDD is immediately updated for the chosen lines and, if the lines were red, they will turn black.
You can select multiple lines to update using the usual Shift+click to select/deselect a range of lines and Ctrl+click (Cmd+click on Mac) to add/remove a line to/from what is currently selected.
Filtering the list
Sometimes the list of outstanding purchase order lines can be very long (just after running your annual tender for example, or when you have placed several large orders) and it can be hard to find a specific item or items expected from a particular supplier. To help in this situation, mSupply allows you to filter the displayed list.
To do this, select the type of filter you would like to apply by clicking on the filter icon just to the left of the textbox and selecting one of:
- Search by supplier or code - will show only those outstanding purchase order lines on purchase orders whose supplier name or code begins with what you type in the textbox
- Item name or code - will show only those outstanding purchase order lines whose item name or code begins with what you type in the textbox
- Days to expected delivery is less than… - will show only those outstanding purchase order lines with an expected delivery date less than the number of days you enter in the text box from the current date.
Then enter the value you wish to filter by in the textbox and click on the Find button. The list will then be changed to show only those purchase order lines matching the filter you have selected.
Finalising purchase orders
Any outstanding goods on finalised purchase orders will NOT be included in this window. So, if there are goods that you have ordered but will never receive from a supplier (maybe a substitute item has been shipped or you cancelled part of an order because a supplier said they couldn't deliver it), when everything else on the purchase order has been received you should finalise it. The goods on the purchase order that you have not received will then no longer be shown as outstanding in this window.
It is good practice to finalise purchase orders for which you have received everything because it also removes the purchase order from the list you can create a goods received note from (see Receiving goods (goods receipt function)), making it easier to find a purchase order you're wanting to create a goods received note for, and means that no changes can be made to the purchase order in the future.
But beware: don't finalise a purchase order before you have received everything the supplier is going to send because you won't be able to make any changes to it or receive goods against it using the goods receipt function (Receiving goods (goods receipt function)).
The following numbers are used by the system to internally describe the various states. They are used by EMCMD, and they are also the state numbers found in event log entries.
-1: Invalid State
0: No Mirror
1: Mirroring
2: Mirror is resyncing
3: Mirror is broken
4: Mirror is paused
5: Resync is pending
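For scripting against these codes (for example, when parsing EMCMD output or event log entries), the list above maps naturally onto a small lookup table. This snippet is illustrative only and is not part of the product itself:

# Mirror state codes as documented above.
MIRROR_STATES = {
    -1: "Invalid State",
    0: "No Mirror",
    1: "Mirroring",
    2: "Mirror is resyncing",
    3: "Mirror is broken",
    4: "Mirror is paused",
    5: "Resync is pending",
}

def describe_state(code: int) -> str:
    """Return a human-readable description for a mirror state code."""
    return MIRROR_STATES.get(code, "Unknown state")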
Package report
Overview
Package report generates human-readable benchmark reports.
Index
Package files
doc.go report.go timeseries.go weighted.go
func Percentiles ¶
func Percentiles(nums []float64) (pcs []float64, data []float64)
Percentiles returns percentile distribution of float64 slice.
type DataPoint ¶
type DataPoint struct {
	Timestamp  int64
	MinLatency time.Duration
	AvgLatency time.Duration
	MaxLatency time.Duration
	ThroughPut int64
}
type Report ¶
Report processes a result stream until it is closed, then produces a string with information about the consumed result data.
type Report interface {
	Results() chan<- Result
	// Run returns results in print-friendly format.
	Run() <-chan string
	// Stats returns results in raw data.
	Stats() <-chan Stats
}
func NewReport ¶
func NewReport(precision string) Report
func NewReportRate ¶
func NewReportRate(precision string) Report
func NewReportSample ¶
func NewReportSample(precision string) Report
func NewWeightedReport ¶
func NewWeightedReport(r Report, precision string) Report
NewWeightedReport returns a report that includes both weighted and unweighted statistics.
type Result ¶
Result describes the timings for an operation.
type Result struct {
	Start  time.Time
	End    time.Time
	Err    error
	Weight float64
}
func (*Result) Duration ¶
func (res *Result) Duration() time.Duration
type Stats ¶
Stats exposes results raw data.
type Stats struct {
	AvgTotal   float64
	Fastest    float64
	Slowest    float64
	Average    float64
	Stddev     float64
	RPS        float64
	Total      time.Duration
	ErrorDist  map[string]int
	Lats       []float64
	TimeSeries TimeSeries
}
type TimeSeries ¶
type TimeSeries []DataPoint
func (TimeSeries) Len ¶
func (t TimeSeries) Len() int
func (TimeSeries) Less ¶
func (t TimeSeries) Less(i, j int) bool
func (TimeSeries) String ¶
func (ts TimeSeries) String() string
func (TimeSeries) Swap ¶
func (t TimeSeries) Swap(i, j int)
Locations and location types
Keeping track of where items are in your store is an important part of good warehouse practice. You don't want to have to go hunting through your whole store for an item, wasting time and energy when mSupply can tell you exactly where it is!
Locations in mSupply are the places you store items. Locations can have types (e.g. normal, cold, bulk…) to help you categorise them, you can define parent/child relationships between locations to help you manage them effectively and you can even create a line drawing plan of the location to pictorially show the layout of your store.
If you use location types it will help you if they are defined before the locations that belong to them otherwise they won't be available to select when defining a location - and you'll have to go back later and edit the locations. What a waste of time! So this section explains location types first and goes on to explain about locations.
Location types
Location types give you the ability to categorise your locations. They can be used for reporting on a group of locations, but also to restrict the locations that can be used for a particular item. For setting the location type for an item, see Item basics
Choose Item > Show location types to define or show a list of available location types.
The window that appears allows you to define the criteria for the various types of location in your store - e.g. the permitted temperature range, whether location must be dark, etc.
Adding a location type
Click on New, and the window that appears allows you to enter a name for the storage type, and the permitted conditions pertaining to that storage type:
In the above example, a storage type "Refrigerator" has been defined, the permitted range of temperature being 2°C to 8°C.
Having defined your storage types, the Show location types window might look like this:
It is worth emphasising that the list does not show the actual store locations, but the types of locations.
Editing a location type
Should you wish to edit the details of any location type, double click on it in the list and change the details in the window which appears.
Locations
Viewing locations
To view the locations you have defined choose Item > Show locations and you will see a list of location codes and descriptions:
From this window you can view, edit and remove locations and their details - this is the 'location management' window. Here are the various functions of the window:
New location icon: Click this to add a location (see 'Adding a location' below).
Print icon: Click this to print the list of locations displayed in the window (see 'Printing the locations list' below).
Show warehouse icon: Click this to see a graphical representation of your warehouse. This representation shows all the layouts of the individual locations that you have created in the layout tab when adding a location (see 'Adding a location' below).
Search location: Enter some text in the text field and the list is updated as you type to show only the locations whose code or description starts with the text you have entered.
view and edit a location's details: Double click on a location in the list - see 'Viewing and editing a location' below. All the details are editable. See the 'Adding a location' section below for the meaning of the individual details.
Adding a location
Before you can associate an item with a specific location (e.g. Shelf D4, Refrigerator 2, etc.) you must define the locations in mSupply. To add a new location, click on the New location icon in the View locations window shown above. You will be shown the following window, where you can enter the details of the new location:
Code: This is how you refer to the location in mSupply and, for example, what you will select when you set an item's location.
Description: This is a description of the location e.g. “Top shelf of refrigerator 3” or “3rd shelf up in rack E” to help you identify it or remember something important about it. You will only see this in the list of locations shown above.
General tab
Under this tab, you enter the location's main details:
Location Type: Select one from the location types you have already entered (see Location types section above).
- Each item can have a Restricted to Location type set, and then you will only be able to store that item in a location with that type:
- You can set the location type for an item by viewing the item's general tab, and choosing the type from the drop-down list (Items/Show Items/Find Items - double click applicable item):
Parent: Select one from the locations already entered in mSupply. This is the location to which the location you are adding belongs. This is for descriptive purposes and does not have any functional effect in mSupply, except when viewing the warehouse layout.
Summary: Checking this means that the location is a summary location only and cannot be used for storing items. This is normally checked for all locations that are parents of others.
Comment: You can note anything you need to remember or indicate to others in here. It is only visible if you view the location's details (Item > Show locations, double click on the item in the list) later.
Total volume: The total volume of goods that you can store at the location. Volumes are stored in cubic metres [m3] but other volumes e.g. litres (l) may also be entered, provided the appropriate unit is entered following the number e.g. 5l for 5 litres. See the entry Volume per pack in Item edit - General options. Note that whatever you enter will be converted to and displayed in m3.
Knowing the volume of a location is important if, for example, you are replenishing your stock of vaccines, and you need to know if there is enough space available in the refrigerator in which you store vaccines to accommodate a new order (obviously, you would also need to know the volume of the vaccines that you are ordering).
Priority: This is used when printing a picking slip. Setting a priority for a location will override the default alphabetical ordering of shelf locations in a picking list. A location with a lower priority number will be printed before a location with a higher priority number. All locations with 0 priority are counted as having no priority and will be printed, in alphabetical order, after all locations with a priority.
Hold: If this is checked then goods in this location cannot be issued to customers. Goods can be put into the location but they cannot be issued from that location. This is particularly useful if:
- The stock needs to be kept from being issued until some inspection / approval (e.g. quarantine).
- The stock is a bulk quantity with the same expiry date as another stock line in another location from which you want stock issued. You can use this feature to force mSupply to always suggest issuing stock of this item from the 'issue' location rather than this 'bulk' location.
If you want to make the stock in an On hold location available for issue, then there are two options:
- Move the stock in that location to another location that is not On hold
- Remove the On hold status of the location here
Layout tab
Under this tab you can create a graphical plan view of the location in your store. This is useful for helping people to quickly locate any given location and presenting a graphical layout of your whole store. Locations are drawn as either rectangles (for which you enter the top left coordinate and the lengths of the 2 sides) or polygons (for which you enter a number of sequential coordinates which are connected with straight lines). Here's what the various input items mean for a rectangle, the rest we'll show you by the way of an example:
So, as an example, the coordinates are entered as above in the appropriate boxes, then the Draw button is clicked to produce the following display under the layout tab:
This has created a picture of location main1. This is the whole store or warehouse. You can't see the settings but this location will have no parent and will have its Summary checkbox checked (no items can be located here - it's just a summary location for descriptive purposes).
In our imaginary warehouse we have a set of open racking which is 'L'-shaped. We want to draw it in the warehouse so we create the location, call it 'sub1' and set its parent as Main1. If this set of shelves also has other locations in it we would also check its Summary checkbox.
To draw this location we click on the Layout tab and select Polygon as the object type. Click on the Add button to add a coordinate and then overwrite the zeros in the X and Y columns to give the correct coordinates. If you make a mistake, click on the set of coordinates in the list that is wrong and click on the Delete button to delete it. When all six co-ordinates have been entered, click on the Draw button to produce the layout displayed below:
You can do this same thing for all locations so that anyone can easily locate them in your store.
Viewing and Editing a location
As you already know from above, to view all the locations you have defined select Item > Show locations. To view and edit the details of a particular location, double click on that location in this list. You will be shown the following window:
General tab
This is the same as the General tab for adding a location (see the 'Adding a location' section above) except that its details are filled in with the details of the location you selected. To edit the details simply overwrite the current value with a new value or select another option as appropriate.
Layout tab
This is the same as the Layout tab for adding a location (see the 'Adding a location' section above) except that the current graphical representation of the location is displayed (if you've already created one). You can edit the plan view of the location if required by changing, adding or deleting co-ordinates.
Stock Tab
The Stock tab shows a list of existing stock lines stored in that particular location. A lot of information regarding the stock is displayed in the list and, as with most mSupply lists, it can be sorted on any column by clicking on the column heading:
If you want to know more information about any particular batch in the list, simply double click it and you'll be shown another window with lots of information about the batch, arranged in four tabs:
Deleting a location
To delete a location, select Item > Show locations to view the list of locations, double click on the location you want to delete (as if you wanted to view all its details) and then click on the Delete button at the bottom of the window. If you confirm the deletion, the location is removed.
Merging two locations
If you want to remove a location from further use in mSupply (for example, you might have accidentally double-entered a location) this command can be used.
When you choose Item > Merge two locations, this window is shown:
Use extreme caution! This operation will affect all historical records of the location you delete. They will be moved to the location you are keeping. The operation can only be undone by reverting to a backup copy of your data file.
In the window displayed enter the location to keep, and then the location to merge. When you have checked that the information is correct, click the OK button.
Dashboard¶
The dashboard is the backend interface for managing the store. That includes the
product catalogue, orders and stock, offers etc. It is intended as a
complete replacement of the Django admin interface.
The app itself only contains a view that serves as a kind of homepage, and
some logic for managing the navigation (in
nav.py). There’s several sub-apps
that are responsible for managing the different parts of the Oscar store.
Permission-based dashboard¶
Staff users (users with
is_staff==True) get access to all views in the
dashboard. To better support Oscar’s use for marketplace scenarios, the
permission-based dashboard has been introduced. If a non-staff user has
the
partner.dashboard_access permission set, they are given access to a subset
of views, and their access to products and orders is limited.
AbstractPartner instances
have a
users field.
Prior to Oscar 0.6, this field was not used. Since Oscar 0.6, it is used solely
for modelling dashboard access.
If a non-staff user with the
partner.dashboard_access permission is in
users, they can:
- Create products. It is enforced that at least one stock record’s partner has the current user in
users.
- Update products. At least one stock record must have the user in the stock record’s partner’s
users.
- Delete and list products. Limited to products the user is allowed to update.
- Managing orders. Similar to products, a user gets access if one of an order's lines is associated with a matching partner. By default, the user will get access to all lines of the order, even if they supply only one of them. If you need users to see only their own lines or to apply additional filtering, you can customize the get_order_lines() method.
For many marketplace scenarios, it will make sense to ensure at checkout that a basket only contains lines from one partner. Please note that the dashboard currently ignores any other permissions, including Django’s default permissions.
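For example, granting a non-staff user access to the permission-based dashboard might look roughly like the following sketch. The user name and partner name are assumptions for illustration; only the partner.dashboard_access permission and the users field come from the documentation above:

from django.contrib.auth import get_user_model
from django.contrib.auth.models import Permission
from oscar.core.loading import get_model

Partner = get_model('partner', 'Partner')
User = get_user_model()

# Assumed objects, for illustration only.
user = User.objects.get(username='marketplace-seller')
partner = Partner.objects.get(name='Acme Supplies')

# Grant the dashboard permission referred to above...
permission = Permission.objects.get(
    content_type__app_label='partner', codename='dashboard_access')
user.user_permissions.add(permission)

# ...and add the user to the partner's users field so product and order
# access is limited to that partner's stock records and order lines.
partner.users.add(user)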
Note
The permission-based dashboard currently does not support parent or child products. Supporting this requires a modelling change. If you require this, please get in touch so we can first learn about your use case.
Views¶
class oscar.apps.dashboard.views.IndexView(**kwargs)[source]
An overview view which displays several reports about the shop.
Supports the permission-based dashboard. It is recommended to add an index_nonstaff.html template because Oscar's default template will display potentially sensitive store information.
get_active_site_offers()[source]¶
Return active conditional offers of type "site offer". The returned Queryset of site offers is filtered by end date greater than the current date.
get_active_vouchers()[source]¶
Get all active vouchers. The returned Queryset of vouchers is filtered by end date greater than the current date.
get_hourly_report(hours=24, segments=10)[source]¶
Get a report of order revenue split up into hourly chunks. A report is generated for the last hours (default=24) from the current time. The report provides max_revenue of the hourly order revenue sum, y-range as the labeling for the y-axis in a template, and order_total_hourly, a list of properties for hourly chunks. segments defines the number of labeling segments used for the y-axis when generating the y-axis labels (default=10).
get_number_of_promotions(abstract_base=<class 'oscar.apps.promotions.models.AbstractPromotion'>)[source]¶
Get the number of promotions for all promotions derived from abstract_base. All subclasses of abstract_base are queried and if another abstract base class is found this method is executed recursively.
get_open_baskets(filters=None)[source]¶
Get all open baskets. If a filters dictionary is provided, it is applied to all open baskets and only filtered results are returned.
Linux/Unix

JAVA_HOME=/path/to/jdk1.8.0_121; export JAVA_HOME
Windows

set JAVA_HOME="C:\Program Files\Java\jdk1.8.0_121"

Uninstalling GemFire
Stripe - Test Purchases
The Stripe Payment gateway referenced on this page allows you to connect your Stripe.com account to Easy Digital Downloads. Learn more at the main Easy Digital Downloads website.
In order to test the Stripe payment gateway, you will need to verify your Stripe account is in 'test' mode, and you will need to connect Easy Digital Downloads to your Stripe account.
EDD Test Mode
Navigate to Downloads > Settings > Payment Gateways and enable Test Mode.
EDD Stripe Settings
From within Easy Digital Downloads Payment Gateway settings area, enable the Stripe option. If Stripe does not show up under Payment Gateways, this means the Stripe extension has not been installed and activated. Connect Easy Digital Downloads to your Stripe account using the Connect with Stripe button.
Now in test mode, add a product to your cart and proceed to checkout.
You can use the card number 4242424242424242 with any CVC and a valid expiration date (any date in the future).
Administrators can define a default mapping policy to specify that applications submitted by users or groups are automatically submitted to specific queues.
To specify that all applications submitted by a specific user are submitted to a specific queue, use the following mapping assignment:
u:user1:queueA
This defines a mapping assignment for applications submitted by the "user1" user to be submitted to queue "queueA" by default.
To specify that all applications submitted by a specific group of users are submitted to a specific queue, use the following mapping assignment:
g:group1:queueB
This defines a mapping assignment for applications submitted by any user in the group "group1" to be submitted to queue "queueB" by default.
The Queue Mapping definition can consist of multiple assignments, in order of priority.
Consider the following example:
<property>
  <name>yarn.scheduler.capacity.queue-mappings</name>
  <value>u:maria:engineering,g:webadmins:weblog</value>
</property>
In this example there are two queue mapping assignments. The
u:maria:engineering mapping will be respected first, which means all
applications submitted by the user "maria" will be submitted to the "engineering" queue .
The
g:webadmins:weblog mapping will be processed after the first mapping --
thus, even if user "maria" belongs to the "webadmins" group, applications submitted by
"maria" will still be submitted to the "engineering" queue.
To specify that all applications are submitted to the queue with the same name as a group, use this mapping assignment:
u:%user:%primary_group
Consider the following example configuration. On this cluster, there are two groups: "marketing" and "engineering". Each group has the following users:
In "marketing", there are 3 users: "angela", "rahul", and "dmitry".
In "engineering", there are 2 users: "maria" and "greg".
<property>
  <name>yarn.scheduler.capacity.queue-mappings</name>
  <value>u:%user:%primary_group</value>
</property>
With this queue mapping, any application submitted by members of the "marketing" group -- "angela", "rahul", or "dmitry" -- will be submitted to the "marketing" queue. Any application submitted by members of the "engineering" group -- "maria" or "greg" -- will be submitted to the "engineering" queue.
To specify that all applications are submitted to the queue with the same name as a user, use this mapping assignment:
u:%user:%user
This requires that queues are set up with the same name as the users. With this queue mapping, applications submitted by user "greg" will be submitted to the queue "greg".
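Following the convention of the earlier examples, this mapping could be configured as shown below (a sketch using the same yarn.scheduler.capacity.queue-mappings property):

<property>
  <name>yarn.scheduler.capacity.queue-mappings</name>
  <value>u:%user:%user</value>
</property>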
If configured, you can override default queue mappings and submit applications that are
specified for queues, other than those defined in the default queue mappings. Override
default queue mapping is disabled (set to
false) by default.
<property>
  <name>yarn.scheduler.capacity.queue-mappings-override.enable</name>
  <value>false</value>
  <description>
    If a queue mapping is present and override is set to true, it will override the queue value specified by the user. This can be used by administrators to place jobs in queues that are different than the one specified by the user. The default is false - user can specify to a non-default queue.
  </description>
</property>
To enable queue mapping override, set the property to
true in the
capacity-scheduler.xml file.
Consider the following example in the case where queue mapping override has been enabled:
<property>
  <name>yarn.scheduler.capacity.queue-mappings</name>
  <value>u:maria:engineering,g:webadmins:weblog</value>
</property>
If user "maria" explicitly submits an application to the "marketing" queue, the default queue assignment of "engineering" is overridden, and the application is submitted to the "marketing" queue.
NameNodes
Understand the HDFS metadata directory details taken from a NameNode.
The following example shows an HDFS metadata directory taken from a NameNode. This shows the output of running the tree command on the metadata directory, which is configured by setting dfs.namenode.name.dir in hdfs-site.xml.
data/dfs/name
├── current
│   ├── VERSION
│   ├── edits_0000000000000000001-0000000000000000007
│   ├── edits_0000000000000000008-0000000000000000015
│   ├── edits_0000000000000000016-0000000000000000022
│   ├── edits_0000000000000000023-0000000000000000029
│   ├── edits_0000000000000000030-0000000000000000030
│   ├── edits_0000000000000000031-0000000000000000031
│   ├── edits_inprogress_0000000000000000032
│   ├── fsimage_0000000000000000030
│   ├── fsimage_0000000000000000030.md5
│   ├── fsimage_0000000000000000031
│   ├── fsimage_0000000000000000031.md5
│   └── seen_txid
└── in_use.lock
In this example, the same directory has been used for both
fsimage and
edits. Alternative configuration
options are available that allow separating
fsimage and
edits into different directories. Each file within this
directory serves a specific purpose in the overall scheme of metadata
persistence:
- VERSION
Text file that contains the following elements:
- layoutVersion
Version of the HDFS metadata format. When you add new features that require a change to the metadata format, you change this number. An HDFS upgrade is required when the current HDFS software uses a layout version that is newer than the current one.
- namespaceID/clusterID/blockpoolID
Unique identifiers of an HDFS cluster. These identifiers are used to prevent DataNodes from registering accidentally with an incorrect NameNode that is part of a different cluster. These identifiers also are particularly important in a federated deployment. Within a federated deployment, there are multiple NameNodes working independently. Each NameNode serves a unique portion of the namespace (namespaceID) and manages a unique set of blocks (blockpoolID). The clusterID ties the whole cluster together as a single logical unit. This structure is the same across all nodes in the cluster.
- storageType
Always NAME_NODE for the NameNode, and never JOURNAL_NODE.
- cTime
Creation time of file system state. This field is updated during HDFS upgrades.
- edits_start transaction ID-end transaction ID
Finalized and unmodifiable edit log segments. Each of these files contains all of the edit log transactions in the range defined by the file name. In an High Availability deployment, the standby can only read up through the finalized log segments. The standby NameNode is not up-to-date with the current edit log in progress. When an HA failover happens, the failover finalizes the current log segment so that it is completely caught up before switching to active.
- fsimage_end transaction ID
Contains the complete metadata image up through the end transaction ID in the file name. Each fsimage file also has a corresponding .md5 file containing an MD5 checksum, which HDFS uses to guard against disk corruption.
- seen_txid
Contains the last transaction ID of the last checkpoint (merge of edits into an fsimage) or edit log roll (finalization of the current edits_inprogress and creation of a new one). This is not the last transaction ID accepted by the NameNode. The file is not updated on every transaction, only on a checkpoint or an edit log roll. The purpose of this file is to try to identify if edits are missing during startup. It is possible to configure the NameNode to use separate directories for fsimage and edits files. If the edits directory accidentally gets deleted, then all transactions since the last checkpoint would go away, and the NameNode starts up using just fsimage at an old state. To guard against this, NameNode startup also checks seen_txid to verify that it can load transactions at least up through that number. It aborts startup if it cannot verify that it can load those transactions.
- in_use.lock
Lock file held by the NameNode process, used to prevent multiple NameNode processes from starting up and concurrently modifying the directory.
Public Python API¶
Flake8 3.0.0 presently does not have a public, stable Python API.
When it does it will be located in
flake8.api and that will
be documented here.
Legacy API¶
When Flake8 broke it’s hard dependency on the tricky internals of
pycodestyle, it lost the easy backwards compatibility as well. To help
existing users of that API we have
flake8.api.legacy. This module
includes a couple classes (which are documented below) and a function.
The main usage that the developers of Flake8 observed was using the
get_style_guide() function and then calling
check_files(). To a lesser extent,
people also seemed to use the
get_statistics()
method on what
check_files returns. We then sought to preserve that
API in this module.
Let’s look at an example piece of code together:
from flake8.api import legacy as flake8

style_guide = flake8.get_style_guide(ignore=['E24', 'W503'])
report = style_guide.check_files([...])
assert report.get_statistics('E') == [], 'Flake8 found violations'
This represents the basic universal usage of all existing Flake8 2.x integrations. Each example we found was obviously slightly different, but this is kind of the gist, so let’s walk through this.
Everything that is backwards compatible for our API is in the
flake8.api.legacy submodule. This is to indicate, clearly, that
the old API is being used.
We create a
flake8.api.legacy.StyleGuide by calling
flake8.api.legacy.get_style_guide(). We can pass options
to
flake8.api.legacy.get_style_guide() that correspond to the command-line options one might use.
For example, we can pass
ignore,
exclude,
format, etc.
Our legacy API does not enforce legacy behaviour, so we can combine
ignore and
select like we might on the command-line, e.g.,
style_guide = flake8.get_style_guide(
    ignore=['E24', 'W5'],
    select=['E', 'W', 'F'],
    format='pylint',
)
Once we have our
flake8.api.legacy.StyleGuide we can use the same methods that we used before,
namely
StyleGuide.check_files(paths=None)
Run collected checks on the files provided.
This will check the files passed in and return a Report instance.
StyleGuide.input_file(filename, lines=None, expected=None, line_offset=0)
Run collected checks on a single file.
This will check the file passed in and return a Report instance.
Warning
These are not perfectly backwards compatible. Not all arguments are respected, and some of the types necessary for something to work have changed.
Most people, we observed, were using check_files(). You can use this to specify a list of filenames or directories to check. In Flake8 3.0, however, we return a different object that has similar methods. We return a flake8.api.legacy.Report, which has the get_statistics() method. Most usage of this method that we noted was as documented above. Keep in mind, however, that it provides a list of strings and not anything more malleable.
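As a small additional sketch (not from the original documentation), a single file can be checked with input_file() and its statistics inspected; the file name is a placeholder:

from flake8.api import legacy as flake8

style_guide = flake8.get_style_guide(select=['E', 'W', 'F'])

# Check one file instead of a list of paths; "example.py" is a placeholder.
report = style_guide.input_file('example.py')

# get_statistics() returns a list of strings, one per aggregated violation code.
warnings = report.get_statistics('W')
if warnings:
    print('\n'.join(warnings))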
Autogenerated Legacy Documentation¶
Module containing shims around Flake8 2.x behaviour.
Previously, users would import
get_style_guide() from
flake8.engine.
In 3.0 we no longer have an “engine” module but we maintain the API from it.
class flake8.api.legacy.StyleGuide(application)
Public facing object that mimics Flake8 2.0's StyleGuide.
Note
There are important changes in how this object behaves compared to the StyleGuide object provided in Flake8 2.x.
Warning
This object should not be instantiated directly by users.
Changed in version 3.0.0.
class flake8.api.legacy.Report(application)
Public facing object that mimics Flake8 2.0's API.
Note
There are important changes in how this object behaves compared to the object provided in Flake8 2.x.
Warning
This should not be instantiated by users.
Changed in version 3.0.0.
bag.git.delete_old_branches module¶
A solution to the problem of cleaning old git branches.
A command that removes git branches that have been merged onto the current branch.
Usage:
# Try this to see the supported arguments:
delete_old_branches --help

# Ensure you are in the branch "master" before starting.
# Test first, by printing the names of the branches that would be deleted:
delete_old_branches -l -r origin -y 0 --dry

# If you agree, run the command again without --dry:
delete_old_branches -l -r origin -y 0
If you don’t like the 2 steps, just omit
-y and the command will confirm
each branch with you before deleting it.
The zero in the above example is a number of days since the branch was merged into the branch you are in.
Don't forget to do a "git fetch --all --prune" on other machines after deleting remote branches. Other machines may still have obsolete tracking branches (see them with "git branch -a").
class bag.git.delete_old_branches.Branch(name, remote='')
Bases: object
bag.git.delete_old_branches.delete_old_branches(days, dry=False, locally=False, remote=None, y=False, ignore=['develop', 'master'])
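As an illustrative sketch (not from the original docs), the same function can be driven from Python rather than the command line; the argument values below are arbitrary examples that mirror the signature shown above:

from bag.git.delete_old_branches import delete_old_branches

# Example values only: list branches merged at least 14 days ago, both locally
# and on the "origin" remote, without prompting for each branch.
# dry=True means nothing is deleted; drop it to actually delete the branches.
delete_old_branches(
    days=14,
    dry=True,
    locally=True,
    remote='origin',
    y=True,
    ignore=['develop', 'master'],
)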
Get-SPClaimTypeEncoding
Syntax
Get-SPClaimTypeEncoding [-AssignmentCollection <SPAssignmentCollection>] [-ClaimType <String>] [-EncodingCharacter <Char>] [<CommonParameters>]
Description
The Get-SPClaimTypeEncoding cmdlet returns the list of encoding characters and the claim types they are mapped to, as surfaced by Microsoft.SharePoint.Administration.Claims.SPClaim.ToEncodedString and Microsoft.SharePoint.Administration.Claims.SPClaim.ClaimType respectively.
For permissions and the most current information about Windows PowerShell for SharePoint Products, see the online documentation at SharePoint Server Cmdlets.
Examples
--------------EXAMPLE 1--------
Get-SPClaimTypeEncoding
This example returns a list of all types of claims in the farm.
--------------EXAMPLE 2--------
Get-SPClaimTypeEncoding -ClaimType ""
This example returns a specific claim type by using the ClaimType parameter.
Parameters
-EncodingCharacter
Specifies an encoding character that is mapped to a type of input claim.
-ClaimType
Specifies a type of claim that is mapped to an input character.
Related Topics
Saving Data in Datasets
Provides an overview of how changes are made in a dataset and how the dataset tracks information about changes in order to save those changes to a database.
Saving Entity Data
Describes how to save changes in ADO.NET Entity Framework and WCF Data Services applications.
The navigation panel displays the sections of the console that the user is allowed to access, depending on user roles and privileges.
Note
The Administration tab is only visible to users with the administrator role. For more information about the Administration section, see Administering.
After you click a tab, its contents are displayed in a tree structure in the navigation panel.
In the Navigation pane, if you click a node under a tab, and then switch to another tab, the node that you accessed first is bookmarked. This feature allows you to quickly move between nodes belonging to different tabs.
To reset the navigation to the original status, click BMC TrueSight Capacity Optimization in the upper left corner of the console.
A majority of the tasks are performed in the console. At the top of the console, the Breadcrumb bar displays information about the current session. It keeps track of the user's location in the console; each part of the breadcrumb trail provides a link back to a parent page in the hierarchical structure. The Description section on the right presents the main properties of the object selected in the navigation panel. Depending on the selected object, different tools and links are shown in the Tools and Links bar.
The top-right corner of the TrueSight Capacity Optimization console displays the user name that is logged on to the application and contains the following links:
Help: Enables the user to access the online Help for the application
Tip
To view context sensitive help for the section of the screen that you are viewing, click Helpfrom the top of the console. Click to view the home page of the product documentation.
The Welcome page displays a notification area that lists the failures, warnings, and errors.
The following list describes the various elements and their use:
ERRfilter is applied.
A notification iconalerts you to diagnostics messages, user activity status and so on, and is displayed on the top right of the screen at all times. The number of alerts are also denoted. Click the icon to display the notification pop-up list. The following image is an example of the pop-up list.
The following list describes the various elements and their use:
The Welcome page of the Home tab displays saved bookmarks under Recent bookmarks. The bookmarks are grouped as Private bookmarks and Public bookmarks. Private bookmarks are based on user login. Public bookmarks are visible to all users. A maximum of 10 bookmarks are displayed for each group, in a descending order of when they were accessed, with the most recently accessed bookmark listed on top.
Example of Recent bookmarks on the Welcome page:
For more information, see Bookmarks.
Note
If you have not saved any bookmarks, the Recent bookmarks list is not displayed.
At the top of the console, the Description section shows a brief description and properties of the object selected in the navigation panel. You can view and edit an object's associated tags; for more information, see Working with tags.
Tip
TrueSight Capacity Optimization objects are identified by a unique ID, which you can see by placing the cursor over the object's icon.
By default, only a subset of all available properties is shown on the page.To display all the properties, click the Show/hide details control to expand the box.
Command buttons such as Edit, Delete, and so on are used to perform some basic actions. Navigation links such as Systems, Business Drivers, and so on direct you to different sections.
To access your user profile page, click Home in the top bar of the console, and in the Navigation panel, select My profile.
You can use the profile page to change your password, edit your preferences, and view your user roles and access groups.
For more information about modifying your profile settings, see Modifying profile settings. | https://docs.bmc.com/docs/display/btco107/Getting+started+with+the+TrueSight+Capacity+Optimization+console | 2019-09-15T17:27:39 | CC-MAIN-2019-39 | 1568514571651.9 | [] | docs.bmc.com |
Can I receive Brekeke's product in DVD (or CD) format?
No, we don’t ship our products in DVD nor CD media format.
All of our products are delivered to our customers electronically via email. Product’s binary files are available on our website for you to download.
See also:
Get software (binary file) for purchased products | https://docs.brekeke.com/sales/can-i-receive-brekekes-product-in-dvd-or-cd-format | 2019-09-15T16:46:59 | CC-MAIN-2019-39 | 1568514571651.9 | [] | docs.brekeke.com |
When you install a Cloud Probe, you must enter the following Collector connection details:
In the Cloud Probe Maintenance Tool , you can change the configuration of the Collector to which the Cloud Probe is connected.
The Cloud Probe starts monitoring automatically after installation. You can monitor the connection status of the Cloud Probe instances and the last time that data was sent to the Collector.
The following topics provide information about, and instructions for, installing the Real User Cloud Probe: | https://docs.bmc.com/docs/display/TSOMD107/Installing+the+Cloud+Probe | 2019-09-15T17:15:57 | CC-MAIN-2019-39 | 1568514571651.9 | [] | docs.bmc.com |
Contents Security Operations Previous Topic Next Topic Automatic lookup of suspicious emails for threats Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Automatic lookup of suspicious emails for threats Threat Intelligence allows you to automatically handle the checking of suspicious emails for malware. Before you begin Role required: admin About this taskThe first step is to provide the email address that users are instructed to forward their suspicious emails to. By setting up an email address for your users to forward suspicious emails to, the emails are automatically sent to the lookup source, and IP addresses and URLs are parsed and validated. Security incidents can be created to follow up on any emails with attached malware or links to known bad websites. Regardless of the results, a reply email is sent to the requester with the results of the lookup. lookup purposes. Click Update. A lookup request is created to lookup the files attached to the email. If the lookup results in the discovery of malware, a security incident can be created. Either way, a reply email is sent to the requester with the results of the lookup. Related tasksSubmit an IoC Lookup request with Threat IntelligenceSubmit an IoC Lookup request from the Security Incident CatalogSubmit an IoC Lookup request from a security incidentView the lookup queueView lookup resultsRelated conceptsIoC Lookup email notifications On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/istanbul-security-management/page/product/threat-intelligence/task/t_ConfigureScanEmailInboundAction.html | 2019-09-15T16:50:04 | CC-MAIN-2019-39 | 1568514571651.9 | [] | docs.servicenow.com |
Contents Now Platform Custom Business Applications Previous Topic Next Topic Scripted REST API versioning Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Scripted REST API versioning Scripted REST APIs may be versioned, allowing you to test and deploy changes without impacting existing integrations. Enable versioning By default, new scripted REST APIs are not versioned. To enable versioning, click Enable versioning on the Scripted REST API form.Note: To continue supporting non-versioned URLs after enabling versioning, select a version as the default version. Default version A version may be marked as default. Specifying a default version allows users to access that version using a web service URL without a version number. If more than one active version is marked as default, the latest default version is used as the default. Add a version To add a new version to a scripted REST service, click Add new version on the Scripted REST API form. When you add a new version, you can copy resources from an existing version. Related tasksEnable versioning for a scripted REST APIRelated referenceScripted REST API URIs On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/jakarta-application-development/page/integrate/custom-web-services/concept/c_CustomWebServiceVersions.html | 2019-09-15T16:55:32 | CC-MAIN-2019-39 | 1568514571651.9 | [] | docs.servicenow.com |
GetFilter
Returns the details of the filter specified by the filter name.
Request Syntax
GET /detector/
detectorId/filter/
filterNameHTTP/1.1
URI Request Parameters
The request uses the following URI parameters.
- detectorId
The unique ID of the detector that the filter is associated with.
Length Constraints: Minimum length of 1. Maximum length of 300.
Required: Yes
- filterName
The name of the filter you want to get.
Required: Yes
Request Body
The request does not have a request body.
Response Syntax
HTTP/1.1 200 Content-type: application/json { "action": "string", "description": "string", "findingCriteria": { "criterion": { "string" : { "eq": [ "string" ], "equals": [ "string" ], "greaterThan": number, "greaterThanOrEqual": number, "gt": number, "gte": number, "lessThan": number, "lessThanOrEqual": number, "lt": number, "lte": number, "neq": [ "string" ], "notEquals": [ "string" ] } } }, "name": "string", "rank": number, "tags": { "string" : "string" } }
Response Elements
If the action is successful, the service sends back an HTTP 200 response.
The following data is returned in JSON format by the service.
- action
Specifies the action that is to be applied to the findings that match the filter.
Type: String
Length Constraints: Minimum length of 1. Maximum length of 300.
Valid Values:
NOOP | ARCHIVE
- description
The description of the filter.
Type: String
Length Constraints: Minimum length of 0. Maximum length of 512.
- findingCriteria
Represents the criteria to be used in the filter for querying findings.
Type: FindingCriteria object
- name
The name of the filter.
Type: String
Length Constraints: Minimum length of 3. Maximum length of 64.
- rank
Specifies the position of the filter in the list of current filters. Also specifies the order in which this filter is applied to the findings.
Type: Integer
Valid Range: Minimum value of 1. Maximum value of 100.
The tags of the filter resource.
Type: String to string map
Map Entries: Maximum number of 200 items.
Key Length Constraints: Minimum length of 1. Maximum length of 128.
Key Pattern:
^(?!aws:)[a-zA-Z+-=._:/]+$
Value Length Constraints: Maximum length of 256. AWS SDKs, see the following: | https://docs.aws.amazon.com/guardduty/latest/APIReference/API_GetFilter.html | 2022-08-08T05:53:41 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs.aws.amazon.com |
Workflow.
1. Creating a model scenario#
To link together our observations to our ancillary data we can create a
ModelScenario object, as shown in the previous tutorial 2_Comparing_with_emissions.ipynb, using suitable keywords to grab the data from the object store.
from openghg.analyse import ModelScenario scenario = ModelScenario(site="TAC", inlet="185m", domain="EUROPE", species="co2", source="natural", start_date="2017-07-01", end_date="2017-07-07")}'")
2.) | https://docs.openghg.org/tutorials/cloud/3_Working_with_co2.html | 2022-08-08T04:16:37 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs.openghg.org |
Alert Queues
How Alert Queues Work
A typical alert workflow:
- When a rule flags a transaction event, it generates an alert.
- Each alert is then sent to a designated alert queue.
- An agent investigates an alert in his team alert queue.
- An agent resolves the alert with a workflow button such as
Close Alert.
Alert queues are simply a group of related alerts. Alerts in a queue can be investigated by agents in the team's queue.
Queues essentially triage alerts and help streamline the review process by agents.
Administrators can create alert queues and determine which team work on which queues.
Default Queue
There is a default queue for all alerts that aren’t configured to route to any other alert queue.
- The default queue cannot be deleted.
- All agents can view the default queue.
Alerts Queues in your Dashboard
To explore alerts and queues, head to the Alerts page on your dashboard:
This pane is organized into three tabs:
My Alerts
The My Alerts tab shows all alerts assigned to you.
From here, you can select the next alert to investigate. You can also get more alerts by clicking the Get More Alerts button.
Queues
The Queues tab shows all alert queues in the environment. In this tab, you can select a queue, and view all open and closed alerts within it.
You can also edit queues, to do things like add teams or rules to the queue, or configure the order in which agents investigate queues.
A rule can only be associated with one (1) queue. Adding a rule to a queue will disassociate it from any other queue.
I can't see the Queues tab!
The Queues tab can only be seen by administrators and agents with special permissions.
Admin
The Admin tab shows all alerts.
This tab is where you can run bulk actions on alerts. For example, you can reassign a group of alerts, change their queue, or mass resolve them.
You can also use the filter to find specific alerts.
I can't see the Admin tab!
The Admin tab can only be seen by administrators and agents with special permissions.
External Alerts
External alerts sent via the API (by your developers) will either go to a default queue or to the specific queue designated by your developer in code.
Updated 3 months ago | https://docs.unit21.ai/u21/docs/alerts-queues | 2022-08-08T03:32:21 | CC-MAIN-2022-33 | 1659882570765.6 | [array(['https://files.readme.io/ba8df2a-Unit21-Queues.jpg',
'Unit21-Queues.jpg 1600'], dtype=object)
array(['https://files.readme.io/ba8df2a-Unit21-Queues.jpg',
'Click to close... 1600'], dtype=object)
array(['https://files.readme.io/3c7ebeb-Unit21-Alerts-Tab-3.png',
'Unit21-Alerts-Tab-3.png 5344'], dtype=object)
array(['https://files.readme.io/3c7ebeb-Unit21-Alerts-Tab-3.png',
'Click to close... 5344'], dtype=object)
array(['https://files.readme.io/8801305-Unit21-Alerts-Tab-0.png',
'Unit21-Alerts-Tab-0.png 5344'], dtype=object)
array(['https://files.readme.io/8801305-Unit21-Alerts-Tab-0.png',
'Click to close... 5344'], dtype=object)
array(['https://files.readme.io/20da063-Unit21-Alerts-Tab-1.png',
'Unit21-Alerts-Tab-1.png 5344'], dtype=object)
array(['https://files.readme.io/20da063-Unit21-Alerts-Tab-1.png',
'Click to close... 5344'], dtype=object)
array(['https://files.readme.io/9d8c960-Unit21-Alerts-Tab-2.png',
'Unit21-Alerts-Tab-2.png 5344'], dtype=object)
array(['https://files.readme.io/9d8c960-Unit21-Alerts-Tab-2.png',
'Click to close... 5344'], dtype=object) ] | docs.unit21.ai |
You are looking at documentation for an older release. Not what you want? See the current release documentation.
The first way
Select a topic to edit by ticking its respective checkbox.
Click
on the Action bar, then click
Edit
from the drop-down menu that appears.
Make changes on the topic. Leave the reason for editing in the Reason field if needed.
The second way
Follow the steps in the Editing a topic section for regular users. | https://docs-old.exoplatform.org/public/topic/PLF50/PLFUserGuide.BuildingYourForum.Moderator.ModeratingTopics.EditingTopic.html | 2022-08-08T05:14:21 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs-old.exoplatform.org |
FreestyleLineStyle(ID)
base classes —
bpy_struct,
ID
- class bpy.types.FreestyleLineStyle(ID)
Freestyle line style, reusable by multiple line sets
- alpha
Base alpha transparency, possibly modified by alpha transparency modifiers
- Type
float in [0, 1], default 1.0
- alpha_modifiers
List of alpha transparency modifiers
- Type
LineStyleAlphaModifiers
bpy_prop_collectionof
LineStyleAlphaModifier, (readonly)
- caps
Select the shape of both ends of strokes
BUTTButt – Butt cap (flat).
ROUNDRound – Round cap (half-circle).
SQUARESquare – Square cap (flat and extended).
- Type
enum in [‘BUTT’, ‘ROUND’, ‘SQUARE’], default ‘BUTT’
- chaining
Select the way how feature edges are jointed to form chains
PLAINPlain – Plain chaining.
SKETCHYSketchy – Sketchy chaining with a multiple touch.
- Type
enum in [‘PLAIN’, ‘SKETCHY’], default ‘PLAIN’
- color
Base line color, possibly modified by line color modifiers
- Type
float array of 3 items in [0, inf], default (0.0, 0.0, 0.0)
- color_modifiers
List of line color modifiers
- Type
LineStyleColorModifiers
bpy_prop_collectionof
LineStyleColorModifier, (readonly)
- geometry_modifiers
List of stroke geometry modifiers
- Type
LineStyleGeometryModifiers
bpy_prop_collectionof
LineStyleGeometryModifier, (readonly)
- integration_type
Select the way how the sort key is computed for each chain
MEANMean – The value computed for the chain is the mean of the values obtained for chain vertices.
MINMin – The value computed for the chain is the minimum of the values obtained for chain vertices.
MAXMax – The value computed for the chain is the maximum of the values obtained for chain vertices.
FIRSTFirst – The value computed for the chain is the value obtained for the first chain vertex.
LASTLast – The value computed for the chain is the value obtained for the last chain vertex.
- Type
enum in [‘MEAN’, ‘MIN’, ‘MAX’, ‘FIRST’, ‘LAST’], default ‘MEAN’
- length_max
Maximum curvilinear 2D length for the selection of chains
- Type
float in [0, 10000], default 10000.0
- length_min
Minimum curvilinear 2D length for the selection of chains
- Type
float in [0, 10000], default 0.0
- material_boundary
If true, chains of feature edges are split at material boundaries
- Type
boolean, default False
- panel
Select the property panel to be shown
STROKESStrokes – Show the panel for stroke construction.
COLORColor – Show the panel for line color options.
ALPHAAlpha – Show the panel for alpha transparency options.
THICKNESSThickness – Show the panel for line thickness options.
GEOMETRYGeometry – Show the panel for stroke geometry options.
TEXTURETexture – Show the panel for stroke texture options.
- Type
enum in [‘STROKES’, ‘COLOR’, ‘ALPHA’, ‘THICKNESS’, ‘GEOMETRY’, ‘TEXTURE’], default ‘STROKES’
- sort_key
Select the sort key to determine the stacking order of chains
DISTANCE_FROM_CAMERADistance from Camera – Sort by distance from camera (closer lines lie on top of further lines).
2D_LENGTH2D Length – Sort by curvilinear 2D length (longer lines lie on top of shorter lines).
PROJECTED_XProjected X – Sort by the projected X value in the image coordinate system.
PROJECTED_YProjected Y – Sort by the projected Y value in the image coordinate system.
- Type
enum in [‘DISTANCE_FROM_CAMERA’, ‘2D_LENGTH’, ‘PROJECTED_X’, ‘PROJECTED_Y’], default ‘DISTANCE_FROM_CAMERA’
- sort_order
Select the sort order
DEFAULTDefault – Default order of the sort key.
REVERSEReverse – Reverse order.
- Type
enum in [‘DEFAULT’, ‘REVERSE’], default ‘DEFAULT’
- texture_slots
Texture slots defining the mapping and influence of textures
- Type
LineStyleTextureSlots
bpy_prop_collectionof
LineStyleTextureSlot, (readonly)
- thickness
Base line thickness, possibly modified by line thickness modifiers
- Type
float in [0, 10000], default 3.0
- thickness_modifiers
List of line thickness modifiers
- Type
LineStyleThicknessModifiers
bpy_prop_collectionof
LineStyleThicknessModifier, (readonly)
- thickness_position
Thickness position of silhouettes and border edges (applicable when plain chaining is used with the Same Object option)
CENTERCenter – Silhouettes and border edges are centered along stroke geometry.
INSIDEInside – Silhouettes and border edges are drawn inside of stroke geometry.
OUTSIDEOutside – Silhouettes and border edges are drawn outside of stroke geometry.
RELATIVERelative – Silhouettes and border edges are shifted by a user-defined ratio.
- Type
enum in [‘CENTER’, ‘INSIDE’, ‘OUTSIDE’, ‘RELATIVE’], default ‘CENTER’
- thickness_ratio
A number between 0 (inside) and 1 (outside) specifying the relative position of stroke thickness
- Type
float in [0, 1], default 0.5
- use_angle_max
Split chains at points with angles larger than the maximum 2D angle
- Type
boolean, default False
- use_angle_min
Split chains at points with angles smaller than the minimum 2D angle
- Type
boolean, default False
- use_same_object
If true, only feature edges of the same object are joined
- Type
boolean, default True
- classmethod bl_rna_get_subclass(id, default=None)
- Parameters
id (string) – The RNA type identifier.
- Returns
The RNA type or default when not found.
- Return type
bpy.types.Structsubclass
Inherited Properties
Inherited Functions
References | https://docs.blender.org/api/3.3/bpy.types.FreestyleLineStyle.html | 2022-08-08T05:18:40 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs.blender.org |
Custom Schedules
Cloudback provides build-in schedules for daily backups out of the box. If you want to backup on a weekly or monthly basis, you can create your own schedule using the
Schedule Manager from the
Main Menu. Once you create your own schedule, it becomes available in the
Schedule dropdown box of repository settings.
Example: Every Monday Morning Schedule
Let’s create
Every Monday Morning schedule for weekly backups step-by-step:
- Open the
Schedule Managerfrom the
Main Menu.
- Click the
Add a new schedulebutton at the bottom right corner, it will open the
Add Scheduledialog.
- Type “Every Monday Morning” into the
Schedule nametext box
- Choose “4” in the
Specific hour (choose one or many)section. It means backup will start at 4 am.
- Switch to the
Daytab
- Choose “Monday” in the
Specific day of week (choose one or many)section. It means backup will start Monday only.
- Click the
Savebutton, and it will close the
Add Scheduledialog.
- Click the
Closebutton, and it will close the
Schedule Managerdialog.
- Find your repository, open repository settings and change
Scheduleto
Every Monday Morning.
- Save repository settings. All done.
| https://docs.cloudback.it/features/custom-schedule/ | 2022-08-08T04:14:39 | CC-MAIN-2022-33 | 1659882570765.6 | [array(['/static/features/custom-schedule-1.png', 'Card View'],
dtype=object)
array(['/static/features/custom-schedule-2.png', 'Card View'],
dtype=object)
array(['/static/features/custom-schedule-3.png', 'Card View'],
dtype=object)
array(['/static/features/custom-schedule-4.png', 'Card View'],
dtype=object)
array(['/static/features/custom-schedule-5.png', 'Card View'],
dtype=object) ] | docs.cloudback.it |
4.8.2. Linear Density —
MDAnalysis.analysis.lineardensity¶
A tool to compute mass and charge density profiles along the three cartesian axes [xyz] of the simulation cell. Works only for orthorombic, fixed volume cells (thus for simulations in canonical NVT ensemble).
- class
MDAnalysis.analysis.lineardensity.
LinearDensity(select, grouping='atoms', binsize=0.25, **kwargs)[source]¶
Linear density profile
Example
First create a
LinearDensityobject by supplying a selection, then use the
run()method. Finally access the results stored in results, i.e. the mass density in the x direction.
ldens = LinearDensity(selection) ldens.run() print(ldens.results.x.pos)
Changed in version 2.0.0: Results are now instances of
Resultsallowing access via key and attribute. | https://docs.mdanalysis.org/2.0.0-dev0/documentation_pages/analysis/lineardensity.html | 2022-08-08T03:34:23 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs.mdanalysis.org |
Managing Browser History Using Client Script
As a page developer, you can manage browser history entries and provide logical navigation by using the ScriptManager and ScriptManagerProxy server controls. You can also manage browser history through client script. You can enable history support in the client through the ScriptManager control. This generates client objects that you can use to manage browser history.
A history point is a logical navigation point in the Web application that can be represented through state information. The state information can be used to restore the Web application to a previous state, either directly with state data or through an identifier to state information that is stored elsewhere.
History points are stored in the browser's history stack only as URLs. History state is managed as data in a query string or as a URL fragment value that is marked with the "#" character. Because of size restrictions on URLs, the state information that you create must be as small as possible.
The following example shows a URL that contains enough history point data to identify the state. From this, the application can re-create the page at that state.
When a user clicks the browser's Back button, the browser navigates through previously-viewed URLs, which will include URLs that contain history-point state. Client code in the Web page detects that the URL contains history state data and raises the client Sys.Application.navigate event. You can handle the event to reset the application to the state whose information is contained in the parameter values that are passed to the event.
Note
To work with the example code in this topic, you will need Visual Studio 2008 Service Pack 1 or a later release.
Enabling Browser History Management
In order to use history management, you must enable it through the ScriptManager server control. By default, history support is not enabled. When history is enabled, it is implemented differently for each browser. For Internet Explorer, an iframe element is rendered to the client, which can cause an additional request to the server. The model is therefore an opt-in approach.
The following example shows how to enable history declaratively through the ScriptManager control.
<asp:ScriptManager
Creating Browser History Points
To create a browser history point, you call the Sys.Application.addHistoryPoint method. This method lets you define the history entry, its title, and any state that is required. You can use the state data to re-create the state of the page when a subsequent history navigation event is raised.
When you add a history point, or when the page is navigated and contains history state in the URL, the Sys.Application.navigate event is raised. This provides an event in the client that notifies you that history state has changed. You can handle the navigate event to re-create objects by using state data or to perform other operations.
The following example shows how you can manage history points in client code.
<html xmlns=""> <head id="Head1" runat="server"> <title>Microsoft ASP.NET 3.5 Extensions</title> <link href="../../include/qsstyle.css" type="text/css" rel="Stylesheet" /> <script type="text/javascript"> function page_init() { Sys.Application.add_navigate(onStateChanged); var cb1 = $get('clientButton1'); var cb2 = $get('clientButton2'); var cb3 = $get('clientButton3'); $addHandler(cb1, "click", clientClick); cb1.dispose = function() { $clearHandlers(cb1); } $addHandler(cb2, "click", clientClick); cb2.dispose = function() { $clearHandlers(cb2); } $addHandler(cb3, "click", clientClick); cb3.dispose = function() { $clearHandlers(cb3); } } function onStateChanged(sender, e) { // When the page is navigated, this event is raised. var val = parseInt(e.get_state().s || '0'); Sys.Debug.trace("Navigated to state " + val); $get("div2").innerHTML = val; } function clientClick(e) { // Set a history point in client script. var val = parseInt(e.target.value); Sys.Application.addHistoryPoint({s: val}, "Click Button:" + val); Sys.Debug.trace("History point added: " + val); } </script> </head> <body> <form id="form1" runat="server"> <div> <asp:ScriptManager <script type="text/javascript"> Sys.Application.add_init(page_init); </script> <h2> Microsoft ASP.NET 3.5 Extensions: Managing Browser History with Client Script</h2> <p /> <div id="Div1" class="new"> <p> This sample shows:</p> <ol> <li>The <code>Sys.Application</code> object and the <code>navigate</code> event and <code>addHistoryPoint</code> method.</li> <li>The <code>addHistoryPoint</code> method demonstrates addition of titles.</li> </ol> </div> <p> </p> <h2>Example 1: Managing browser history in client script</h2> <p>This example includes three buttons. The handler for each button's <code>click</code> event sets navigation history points using the <code>Sys.Application</code> object. The script used here, makes use of the <code>Sys.Debug</code> class to dump trace information to the TEXTAREA at the bottom of the page. </p> <p>When you click the buttons, and history points are added, you will be able to see the list of history entries and their titles in the "Recent Pages" drop-down in Internet Explorer, for example. </P> <p>To see history in action, perform the following steps:</p> <ol> <li>Press <b>1</b>. See the trace output.</li> <li>Press <b>3</b>. See the trace output.</li> <li>Press <b>2</b>. See the trace output.</li> <li>Press the browser's Back button. Notice that the page is refreshed with previous data and that trace information shows this.</li> </ol> <div id="div2" class="box">0</div><p></p> <input type="button" id="clientButton1" value="1" /> <input type="button" id="clientButton2" value="2" /> <input type="button" id="clientButton3" value="3" /> <br /><br /> <textarea id="TraceConsole" cols="40" rows="5"></textarea> </div> </form> </body> </html> | https://docs.microsoft.com/en-us/previous-versions/cc488538(v=vs.140) | 2022-08-08T04:53:58 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs.microsoft.com |
A dummy file system is created to support OS upgrade
Commands used for OS upgrade are
image install ftp://<user>@<ftp server ip>/<image>
image install scp://<user>@<scp server ip>/<image>
Copy OS image not supported. Only full install supported.
Reboot after OS upgrade makes use of ‘image switch’ and makes the newly installed OS as the primary OS. | https://docs.vmware.com/en/VMware-Smart-Assurance/10.1.8/ncm-dsr-support-matrix-1018/GUID-EB9818F5-CFDD-46F1-9EBB-2E5C551483AF.html | 2022-08-08T04:37:53 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs.vmware.com |
Events
The Events section controls periodic events that the server will execute.
Event parameters
Trigger Tab
Only trigger this timer once :
Controls whether the timer will trigger once or multiple times.
Trigger Times :
Trigger at this time
Trigger between these times at the specified interval
This timer triggers on the following days of the week :
This timer triggers on the following days of the month :
Action Tab
Shutdown FTGate and restart after given interval
Execute enabled tasks (in sequence) :
Network profile
Run the following script
Backup configuration
Start AutoUpdate | http://docs.ftgate.com/ftgate-documentation/web-admin-interface/events/ | 2022-08-08T04:05:12 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs.ftgate.com |
Archive
This page gives access to the message archive in FTGate and allows messages from the archive to be forwarded to other addresses. This can be used to locate messages between given time periods for specific address or with specific entries in the subject.
The page also contains a preview page which will display the first 2KB of the message.
There are more features available in the stand alone archive tool FTGate Archive.
Messages in the list may be selected and then redirected to a mailbox. This will cause the message to be delivered without any filtering being applied.
Finding archived messages
To locate a message select the start and end dates for the search and then enter text for the from, to and subject, then click find.
When searching for a message a partial match system is used.
e.g. to find messages from [email protected] you could search with the from line set to:
bob
[email protected]
ftgate.com
but NOT *@ftgate.com
Selecting Messages
There are a number of options to select messages for forwarding or resending.
Clicking on a message will select the specific message and deselect any other selected messages.
Clicking on a message, then holding down SHIFT and clicking on another will select both messages and the messages between them.
Clicking on a message, then holding down CTRL and clicking another message will add the message to the selection
Pressing CTRL-A will select all the messages. | http://docs.ftgate.com/ftgate-documentation/web-admin-interface/general-tab/archive/ | 2022-08-08T05:12:07 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs.ftgate.com |
Move
Transform
Curve
Curve Edit Tools >
Soft Edit
The SoftEditCrv command moves the curve area surrounding a selected point smoothly relative to the distance from selected point.
Distance
The distance, in model units, along the curve from the editing point over which the strength of the editing falls off smoothly.
Either enter a value or click on the curve to set the distance.
Copy
Specifies whether or not the objects are copied. A plus sign appears at the cursor when copy mode is on. The RememberCopyOptions command determines whether the selected option is used as the default.
FixEnds
Determines whether or not the position of the curve ends is fixed.
If the Distance value is larger than the distance to one or both ends of the curve, the end of the curve will be allowed to move.
If the Distance value is larger than the distance to one or both ends of the curve, the end of the curve will not be allowed to move.
With the Yes option, all control points are moved as they would be according to the normal falloff except the end control points. This can lead to an abrupt change in the curve near the end on dense curves.
Edit curves
Rhinoceros 7 © 2010-2022 Robert McNeel & Associates. 28-Jul-2022 | http://docs.mcneel.com/rhino/7/help/en-us/commands/softeditcrv.htm | 2022-08-08T04:38:28 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs.mcneel.com |
Microsoft Forefront Client Security
Updated: November 1, 2012
Applies To: Forefront Client Security
Welcome to the Microsoft Forefront Client Security technical library. The technical documentation for Client Security consists of the following categories:.
Contains information to help you deploy Client Security in your environment.
Contains information to help you manage and maintain your Client Security environment, including administration, disaster recovery, and performance and scalability.
This section provides guidance for diagnosing and resolving installation and operational issues.
Use this documentation to learn about creating and helping to secure a Client Security environment. This documentation discusses potential threats to each component of the Client Security infrastructure and makes recommendations for reducing those threats.
Use this documentation to learn about Client Security components and how you can use them to more effectively manage and troubleshoot Client Security.
Community Contributed Content
Contains information contributed by members of the community that can help you to manage Client Security in your environment.
Note
Information for end users:If you are running Client Security on a computer that is not a member of a corporate domain, you can find information about configuring Client Security at Protecting home computers. If you are having problems running or updating Client Security on a computer that is a member of a corporate domain, contact your help desk. | https://docs.microsoft.com/en-us/previous-versions/tn-archive/bb432630(v=technet.10)?redirectedfrom=MSDN | 2022-08-08T03:42:11 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs.microsoft.com |
As a final step before going live, an integration walkthrough will be performed (typically via a conference call). The goals of this call and typical topic areas are to:
- Confirm the functionality of the integration.
- Understand and document the customer signup and usage UI workflow for the integration.
- Establish that mutual contractual commitments have been met.
- Evaluate ease of deployment.
- Create a plan for fine-tuning where required.
Where deployment of New Relic requires working accounts or deployed applications, provision should be made in advance of the call for these elements to be in place. | https://docs.newrelic.com/docs/new-relic-partnerships/partner-integration-guide/getting-started/walkthrough-signoff/ | 2022-08-08T04:21:23 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs.newrelic.com |
Microsoft Learn for Educators
Program overview
Microsoft Learn for Educators enables you to bring Microsoft Official Curriculum and the instructor-led training materials into your classroom to build your students’ technical skills for the future. Eligible educators and faculty members at higher education institutions that offer accredited degree, diploma, certificate, or continuing or further education programs, such as colleges, universities, community colleges, polytechnics, and some STEM-focused secondary schools can access Microsoft ready-to-teach curriculum and teaching materials aligned to industry-recognized Microsoft Certifications. These certifications augment a student’s existing degree path and validate the skills needed to be successful across a variety of technical careers.
Program options and sign up
Are you a faculty member looking to build your students’ technical skills? Is your department or institution looking to transform its technology curriculum to prepare students for future careers? Look at our options to determine which approach is right for you.
1 free offerings
Program offerings
Microsoft Fundamentals Microsoft fundamentals curriculum provides foundational-level knowledge of Microsoft cloud and business application services. They are ideal for students starting or thinking about a career in technology.
Microsoft Advanced Role-Based Microsoft Advanced Role-Based curriculum provides associate level knowledge of Microsoft cloud and business application services. They are ideal for students looking to begin learning valuable job role skills.
Microsoft curriculum and teaching materials
Microsoft Learn for Educators provides access to a curriculum of Official Microsoft Learning Products. Each course covers Microsoft Certification exam objectives through lessons based on real-world scenarios and practice exercises. These materials have been designed for instructor-led and blended learning models and can be delivered remotely or in person. They directly align to Microsoft Learn online learning paths, which are collections of training modules, that are delivered wholesale or via the modular components.
- or PearsonVue testing center.
Lab Seats
Lab seats provide you with the ability to provision Azure-based labs to your students for enabling their hands-on experience and skills validation. Through this MSLE program benefit we provide you with free/discounted lab seats, as an opportunity to incorporate learning experiences above and beyond those found in the Microsoft Learn sandbox.
Note
This offer is subject to change in the future with or without prior notice
Office hours
By taking advantage of our Microsoft Learn for Educators office hours, you will have an opportunity to talk with our Microsoft Learn for Educators program team. Office hours are an opportunity for you to gain answers to questions about the program, seek advice, and learn about additional tools and opportunities to help you and your students be successful.
Training program manager
Once your school has joined the Microsoft Learn for Educators program, your institution may
Our program provides you with materials.
A learning path is available for curriculum integration support. It provides you with guidance on different approaches for implementing certification at the course and program level and the benefits it offers students.
Microsoft Learn for Educators community
Keep learning! Engage and collaborate with an exclusive, global community of educators. As part of the program, you will gain access to a Microsoft Teams-based online network of fellow educators and Microsoft team members. This is your opportunity to keep learning, to share, connect, discover best practices, and find support to make a huge impact in your classes. In the Microsoft Learn for Educators Teams community, we also share announcements, details about exclusive events, and other training opportunities to help you develop your technical acumen and new techniques for bringing technical training to life.
Important
Please note that access to the community requires you to opt in to the Microsoft Teams Educator community during sign option, Educator, visit the Microsoft Educator Center for free training and resources to enhance your use of technology in the classroom. | https://docs.microsoft.com/en-us/learn/educator-center/programs/msle/overview | 2022-08-08T05:51:26 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs.microsoft.com |
the comms plan and workers
need to have a phone number. If you haven’t got a comms”
There is also a new feature where you can send the messages to chosen groups.
| https://docs.okalone.net/send-an-sms-to-all-lone-workers/ | 2022-08-08T04:15:58 | CC-MAIN-2022-33 | 1659882570765.6 | [array(['https://docs.okalone.net/wp-content/uploads/2020/06/send-sms-to-all-lone-workers.jpg',
None], dtype=object) ] | docs.okalone.net |
Select all closed curves.
Select all curves.
Select all lines.
Select objects with the specified linetype.
Select all open curves.
Select all polylines.
Select all curves shorter than a specified length.
Shorten a curve to the new picked endpoints and select.
Rhino for Mac © 2010-2017 Robert McNeel & Associates. 24-Oct-2017 | https://docs.mcneel.com/rhino/mac/help/en-us/toolbarmap/select_curves_toolbar.htm | 2022-08-08T04:40:43 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs.mcneel.com |
: redeemDocRequest
function redeemDocRequest(uint256 docAmount) public
There is only one redeem request per user during a settlement. A new reedeem request is created if the user invokes it for the first time or has its value updated if it already exists.
Parameters of the operation
The docAmount parameter
It is the amount that the contract will use to create or update a DOCs redeem request. This parameter uses a precision of the type
reservePrecision
that contains 18 decimal places and can be greater than user's balance at request time, allowing to, for example, redeem all future user's DoCs. reedem request, but the system can not find its address as an active user for the current settlement. It is a very rare condition in which a transaction reverts with the error message:
This is not an active redeemer
. If this situation occurs then you can contact the
Money on Chain team
to help you.
Not allowed redeemer:
When a user tries to update a reedem redeemDocRequest operation has no commissions, but when the settlement runs, the total requested amount to redeem will pay commissions. This fee will be the same as the
REDEEM_DOC_FEES_RBTC
value. The commission fees are explained in
this
section.
Redeeming DoCs
On Settlement: alterRedeemRequestAmount
Last modified
9mo ago
Copy link
Outline
Parameters of the operation
The docAmount parameter
Gas limit and gas price
Possible failures
The contract is paused:
Settlement is not ready:
Not enough gas:
Not active redeemer:
Not allowed redeemer:
Commissions | https://docs.moneyonchain.com/main-rbtc-contract/integration-with-moc-platform/getting-docs/redeeming-docs/redeemdocrequest | 2022-08-08T03:49:43 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs.moneyonchain.com |
Watch these videos to see what you can do with Smart Folders.
You are here
Smart Folders videos
Sending feedback to the Alfresco documentation team
You don't appear to have JavaScript enabled in your browser. With JavaScript enabled, you can provide feedback to us using our simple form. Here are some instructions on how to enable JavaScript in your web browser. | http://docs.alfresco.com/5.1/topics/smart-video-tutorials.html | 2017-12-11T03:39:45 | CC-MAIN-2017-51 | 1512948512121.15 | [] | docs.alfresco.com |
Recently Viewed Topics
Copy the XSLT to the .audit
Once the XSL Transform works as intended, copy the XSLT lines of interest (lines 5-8 in this example) to the
.audit check.
xsl_stmt: "<xsl:template match=\"result\">"
xsl_stmt: "<xsl:for-each select=\"entry\">"
xsl_stmt: "+ <xsl:value-of select=\"name\"/>"
xsl_stmt: "</xsl:for-each>"
Each line of the custom XSL transform must be placed into its own
xsl_stmt element enclosed in double quotes. Since the
xslt_stmt element uses double quotes to encapsulate the
<xsl> statements, any double quotes used must be escaped.
Note: Escaping the double quotes is important and not doing so risks errors in check execution.
/usr/bin/xsltproc {XSLT file} {Source XML}
In the next step you can see several examples of properly escaped double quotes. | https://docs.tenable.com/nessus/compliancechecksreference/Content/CopyTheXSLTToTheAudit.htm | 2017-12-11T04:02:40 | CC-MAIN-2017-51 | 1512948512121.15 | [] | docs.tenable.com |
defmulti¶
defn¶
New in version 0.10.0.
defn lets you arity-overload a function by the given number of
args and/or kwargs. This version of
defn works with regular syntax and
with the arity overloaded one. Inspired by Clojures take on
defn.
=> (require [hy.contrib.multi [defn]]) => (defn fun ... ([a] "a") ... ([a b] "a b") ... ([a b c] "a b c")) => (fun 1) "a" => (fun 1 2) "a b" => (fun 1 2 3) "a b c" => (defn add [a b] ... (+ a b)) => (add 1 2) 3
defmulti¶
New in version 0.12.0.
defmulti,
defmethod and
default-method lets you define
multimethods where a dispatching function is used to select between different
implementations of the function. Inspired by Clojure's multimethod and based
on the code by Adam Bard.
=> (require [hy.contrib.multi [defmulti defmethod default-method]]) => (defmulti area [shape] ... "calculate area of a shape" ... (:type shape)) => (defmethod area "square" [square] ... (* (:width square) ... (:height square))) => (defmethod area "circle" [circle] ... (* (** (:radius circle) 2) ... 3.14)) => (default-method area [shape] ... 0) => (area {:type "circle" :radius 0.5}) 0.785 => (area {:type "square" :width 2 :height 2}) 4 => (area {:type "non-euclid rhomboid"}) 0
defmulti is used to define the initial multimethod with name, signature
and code that selects between different implementations. In the example,
multimethod expects a single input that is type of dictionary and contains
at least key :type. The value that corresponds to this key is returned and
is used to selected between different implementations.
defmethod defines a possible implementation for multimethod. It works
otherwise in the same way as
defn, but has an extra parameters
for specifying multimethod and which calls are routed to this specific
implementation. In the example, shapes with "square" as :type are routed to
first function and shapes with "circle" as :type are routed to second
function.
default-method specifies default implementation for multimethod that is
called when no other implementation matches.
Interfaces of multimethod and different implementation don't have to be exactly identical, as long as they're compatible enough. In practice this means that multimethod should accept the broadest range of parameters and different implementations can narrow them down.
=> (require [hy.contrib.multi [defmulti defmethod]]) => (defmulti fun [&rest args] ... (len args)) => (defmethod fun 1 [a] ... a) => (defmethod fun 2 [a b] ... (+ a b)) => (fun 1) 1 => (fun 1 2) 3 | https://hy.readthedocs.io/en/stable/contrib/multi.html | 2017-12-11T03:41:10 | CC-MAIN-2017-51 | 1512948512121.15 | [] | hy.readthedocs.io |
Toggle navigation
Documentation Home
Online Store
Support
All Documentation for Licenses keyword
+ Filter by product
MemberMouse WooCommerce Plus
Wishlist Member Easy Digital Downloads Plus
Wishlist Member WooCommerce Plus
MemberMouse WooCommerce Plus
Can I use only the bundle, without the MemberMouse WooCommerce Plus plugin?
Wishlist Member Easy Digital Downloads Plus
Can I use the External Membership Sites Add-Ons Bundle without Wishlist Member Easy Digital Downloads Plus?
In what cases do I need to purchase the Remote Access add-on separately?
Do I need to purchase the External Membership Sites Ass-Ons Bundle?
Wishlist Member WooCommerce Plus | http://docs.happyplugins.com/doc/keyword/licenses | 2017-12-11T03:56:58 | CC-MAIN-2017-51 | 1512948512121.15 | [] | docs.happyplugins.com |
Development¶
Accessing the GIT Repository¶
We use the revision control system git to develop Joern. If you want to participate in development or test the development version, you can clone the git repository by issuing the following command:
git clone
Optionally, change to the branch of interest. For example, to test the development version, issue the following:
git checkout dev
If you want to report issues or suggest new features, please do so via . For fixes, please fork the repository and issue a pull request or alternatively send a diff to the developers by mail.
Modifying Grammar Definitions¶
When building Joern, pre-generated versions of the parsers will be used by default. This is fine in most cases, however, if you want to make changes to the grammar definition files, you will need to regenerate parsers using the antlr4 tool. For this purpose, it is highly recommended to use the optimized version of ANTLR4 to gain maximum performance.
To build the optimized version of ANTLR4, do the following:
git clone cd antlr4 mvn -N install mvn -DskipTests=true -Dgpg.skip=true -Psonatype-oss-release -Djava6.home=$PATH_TO_JRE install
If the last step gives you an error, try building without
-Psonatype-oss-release.
mvn -DskipTests=true -Dgpg.skip=true -Djava6.home=$PATH_TO_JRE install
Next, copy the antlr4 tool and runtime to the following locations:
cp tool/target/antlr4-$VERSION-complete.jar $JOERN/ cp runtime/Java/target/antlr4-runtime-$VERSION-SNAPSHOT.jar $JOERN/lib
where $JOERN is the directory containing the $JOERN installation.
Parsers can then be regenerated by executing the script
$JOERN/genParsers.sh. | http://joern.readthedocs.io/en/latest/development.html | 2018-03-17T14:44:33 | CC-MAIN-2018-13 | 1521257645177.12 | [] | joern.readthedocs.io |
For Developers¶
This section of the documentation covers things that will be useful for those already contributing to NFLWin.
Note
Unless stated otherwise assume that all filepaths given in this section start at the root directory for the repo.
Testing Documentation¶
Documentation for NFLWin is hosted at Read the Docs, and is built automatically when changes are made on the master branch or a release is cut. However, oftentimes it’s valuable to display NFLWin’s documentation locally as you’re writing. To do this, run the following:
$ ./build_local_documentation.sh
When that command finishes, open up
doc/index.html in your browser of choice to see the site.
Updating the Default Model¶
NFLWin comes with a pre-trained model, but if the code generating that model is updated the model itself is not. So you have to update it yourself. The good news, however, is that there’s a script for that:
$ python make_default_model.py
Note
This script hardcodes in the seasons to use for training and testing samples. After each season those will likely need to be updated to use the most up-to-date data.
Note
This script requires
matplotlib in order to run, as it produces a
validation plot for the documentation.
Cutting a New Release¶
NFLWin uses semantic versioning, which basically boils down to the following (taken directly from the webpage linked earlier in this sentence):
Given a version number MAJOR.MINOR.PATCH, increment the:
- MAJOR version when you make incompatible API changes,
- MINOR version when you add functionality in a backwards-compatible manner, and
- PATCH version when you make backwards-compatible bug fixes.
Basically, unless you change something drastic you leave the major version alone (the exception being going to version 1.0.0, which indicates the first release where the interface is considered “stable”).
The trick here is to note that information about a new release must live in a few places:
- In
nflwin/_version.pyas the value of the
__version__variable.
- As a tagged commit.
- As a release on GitHub.
- As an upload to PyPI.
- (If necessary) as a documented release on Read the Docs.
Changing the version in one place but not in others can have relatively minor but fairly annoying consequences. To help manage the release cutting process there is a shell script that automates significant parts of this process:
$ ./increment_version.sh [major|minor|patch]
This script does a bunch of things, namely:
- Parse command line arguments to determine whether to increment major, minor, or patch version.
- Makes sure it’s not on the master branch.
- Makes sure there aren’t any changes that have been staged but not committed.
- Makes sure there aren’t any changes that have been committed but not pushed.
- Makes sure all unit tests pass.
- Compares current version in nflwin/_version.py to most recent git tag to make sure they’re the same.
- Figures out what the new version should be.
- Updates nflwin/_version.py to the new version.
- Uploads package to PyPI.
- Adds and commits nflwin/_version.py with commit message “bumped [TYPE] version to [VERSION]”, where [TYPE] is major, minor, or patch.
- Tags latest commit with version number (no ‘v’).
- Pushes commit and tag.
It will exit if anything returns with a non-zero exit status, and since it waits until the very end to upload anything to PyPI or GitHub if you do run into an error in most cases you can fix it and then just re-run the script.
The process for cutting a release is as follows:
- Make double sure that you’re on a branch that’s not
masterand you’re ready to cut a new release (general good practice is to branch off from master just for the purpose of making a new release).
- Run the
increment_version.shscript.
- Fix any errors, then rerun the script until it passes.
- Make a PR on GitHub into master, and merge it in (self-merge is ok if branch is just updating version).
- Make release notes for new release on GitHub.
- (If necessary) go to Read the Docs and activate the new release. | http://nflwin.readthedocs.io/en/stable/dev.html | 2018-03-17T14:31:35 | CC-MAIN-2018-13 | 1521257645177.12 | [] | nflwin.readthedocs.io |
Visual Studio Documentation
- Workloads
-
A full suite of tools for database developers to create solutions for SQL Server, Hadoop, and Azure ML.
Data science and analytical applications
Languages and tooling for creating datas science applications, including Python, R, and F#.
Office/SharePoint development
Create Office and SharePoint add-ins and solutions using C#, Visual Basic, and JavaScript.
- Mobile & Gaming
Create native or hybrid mobile apps that target Android, iOS, and Windows.
Mobile development
- Features
Develop
Write and manage your code using the code editor.
Build
Compile and build your source code.
Debug
Investigate and fix problems with your code.
Test
Organize your testing processes.
Deploy
Share your apps using Web Deploy, InstallShield, and Continuous Integration, and more.
Collaborate
Share code using version control technologies such as Git and TFVC.
Improve Performance
Identify bottlenecks and optimize code performance by using diagnostic tools.
Extend
Add your own functionality to the Visual Studio IDE to improve your development experience.
- Languages
Visual C#
A simple, modern, type safe, object oriented programming language used for building applications that run on the .NET Framework.
Visual Basic
A fast, easy to learn programming language you can use to create applications.
Visual C++
A powerful and flexible programming language and development environment for creating applications for Windows, Linux, iOS, and Android.
Visual F#
A strongly typed, cross platform programming language that is most often used as a cross platform CLI language, but can also be used to generate JavaScript and GPU code.. | https://docs.microsoft.com/en-us/visualstudio/ | 2017-01-16T12:53:31 | CC-MAIN-2017-04 | 1484560279176.20 | [] | docs.microsoft.com |
@PublicApi public interface JiraThreadLocalUtil
The main purpose of this component is to setup and clear
ThreadLocal variables that
can otherwise interfere with the smooth running of JIRA by leaking resources or allowing
stale cached information to survive between requests.
JiraServiceor as a
PluginJobdo not need to use this component, because the scheduler will perform the cleanup automatically as part of the service's execution lifecycle. However, any plugin that creates its own threads for background processing must use this component to guard its work. Prior to JIRA v6.0, the only way to do this was to access the
jira-coreclass
JiraThreadLocalUtilsdirectly. You must place the cleanup call to
postCall(Logger)or
postCall(Logger, WarningCallback)in a
finallyblock to guarantee correct behaviour. For example:
public void run() { jiraThreadLocalUtil.preCall(); try { // do runnable code here } finally { jiraThreadLocalUtil.postCall(log, myWarningCallback); } }
void preCall()
ThreadLocalenvironment for the runnable code to execute in.
void postCall(org.apache.log4j.Logger log)
postCall(log, null).
log- as for
postCall(Logger, WarningCallback)
void postCall(org.apache.log4j.Logger log, JiraThreadLocalUtil.WarningCallback warningCallback)
finallyblock to clear up
ThreadLocals once the runnable stuff has been done.
log- the log to write error messages to in casse of any problems
warningCallback- the callback to invoke in case where problems are detected after the runnable code is done running and its not cleaned up properly. This may be
null, in which case those problems are logged as errors. | https://docs.atlassian.com/software/jira/docs/api/6.3.4/com/atlassian/jira/util/thread/JiraThreadLocalUtil.html | 2021-09-16T20:02:51 | CC-MAIN-2021-39 | 1631780053717.37 | [] | docs.atlassian.com |
Using the default web-based client
The default web-based browser client simulates a native secure shell or remote desktop connections that you can open from within the Admin Portal.
The following diagram illustrates the basic flow of operation for logging on using the default web browser client.
The network infrastructure might be internet connectivity for access to Centrify-managed tenants, an internal corporate network inside or outside of a firewall, or a private or public cloud instance that you manage for your organization.
Operationally, using the default web-based browser client is similar to using any other SSH or RDP program. Most features work as you would expect.
Working with SSH sessions
Logging on to a target Linux or UNIX server or to a network device that supports SSH opens a new web-based SSH session terminal. You can then resize the session window by dragging its borders, maximize or minimize the display area while the session is open, or close the window to end the session. You must use a mouse to copy and paste in the secure shell, however, because Ctrl‑C is used to terminate operations in UNIX‑based environments.
Working with RDP sessions
Logging on to a target Windows system opens a new web-based RDP connection. You then can resize the window by dragging its borders, maximize or minimize the display area while the session is open, or close the window to end the session.
Menus and keyboard shortcuts operate in the same way as when you log on locally to a Windows computer.
However, there are some features that you might not be able to use when you access a target system with a remote desktop session and the default web-based browser RDP client. For example, the following features are not supported:
- Printer redirects
- Audio
- Drive redirects
- COM port redirects
Changing the display size for web-based client sessions
You can set a user preference to specify the default window size for remote sessions to adjust to different display requirements. For example, if you are viewing sessions using a tablet or a computer with a small monitor you might want to change the display size to suit a smaller screen than when you are working with a full-scale desktop monitor.
If you have administrative rights for the Privileged Access Service, you can change the window size for remote sessions from the Admin Portal by setting a user preference.
For more information about changing the window size for web-based client sessions, see Selecting user preferences.
Web-based RDP browser client keyboard shortcuts
The following table shows the web-based RDP browser client/desktop app keyboard shortcuts and how they correspond to standard Windows keyboard shortcuts. They also apply to Windows desktop applications launched via Admin Portal. | https://docs.centrify.com/Content/Infrastructure/remote/RemoteClientDefaultWeb.htm | 2021-09-16T19:20:41 | CC-MAIN-2021-39 | 1631780053717.37 | [array(['../privileged-identity-mgmt/cps-images/Basic-flow-default-web-client.png',
None], dtype=object) ] | docs.centrify.com |
This Guide has shown us how we might configure a poller integration - polling data from the table API of a Personal Developer Instance (PDI) and then configure an inbound message to process that data. As such, we created the following records:
Poll Processor
Poller
Inbound Message
Fields
We also made further configuration changes to both the remote instance (PDI) and internal instance, in order to facilitate the following:
Message Identification
Bond Identification. | https://docs.sharelogic.com/unifi/integration-guides/incident-update-poller-guide/conclusion | 2021-09-16T17:55:46 | CC-MAIN-2021-39 | 1631780053717.37 | [] | docs.sharelogic.com |
WHEN clauses are processed sequentially.
The first WHEN clause search_condition_n that is TRUE returns the value of its associated scalar_expression_n as its result. The evaluation process then ends.
If no search_condition_n is TRUE, then scalar_expression_m, the argument of the ELSE clause, is the result.
If no ELSE clause is defined, then the default value for the result is NULL.
You can use a scalar subquery in the WHEN clause, THEN clause, and ELSE clause of a CASE expression. If you use a non-scalar subquery (a subquery that returns more than one row), a runtime error is returned.
Recommendation: Do not use the built-in functions CURRENT_DATE or CURRENT_TIMESTAMP in a CASE expression that is specified in a partitioning expression for a partitioned primary index (PPI). In this case, all rows are scanned during reconciliation. | https://docs.teradata.com/r/756LNiPSFdY~4JcCCcR5Cw/cDj4lHbfuessh2Kp~6~wRw | 2021-09-16T18:25:09 | CC-MAIN-2021-39 | 1631780053717.37 | [] | docs.teradata.com |
Every time a device updates a sensor value in a variable, a data-point or "dot" is created. Ubidots stores dots that come from your devices inside variables, and these stored dots have corresponding timestamps:
Ubidots Data Hierachy
Each dot contains these items:
Values
A numerical value. Ubidots accepts up to 16 floating-point length numbers.
{"value" : 34.87654974}
Timestamps
A timestamp, as best described here,. Please keep in mind that when you send data to Ubidots, you must set the timestamp in milliseconds; also, if you retrieve a dot's timestamp, it will be in milliseconds.
"timestamp" : 1537453824000
The above timestamp corresponds to Thursday, September 20, 2018 2:30:24 PM.
PRO-TIP: A useful tool to convert between Unix timestamps and human-readable dates is Epoch Converter.
Context
Numerical values are not the only data type supported; you can also store string or char data types inside what we call context. The context is a key-value object that allows you to store not only numerical but also string values. An example use of the context could be:
"context" : {"status" : "on", "weather" : "sunny"}
A context is commonly used to store the latitude and longitude coordinates of your device for GPS/tracking application use cases. All Ubidots maps uses the lat and lng keys from a dot's context to extract the coordinates of your device, in that way you just need to send a single dot with the coordinates values in the variable context to plot a map instead of sending separately both latitude and longitude in two different variables. Below you can find a typical context with coordinates values:
"context" : {"lat":-6.2, "lng":75.4, "weather" : "sunny"}
Please note that you can mix both string and numerical values in the context. If your application is for geo-localization purposes, make sure that the coordinates are set in decimal degrees. | https://docs.ubidots.com/reference/how-ubidots-works | 2021-09-16T19:31:35 | CC-MAIN-2021-39 | 1631780053717.37 | [array(['https://files.readme.io/8d8c3a6-ubidots-data.png',
'ubidots-data.png Ubidots Data Hierachy'], dtype=object)
array(['https://files.readme.io/8d8c3a6-ubidots-data.png',
'Click to close... Ubidots Data Hierachy'], dtype=object)] | docs.ubidots.com |
BasicTabControl
The BasicTabControl control displays a tab menu according to data provided by a two dimensional array. BasicTabControl doesn't rely on the Kentico database or API — you can use the control to navigate to pages outside of Kentico websites.
Tip: If you want to display a tab menu for pages on a Kentico website, you can use the CMSTabControl control, which has built-in support for loading Kentico documents.
Getting started
The following is a step-by-step tutorial that shows how to display a simple tab menu using the BasicTabControl control:
Create a new Web form in your web project.
Drag the BasicTabControl control from the toolbox onto the form.
The code of the BasicTabControl looks like this:
<cms:BasicTabControl
Add the following CSS styling code between the tags of the web form's <head> element:
<style type="text/css"> /* Tab menu class definitions */ .TabControlTable { FONT-SIZE: 14px; FONT-FAMILY: Arial,Verdana } .TabControlRow { } .TabControl { BORDER-RIGHT: black 1px solid; BORDER-TOP: black 1px solid; FONT-WEIGHT: bold; BACKGROUND: #e7e7ff; BORDER-LEFT: black 1px solid; CURSOR: pointer; COLOR: black } .TabControlSelected { BORDER-RIGHT: black 1px solid; BORDER-TOP: black 1px solid; FONT-WEIGHT: bold; BACKGROUND: #4a3c8c; BORDER-LEFT: black 1px solid; CURSOR: default; COLOR: white } .TabControlLinkSelected { COLOR: white; TEXT-DECORATION: none } .TabControlLink { COLOR: black; TEXT-DECORATION: none } .TabControlLeft { WIDTH: 1px } .TabControlRight { WIDTH: 0px } .TabControlSelectedLeft { WIDTH: 1px } .TabControlSelectedRight { WIDTH: 0px } </style>
This sets CSS styles that modify the appearance of the generated tab menu. The BasicTabControl control renders tabs even without any CSS classes, but they are extremely basic and not very user friendly.
The example uses the local <head> element only for convenience. If you wish to use the control on a Kentico website, it is recommended to define the CSS classes in the website's stylesheet through the CSS stylesheets application.
Add the following code just after the <cms:BasicTabControl> element to display a stripe under the tabs.
<hr style="width:100%; height:2px; margin-top:0px;" />
Switch to the web form's code behind and add the following reference:
using CMS.Controls.Configuration;
Add the following code to the Page_Load method:
// Defines and assigns the menu's tabs BasicTabControl1.AddTab(new TabItem { Text = " Home ", RedirectUrl = "" }); BasicTabControl1.AddTab(new TabItem { Text = " Features ", RedirectUrl = "" }); BasicTabControl1.AddTab(new TabItem { Text = " Download ", RedirectUrl = "", Tooltip = "Some tooltip" }); // Selects the first tab by default BasicTabControl1.SelectedTab = 0; BasicTabControl1.UrlTarget = "_blank";
- Save the changes to the web form and its code behind file.
- Right-click the web form in the Solution explorer and select View in Browser.
The resulting page displays a tab menu.
Configuration
You can set the following properties for the BasicTabControl:
Appearance and styling
The appearance of the BasicTabControl control is determined by CSS classes. You can use the following CSS classes to modify the design of the control: | https://docs.xperience.io/k8/references/kentico-controls/basic-controls/basictabcontrol | 2021-09-16T18:26:11 | CC-MAIN-2021-39 | 1631780053717.37 | [] | docs.xperience.io |
:
You can learn more about multiple node instances below.
If a task or process node is delayed for any reason, you can automatically take a number of actions to solve the problem.
Available escalations include:
Escalations can be triggered manually, or by configuring an Escalation Timer Event.
Right-click a node on the designer canvas and point to Escalations. Click Escalations). — or —
In the Configure [node] dialog box, click the Escalations tab. Any existing node Escalations are displayed.
Click the Add Escalation button
. The Level 1 Escalation options appear. (You can add multiple levels of escalation actions.)
Set the timer for the Escalation by clicking the Configure link in the Configure the Timer Event options. The Timer Event dialog box is displayed.
escalation_timer_#) that you can change if desired. Click the Setup tab. The Configure Timer Event and Timer Conditions group boxes are displayed.
In the Configure Timer Event group box, use the Timer Escalation Activation group of options to set a timed delay (either by entering a number or using a logical expression). This option is selected by default. To enter a timed delay, type a number in the field provided (or click the Expression Editor button to use an expression). Select Minute(s), Hour(s), Day(s), or Month(s) from the time-span list.
(Optional) To keep timer events from counting weekends, use the caladddays() function. For example, if you want to trigger an escalation four days after the start of the node – excluding weekends – select the Escalate at the date and time… option. Type the following expression in the text field.
Any non-working days defined in the Process Calendar are excluded from the escalation timer when the caladddays function is used. (Weekends are specified as non-working days on the process calendar by default.) See Process Calendar Settings..
Type the body of your escalation message where the
Task Escalation Notice text is displayed.
See the Send Message Event help topic for more information on configuring a message escalation.
You can execute any activity multiple times in the same process flow by using the Multiple Node Instances (MNI) functionality.
For example, you might spawn multiple tasks for the same activity when:
While using any process nodes other than the subprocess node, the same activity can only be activated up to 1,000 times. Additional instances can be allowed if designated by your server administrator. For more information about configuring the maximum activity instance value, see Post-Install Configurations.
For subprocess nodes, you can run more instances than the configured Maximum Activity Instances Value up to 150,000 instances. This can be helpful if you are using robotic process automation (RPA) and have a robotic task in a subprocess that needs to run more than 1000 times..
When multiple instances are configured for an activity, three lines display at the bottom of the node icon.
Multiple activity instances can be executed sequentially or in parallel. Re-execution of an activity creates a new instance, even if the previous instance is not finished processing.
Parallel Execution: All instances are activated simultaneously. They do not have to complete in the same order they were activated.
Sequential Execution: The instances must be completed in the same order that they were activated.
The steps for configuring multiple node instances are listed in the Other Tab of the node's properties dialogue box.
At times, you may need to execute an activity repeatedly by creating a loop in the process flow and routed the process flow back to the same activity to execute it again. This type of process activity execution is referred to as a looping process flow. configure a looping process flow to spawn based on a definite number of instances or base the number on a process variable.
NOTE: If a process flow reaches a node configured to spawn a number of instances based on an array length or PV of type Number - Integer and the value at the time is empty, null, or zero, the process will pause by exception and throw an error requiring you to resume the process as needed. This type of process flow can be useful when using a subprocess to tabulate report data. When the activity is a subprocess, a new process is started each time the activity is activated.
You can determine the number of times that a flow token repeats a loop by placing a gateway in the loop.
See also: Gateways and Script Tasks).
CAUTION: box, box. automatically made:
To configure an attended node as a Quick Task, right-click the node in the designer canvas and select Properties. The General tab of the dialog that opens.
You can also add a Quick Task when editing a running process.: is displayed (PVs). In such cases, it is possible to have one instance of a flow overwrite the value in another. To avoid this problem, shield the value passed between each node (in a multi-instance flow) using the Keep process variables synchronized option on the Flow Properties dialog box, which is configured by double-clicking the flow connector on your process model.
When the Keep process variables synchronized option is selected, all PV PVs in outgoing flows are protected from being overwritten by each other.
The overwrite-protection is distinct for each instance, but it does not place a lock on the process variable (meaning that other flows can still write to the PV if you have another using the same variables). This feature also triggers an update to the PV prior to following the flow.
All PVs used during the output phase of a node are shielded from overwrites, when this option is enabled. These include PVs used by expressions in Rules, Timers, and other event conditions.
When the Keep process variables synchronized option is selected, if a PV is used in the Output phase of node execution, it is also protected from overwrites across any subsequently shielded flows. The following nodes frequently consume PVs during the Output phase.
Rule Event: When a protected flow arrives at a Rule Event node, the Rule Conditions are evaluated immediately. If the Rule Event is false (and does not execute) the protected PV value is discarded.
Timer Event: A Timer Event might be triggered during Output; however, if a Timer Event condition is false (and does not execute) the protected PV value is discarded. A true timer event PV that is consumed during the Output phase will be shielded from overwrites.
Gateways: When a protected flow arrives at a gateway, the expressions configured for the node are evaluated immediately. The protected PV value is also retained and made available to any subsequent flows.
Nodes with multiple incoming flows: When processes include gateways with multiple incoming flows with only one outgoing flow, only the PV values carried by a winning incoming flow are passed on to the subsequent node.
Scheduled Nodes: When a protected flow arrives at scheduled node, the protected value is stored until the node executes.
Receive Message Events: Regardless of the configuration setting, PVs will always synchronize across flows on the output flow for a Receive Message Event node.
Protected PV values also keep their shielding when associated with a reassigned task. Any subsequent assignees are supplied the same data as the initial assignee.
As the synchronized option only applies to process flows, you might need to write the value from one shielded flow to a Node Input, then to a Node Output, then to the next flow. This would ensure that the PV value cannot be overwritten during node execution.
If you need to shield more than one flow, we recommend that you put the related nodes into a subprocess.
During the Output phase, a PV's value is written from the Node Output at the same time it is passed to the outgoing flow. You can further shield the PV value in other flows by creating a new PV from the existing value.
To shield a process variable's value through the Start, Run, and Output phases of a node's execution:
Type an expression in the Value field, using the following syntax.:
Formstab from under the task properties.
Outputssection under the
Datatab. A new result Submission Location of type LocationResult is now available.
Submission Locationoutput. From the Result Properties section, click on the
New process variableicon.
When the user first runs any location-enabled task from the mobile application, a one-time permission prompt is displayed requesting the user to grant location access to the application. User location is tracked and automatically submitted with the task form only if the user grants permission.
User location is only captured when the task is submitted from Appian Mobile application. To protect user privacy, a banner is displayed.
Script Taskfollowing the
Home Inspectiontask.
Outputssection under the
Datatab and click on the
New Custom Output.
Expressionbox from under
Expression Proeprties, | https://docs.appian.com/suite/help/21.3/Process_Model_Recipes.html | 2021-09-16T19:35:00 | CC-MAIN-2021-39 | 1631780053717.37 | [] | docs.appian.com |
You don't have limitation adding Targets, to add a new target you can open the Menu Target/New Target.
Here we can fill all about the target, like the Target Name, multiples Root Domains, Bug Bounty Program URL with the description of the program, like and more.
The only required fields are the Target Name and at least one Root Domain, the other fields are optional.
Target Name example: Yahoo, GitLab, Shopify
Root Doamin example: yahoo.com, gitlab.com, shopify.com
After you add the Target you can navigate to the Target and Root Domains either using the Target Menu or the list of Targets in the Home page.
Each Root Domain contains the tags Subdomains, Agents, Notes, and General.
Subdomains tag allow you to see the list of subdomains below to Root Domain after running any Agent like Sublist3r. To know more about Subdomains check this link.
Agents Tag contains the list of Agents added and there you can run the Agents, see the Terminal, the Logs, and Stop the Agent running. To know more about the Agents check this link.
Notes tag allows you to add notes about the Target. To know more about the Notes check this link.
General Tag contains multiples entries about Target like the number of subdomains, the number of Agents, etc. We are going to continue adding different metrics there.
We can edit the Target and remove it with all the subdomains and services below to that Target going directly to the Target. | https://docs.reconness.com/target/add-target | 2021-09-16T17:51:28 | CC-MAIN-2021-39 | 1631780053717.37 | [] | docs.reconness.com |
WP Job Openings PRO is an add-on plugin with a pack of features that makes WP Job Openings a powerful recruitment tool.
The plugin helps to reduce the time spent on administrative tasks while hiring, keep track of all applicants, resumes and notes. Engage your candidates more in the process by sending notifications such as welcome and rejection emails.
WP Job Openings PRO or ‘The PRO Pack’ makes the hiring process faster and simpler. Using the plugin you can set up, list and start accepting applications within a matter of minutes.
Build your own job application form
Shortlist, Reject and Select Applicants
Rate and Filter Applications
Custom Email Notifications & Templates
Notes and Activity Log | https://docs.wpjobopenings.com/pro-pack-for-wp-job-openings/introduction | 2021-09-16T18:14:48 | CC-MAIN-2021-39 | 1631780053717.37 | [] | docs.wpjobopenings.com |
ESP32-DevKitM-1¶
This user guide will help you get started with ESP32-DevKitM-1 and will also provide more in-depth information.
ESP32-DevKitM-1 is an ESP32-MINI-1(1U) document consists of the following major sections:
Getting started: Provides an overview of the ESP32-DevKitM-1 and hardware/software setup instructions to get started.
Hardware reference: Provides more detailed information about the ESP32-DevKitM-1’s hardware.
Related Documents: Gives links to related documentaiton.
Getting Started¶
This section describes how to get started with ESP32-DevKitM-1. It begins with a few introductory sections about the ESP32-DevKitM-1, then Section Start Application Development provides instructions on how to do the initial hardware setup and then how to flash firmware onto the ESP32-DevKitM-1.
Overview¶
This is a small and convenient development board that features:
ESP32-MINI-1, or ESP32-MINI-1U module
USB-to-serial programming interface that also provides power supply for the board
pin headers
pushbuttons for reset and activation of Firmware Download mode
a few other components
Contents and Packaging¶
Retail orders¶
If you order a few samples, each ESP32-DevKitM following figure and the table below describe the key components, interfaces and controls of the ESP32-DevKitM-1 board. We take the board with a ESP32-MINI-1 module as an example in the following sections.
Start Application Development¶
Before powering up your ESP32-DevKitM-1, please make sure that it is in good condition with no obvious signs of damage.
Required Hardware¶
ESP32-DevKitM-1.
Attention
ESP32-DevKitM-1 is a board with a single core module, please enable single core mode (CONFIG_FREERTOS_UNICORE) in menuconfig before flashing your applications.
Hardware Reference¶
Block Diagram¶
A block diagram below shows the components of ESP32-DevKitM-1 and their interconnections.
Power Source Select¶
There are three mutually exclusive ways to provide power to the board:
Micro USB port, default power supply
5V and GND header pins
3V3 and GND header pins
Warning
The power supply must be provided using one and only one of the options above, otherwise the board and/or the power supply source can be damaged.
Power supply by micro USB port is recommended.
Pin Descriptions¶
The table below provides the Name and Function of pins on both sides of the board. For peripheral pin configurations, please refer to ESP32 Datasheet. | https://docs.espressif.com/projects/esp-idf/en/latest/esp32/hw-reference/esp32/user-guide-devkitm-1.html | 2021-09-16T18:40:59 | CC-MAIN-2021-39 | 1631780053717.37 | [] | docs.espressif.com |
Date: Wed, 4 May 2011 12:13:41 -0400 From: "PowerMath" <[email protected]> To: "questions" <[email protected]> Subject: PowerMath Newsletter Message-ID: <[email protected]>
Next in thread | Raw E-Mail | Index | Archive | Help
PowerMath Newsletter =20 =20 Mathematics Made Easy =20 May 2011 =20 =20 Animated Lesson-Shows : Courses in Arithmetic, Algebra, Geometry, Trigonometry, Pre-Calculus, = Calculus, Probability and Statistics covering Middle-School, High-School and University level Mathematics, = for Students and Instructors. =20 PowerMath has been featured on among others . . . =20 =20 Exclusion: If you no longer wish to receive email from PowerClassroom Software in= voke < delete > Please do NOT reply to this e-mail if you wish to un= subscribe, instead use the instructions above. Any/all information c= ollected from our customers will not be sold, shared, or rented.=20
Want to link to this message? Use this URL: <> | https://docs.freebsd.org/cgi/getmsg.cgi?fetch=395238+0+archive/2011/freebsd-questions/20110508.freebsd-questions | 2021-09-16T18:59:24 | CC-MAIN-2021-39 | 1631780053717.37 | [] | docs.freebsd.org |
CertOpenSystemStoreW function (wincrypt.h)
The CertOpenSystemStore function is a simplified function that opens the most common system certificate store. To open certificate stores with more complex requirements, such as file-based or memory-based stores, use CertOpenStore.
Syntax
HCERTSTORE CertOpenSystemStoreW( HCRYPTPROV_LEGACY hProv, LPCWSTR szSubsystemProtocol );
Parameters
hProv
This parameter is not used and should be set to NULL.
Windows Server 2003 and Windows XP: A handle of a cryptographic service provider (CSP). Set hProv to NULL to use the default CSP. If hProv is not NULL, it must be a CSP handle created by using the CryptAcquireContext function.This parameter's data type is HCRYPTPROV.
szSubsystemProtocol
A string that names a system store. If the system store name provided in this parameter is not the name of an existing system store, a new system store will be created and used. CertEnumSystemStore can be used to list the names of existing system stores. Some example system stores are listed in the following table.
Return value
If the function succeeds, the function returns a handle to the certificate store.
If the function fails, it returns NULL. For extended error information, call GetLastError.
Remarks
Only current user certificates are accessible using this method, not the local machine store.
After the system store is opened, all the standard certificate store functions can be used to manipulate the certificates.
After use, the store should be closed by using CertCloseStore.
For more information about the stores that are automatically migrated, see Certificate Store Migration.
Examples
The following example shows a simplified method for opening the most common system certificate stores. For another example that uses this function, see Example C Program: Certificate Store Operations.
//-------------------------------------------------------------------- // Declare and initialize variables. HCERTSTORE hSystemStore; // system store handle //-------------------------------------------------------------------- // Open the CA system certificate store. The same call can be // used with the name of a different system store, such as My or Root, // as the second parameter. if(hSystemStore = CertOpenSystemStore( 0, "CA")) { printf("The CA system store is open. Continue.\n"); } else { printf("The CA system store did not open.\n"); exit(1); } // Use the store as needed. // ... // When done using the store, close it. if(!CertCloseStore(hSystemStore, 0)) { printf("Unable to close the CA system store.\n"); exit(1); }
Note
The wincrypt.h header defines CertOpenSystemStore
CertAddEncodedCertificateToStore
CertGetCRLContextProperty
Certificate Store Functions | https://docs.microsoft.com/en-us/windows/win32/api/wincrypt/nf-wincrypt-certopensystemstorew | 2021-09-16T18:45:40 | CC-MAIN-2021-39 | 1631780053717.37 | [] | docs.microsoft.com |
Date: Sat, 11 Feb 2012 16:45:07 -0500 From: Michael Powell <[email protected]> To: [email protected] Subject: Re: Can clang compile RELENG_9? Message-ID: <[email protected]> References: <[email protected]>
Next in thread | Previous in thread | Raw E-Mail | Index | Archive | Help
Dennis Glatting wrote: > I get errors when trying to compile RELENG_9 with clang. Is clag suppose > to work when it comes to compiling the OS or am I missing something: [snip] I can't speak to RELENG_9, but I have successfully rebuilt the RELEASE with CLANG (make/install world kernel). My /etc/make.conf as per instructions I found on the wiki: = This was with amd64, have not tried any 32 bit. With custom kernel as well. -Mike
Want to link to this message? Use this URL: <> | https://docs.freebsd.org/cgi/getmsg.cgi?fetch=1051077+0+archive/2012/freebsd-questions/20120212.freebsd-questions | 2021-09-16T19:59:22 | CC-MAIN-2021-39 | 1631780053717.37 | [] | docs.freebsd.org |
Set-Az
Network Security Group
Updates a network security group.
Syntax
Set-Az
Network Security Group -NetworkSecurityGroup <PSNetworkSecurityGroup> [-AsJob] [-DefaultProfile <IAzureContextContainer>] [-WhatIf] [-Confirm] [<CommonParameters>]
Description
The Set-AzNetworkSecurityGroup cmdlet updates a network security group.
Examples
Example 1: Update an existing network security group
PS C:\>Get-AzNetworkSecurityGroup -Name "Nsg1" -ResourceGroupName "Rg1" | Add-AzNetworkSecurityRuleConfig -Name "Rdp-Rule" -Description "Allow RDP" -Access "Allow" -Protocol "Tcp" -Direction "Inbound" -Priority 100 -SourceAddressPrefix "Internet" -SourcePortRange "*" -DestinationAddressPrefix "*" -DestinationPortRange "3389" | Set-AzNetworkSecurityGroup
This command gets the Azure network security group named Nsg1, and adds a network security rule named Rdp-Rule to allow Internet traffic on port 3389 to the retrieved network security group object using Add-AzNetworkSecurityRuleConfig. The command persists the modified Azure network security group using Set-AzNetworkSecurityGroup.
Parameters
Run cmdlet in the background
Prompts you for confirmation before running the cmdlet.
The credentials, account, tenant, and subscription used for communication with azure.
Specifies a network security group object representing the state to which the network security group should be set.
Shows what would happen if the cmdlet runs. The cmdlet is not run. | https://docs.microsoft.com/en-us/powershell/module/az.network/set-aznetworksecuritygroup?view=azps-6.4.0 | 2021-09-16T20:12:18 | CC-MAIN-2021-39 | 1631780053717.37 | [] | docs.microsoft.com |
5. Numerical integration of the HH model of the squid axon¶
Book chapters
See Chapter 2 Section 2 on general information about the Hodgkin-Huxley equations and models.
Python classes
The
hodgkin_huxley.HH module contains all code required for this exercise. It implements a Hodgkin-Huxley neuron model.
At the beginning of your exercise solutions, import the modules and run the demo function.
%matplotlib inline import brian2 as b2 import matplotlib.pyplot as plt import numpy as np from neurodynex3.hodgkin_huxley import HH from neurodynex3.tools import input_factory HH.getting_started()
5.1. Exercise: step current response¶
We study the response of a Hodgkin-Huxley neuron to different input currents. Have a look at the documentation of the functions
HH.simulate_HH_neuron() and
HH.plot_data() and the module
neurodynex3.tools.input_factory.
5.1.1. Question¶
What is the lowest step current amplitude \(I_{min}\) for generating at least one spike? Determine the value by trying different input amplitudes in the code fragment:
current = input_factory.get_step_current(5, 100, b2.ms, I_min *b2.uA) state_monitor = HH.simulate_HH_neuron(current, 120 * b2.ms) HH.plot_data(state_monitor, title="HH Neuron, minimal current")
5.2. Exercise: slow and fast ramp current¶
The minimal current to elicit a spike does not just depend on the amplitude \(I\) or on the total charge \(Q\) of the current, but on the “shape” of the current. Let’s see why:
5.2.1. Question¶
Inject a slow ramp current into a HH neuron. The current has amplitude
0A at t in [0, 5] ms and linearly increases to an amplitude of
12.0uAmp at
t=ramp_t_end. At
t>ramp_t_end, the current is set to
0A. Using the following code, reduce
slow_ramp_t_end to the maximal duration of the ramp current, such that the neuron does not spike. Make sure you simulate system for at least 20ms after the current stops.
- What is the membrane voltage at the time when the current injection stops (
t=slow_ramp_t_end)?
b2.defaultclock.dt = 0.02*b2.ms slow_ramp_t_end = 60 # no spike. make it shorter slow_ramp_current = input_factory.get_ramp_current(5, slow_ramp_t_end, b2.ms, 0.*b2.uA, 12.0*b2.uA) state_monitor = HH.simulate_HH_neuron(slow_ramp_current, 90 * b2.ms) idx_t_end = int(round(slow_ramp_t_end*b2.ms / b2.defaultclock.dt)) voltage_slow = state_monitor.vm[0,idx_t_end] print("voltage_slow={}".format(voltage_slow))
5.2.2. Question¶
Do the same as before but for a fast ramp current: The maximal amplitude at
t=ramp_t_end is
4.5uAmp. Start with
fast_ramp_t_end = 8ms and then increase it until you observe a spike.
Note: Technically the input current is implemented using a
TimedArray. For a short, steep ramp, the one millisecond discretization for the current is not high enough. You can create a finer resolution by setting the parameter
unit_time in the function
input_factory.get_ramp_current() (see next code block).
- What is the membrane voltage at the time when the current injection stops (
t=fast_ramp_t_end)?
b2.defaultclock.dt = 0.02*b2.ms fast_ramp_t_end = 80 # no spike. make it longer fast_ramp_current = input_factory.get_ramp_current(50, fast_ramp_t_end, 0.1*b2.ms, 0.*b2.uA, 4.5*b2.uA) state_monitor = HH.simulate_HH_neuron(fast_ramp_current, 40 * b2.ms) idx_t_end = int(round(fast_ramp_t_end*0.1*b2.ms / b2.defaultclock.dt)) voltage_fast = state_monitor.vm[0,idx_t_end] print("voltage_fast={}".format(voltage_fast))
5.2.3. Question¶
Use the function
HH.plot_data() to visualize the dynamics of the system for the fast and the slow case above. Discuss the differences between the two situations. Why are the two “threshold” voltages different? Link your observation to the gating variables \(m\), \(n\), and \(h\). Hint: have a look at Chapter 2 Figure 2.3.
5.3. Exercise: Rebound Spike¶
A HH neuron can spike not only if it receives a sufficiently strong depolarizing input current but also after a hyperpolarizing current. Such a spike is called a rebound spike.
5.3.1. Question¶
Inject a hyperpolarizing step current
I_amp = -1 uA for 20ms into the HH neuron. Simulate the neuron for 50 ms and plot the voltage trace and the gating variables. Repeat the simulation with
I_amp = -5 uA What is happening here? To which gating variable do you attribute this rebound spike?
5.4. Exercise: Brian implementation of a HH neuron¶
In this exercise you will learn to work with the Brian2 model equations. To do so, get the source code of the function
HH.simulate_HH_neuron() (follow the link to the documentation and then click on the [source] link). Copy the function code and paste it into your Jupyter Notebook. Change the function name from
simulate_HH_neuron to a name of your choice. Have a look at the source code and find the conductance parameters
gK and
gNa.
5.4.1. Question¶
In the source code of your function, change the density of sodium channels. Increase it by a factor of 1.4. Stimulate this modified neuron with a step current.
- What is the minimal current leading to repetitive spiking? Explain.
- Run a simulation with no input current to determine the resting potential of the neuron. Link your observation to the Goldman–Hodgkin–Katz voltage equation.
- If you increase the sodium conductance further, you can observe repetitive firing even in the absence of input, why? | https://neuronaldynamics-exercises.readthedocs.io/en/latest/exercises/hodgkin-huxley.html | 2021-09-16T19:27:39 | CC-MAIN-2021-39 | 1631780053717.37 | [] | neuronaldynamics-exercises.readthedocs.io |
Date: Thu, 11 Aug 2016 10:09:24 -0300 From: "Dr. Rolf Jansen" <[email protected]> To: [email protected] Subject: Re: your thoughts on a particualar ipfw action. Message-ID: <[email protected]> In-Reply-To: <20160811200425>
Next in thread | Previous in thread | Raw E-Mail | Index | Archive | Help
> Am 11.08.2016 um 08:06 schrieb Ian Smith <[email protected]>: > On Wed, 10 Aug 2016 -0300, Dr. Rolf Jansen wrote: >=20 > . >>> Am 08.08.2016 um 18:46 schrieb Dr. Rolf Jansen <[email protected]>: >>> I am almost finished with preparing the tools for geo-blocking and=20= >>> geo-routing at the firewall for submission to the FreeBSD ports. >=20 >>> I created a man file for the tools, see:=20 >>>, and I added the recent suggestions=20= >>> on rule number/action code per country code, namely, I changed the=20= >>> formula for the x-flag to the suggestion of Ian (value =3D offset +=20= >>> ((C1 - 'A')*26 + (C2 - 'A'))*10), and I added the idea of directly=20= >>> assigning a number to a country code in the argument for the t-flag=20= >>> ("CC=3Dnnnnn:..."). Furthermore, I removed the divert filter daemon=20= >>> from the Makefile. The source is still on GitHub, though, and can be=20= >>> re-vamped if necessary. Now I am going to prepare the Makefile for >>> the port. >=20 > Terrific work, Rolf! Something for everyone, although I'm guessing = the=20 > pf people are going to want a piece of the action, if they need any = more=20 >. >> I just submitted a PR asking to add the new port = 'sysutils/ipdbtools'. >> >=20 > Wonderful. The port maintainers were really quick. The port has been accepted and = has been already committed. >> I needed to change the name of the geoip tool, because GeoIP=AE is a >> registered trademark of MaxMind, Inc., see. The name=20= >=20 > I did wonder about that .. >=20 >> of the tool is now 'ipup' =3D abbreviated form of IP geo location = table=20 >> generation and look- UP , that is without the boring middle part :-D >>=20 >> Those, who used geoip already in some scripts, please excuse the >> inconvenience of needing to change the name. >=20 >> With the great help of Julian, I was able to improve the man file and >> the latest version can be read online: >>=20 >> >=20 > Nice manual and all. A few typos noted below (niggly Virgo = proofreader) I was tempted to get these last changes into my PR, but I am sorry, it = was too late for the initial release. I committed the corrected man file = to the GitHub repository, though, it will automatically go into the next = release of the ipdbtools, perhaps together with some additions for using = it together with pf(8) and route(8). > I must apologise for added exasperation earlier. I was tending = towards=20 > conflating several other ipfw issues under discussion (named states, = new=20 > state actions, and this). Sorry if I bumped you off course = momentarily,=20 > though I don't seem to have slowed you down too much .. Nothing, to be sorry about. I like discussions. > As a hopefully not unwelcome aside, it's a pity that IBM, of all = people,=20 > couldn't manage geo-blocking successfully for the Australian Census = the=20 > other night. Next time around we can offer them a working = geo-blocking=20 > firewall/router for a good deal less than the AU$9.6M we've paid IBM = :) >=20 > Census: How the Government says the website meltdown unfolded: > = ed/7712964 >=20 > A more tech-savvy article than ABC or other news media managed so far: > = stralian-census-shambles-explanation-depends-on-who-you-ask Well, I tend to believe that this has nothing to do with DoS attacks,. Who in the bureaucrats hell told them to go with one deadline for = everybody? For the census in Australia, I would have told the citizens = that everybody got an individual deadline which is his or her birthday = in 2016 -- problem solved. 
> =3D=3D=3D=3D=3D=3D=3D >=20 > It is suitable for inclusion into cron. "for invocation by cron" = maybe? OK, "invocation by" sounds better (for me) > ipdb_update.sh has IPRanges=3D"/usr/local/etc/ipdb/IPRanges" but some = (not=20 > all) mentions in the manpage use "IP-Ranges" with a hyphen, including=20= > the FILES section. Also the last one there repeats "*bst.v4" for = IPv6. OK, corrected >." > "from certain [countries?] we don't like .." OK > "piped into sort of [or?] a pre-processing command .." OK, I removed "sort of", leaving "... piped into a pre-processing = command ..." >=20 > =3D=3D=3D=3D=3D=3D=3D As already said, the corrections are not part of the initial release = into the FreeBSD ports, for this one it was too late. The man file on = GitHub is corrected already. Best regards Rolf
Want to link to this message? Use this URL: <> | https://docs.freebsd.org/cgi/getmsg.cgi?fetch=18128+0+archive/2016/freebsd-ipfw/20160814.freebsd-ipfw | 2021-09-16T18:05:04 | CC-MAIN-2021-39 | 1631780053717.37 | [] | docs.freebsd.org |
Programming devices
This document includes tips for installing a specific version of Device OS, typically during manufacturing, but can also be used for developer devices and testing.
There are a number of components that may need to be upgraded on a device, in addition to user firmware:
1There is only one version of the Argon NCP firmware in production, so it does not need to upgraded.
2The Tracker SoM NCP firmware was upgraded in Device OS 3.0.0, thus there are two versions in the field. It is not necessary or desirable to ever downgrade the Tracker NCP firmware, as the 3.0.0 version is backward compatible with earlier versions. See Argon and Tracker NCP for more information.
There are a number of ways the Device OS version can be upgraded:
OTA (over-the-air)
OTA is how most devices are upgraded in the field. This can also be done during manufacturing, either on the manufacturing line, or by the initial user, however it's more common to upgrade using one of the other methods, below, especially for cellular devices.
OTA will only ever upgrade the Device OS version; it will not downgrade. Device OS is generally backward compatible. If you built your user firmware targeting, say, 1.5.2, it would likely run correctly on Device OS 2.1.0. However, because of differences between versions, this is not always possible. Thus if you are relying on OTA, it is possible that devices from the factory could contain a newer version than you initially developed for.
USB (particle update)
Using the
particle update command in the Particle CLI is the most common way that end-users upgrade their devices, however there are a number of caveats:
- The version that is installed by the update command is built into a specific version of the CLI. In the absence of updating the CLI, it will always install that version, even if there is a newer version available. However, for manufacturing, this is likely the behavior that you want.
- The latest in the LTS branch (2.x, for example) is installed by the latest version of the CLI.
- The feature branch (3.x, for example) is never installed by the CLI. Thus if you require Device OS 3.x, then you cannot use the
particle updatemethod.
- The NCP (network coprocessor) is not upgraded by
particle update. This currently only affects Tracker One/Tracker SoM devices being upgraded to Device OS 3.0.0 or later.
Installing a specific version of the CLI
It's recommended that you first install the current version of the CLI using the CLI installer. This is necessary to make sure the application dependencies such as dfu-util are installed, as well as any required device drivers for Windows and a udev rule for Linux.
Then locate the version of the CLI you want in the particle-cli Github releases. For example, if you wanted Device OS 1.5.2, you'd want particle-cli v2.6.0. Expand Assets and download the .zip or .tar.gz for the source and extract it.
From the Terminal or Command Prompt window,
cd into the directory and install dependencies. For example:
cd ~/Downloads/particle-cli-2.6.0 npm install
To run commands using this specific version of the Particle CLI, instead of using the
particle command, instead use
npm start in this directory with the same command line options. For example:
npm start version npm start help npm start login npm start list npm start update
USB (Particle CLI, manually)
It is also possible to use the Particle CLI to manually program the device, which provides the most flexibility at the expense of a more complicated script. The recommended flow is:
- The device should be in listening mode (blinking dark blue). If not, use
particle usb start-listening.
- You may want to capture the Device ID and ICCID using
particle identify.
- Flash the bootloader using
particle flash --serial.
- Flash the NCP (Tracker with 3.x only) using
particle flash --serial.
- Put the device in DFU mode (blinking yellow) using
particle usb dfu.
- Flash the SoftDevice (Gen 3 only) using
particle flash --usb.
- Program system-parts in numerical order using
particle flash --usb
- Program the user firmware using
particle flash --usb
- Mark setup done (Gen 3) using
particle usb setup-done
You can download the necessary files for several common Device OS releases as a zip file for several common Device OS releases here:
It is recommended that you use the latest in release line. For example, if you are using 1.5.x, use 1.5.2 instead of 1.5.0. Ideally, you should be using the latest LTS release (2.1.0, for example), unless you need features in a feature release (3.1.0, for example).
All versions are available in Github Device OS Releases.
1 It's technically possible to flash the bootloader in DFU mode, however the process is complicated. Device Restore over USB uses this technique, however the CLI only supports this during
particle update and not when manually flashing the bootloader. It requires two dfu-util commands that vary between devices and resetting the device.
2 While it's possible to flash system parts in listening mode (--serial), using DFU mode is generally more reliable. If you are downgrading in --serial mode, there are also additional restrictions, as the system parts must be flashed in reverse numerical order. Also, you can run into a situation where the device reboots too early in --serial mode, and completes the upgrade OTA, which defeats the purpose of flashing over USB first.
USB (web-based)
Device Restore - USB is a convenient way to flash a specific version of Device OS, bootloader, SoftDevice, and user firmware onto a device over USB. It's normally used for individual developers, not manufacturing.
- There is limited browser support on desktop: Chrome, Opera, and Edge. It does not work with Firefox or Safari. Chrome is recommended.
- It should work on Chromebook, Mac, Linux, and Windows 10 on supported browsers.
- It does not require any additional software to be installed, but does require Internet access to download the requested binaries.
USB (dfu-util)
It is possible to directly script the dfu-util application to flash system parts and user firmware. However, this is not usually an ideal solution is that you can't easily flash the bootloader using dfu-util, and the commands are complicated. Also, without the Particle CLI, you'd have to manually switch the device modes between DFU and listening mode using buttons, which is tedious at best.
SWD/JTAG
SWD/JTAG is the recommended method for upgrading (or downgrading) Device OS, bootloader, Soft Device, and user firmware on devices where it is available. It requires a compatible SWD/JTAG programmer:
Device Compatibility - SWD/JTAG
The Tracker SoM does not contain a 10-pin SWD debugging connector on the SoM itself, but is exposed on the pads of the SoM and the connector could be added to your base board.
The 10-pin SWD debugging connector Tracker One is not easily accessible, as it not only requires opening the case, which would void the warranty and possibly affect the IP67 waterproof rating, but also the connector is not populated on the board (there are bare tinned pads where the SMD connector would be).
The B Series SoM does not contain the 10-pin SWD debugging connector on the SoM. There are pads on the bottom of the SoM that mate with pogo pins on the B Series evaluation board, which does have a 10-pin SWD debugging connector. You can either temporarily mount the SoM in a test fixture with a debugging connector, include the connector on your board, or use other methods.
The Boron and Argon both have 10-pin SWD debugging connectors on the Feather device.
The Electron, E Series, Photon and P1 have SWD on pins D7, D5, and optionally RESET. If these pins are available, you can program it by SWD. However, you may need to be able to change the device mode, so access to the MODE button, or to USB, may be helpful.
Hex files
If you want to use SWD/JTAG see the JTAG Reference. The most common method is to generate an Intel Hex File (.hex) containing all of the binaries, including your user firmware binary.
Using the Hex File Generator, you can take one of the base restore images, replace Tinker with your own user firmware, and download the resulting hex file. This makes it easy to flash devices with known firmware quickly and easily.
This is an excellent option if your contract manufacturer will be programming your devices as they will likely be able to use the .hex files and a SWD/JTAG programmer to easily reprogram your devices. This can be done with the standard JTAG programmer software and does not require the Particle toolchains or Particle CLI be installed. | https://docs.particle.io/reference/developer-tools/programming-devices/ | 2021-09-16T19:21:41 | CC-MAIN-2021-39 | 1631780053717.37 | [] | docs.particle.io |
You can set up your own package registry server if you want to control package access to a limited number of users, or if you need to set up package registry servers in a closed network organization.
When you have finished developing your package and you want to share it with other users, you have a number of different options: | https://docs.unity3d.com/2020.3/Documentation/Manual/cus-share.html | 2021-09-16T18:58:03 | CC-MAIN-2021-39 | 1631780053717.37 | [] | docs.unity3d.com |
Orders
In the Kentico E-commerce Solution, orders of products can be placed by both.
Additionally, you can track the life cycle of your store's orders through customizable order statuses.
Managing orders
The usual scenario is that your customers, both registered and anonymous, place their orders on the live site while going through the checkout process. existing ones.
- Save your work.
Recalculating orders
The system allows your on-line store administrators to modify orders placed by your customers. After changing values of selected order properties (while editing a selected order), the order is recalculated.
The system recalculates orders after:
- changing the shipping option (Shipping tab)
- adding product items (Items tab)
- changing the number of ordered product items (Items tab)
- removing product items (Items tab)
- adding or removing a coupon code (Items tab)
The system does not recalculate orders after modifying the billing address or shipping address, changing the.
Important
The system does NOT update the Unit price of order items when recalculating orders (to preserve the original unit price calculated when the customer made the order). If you make an order change that affects the unit price of an item (due to catalog or volume discounts), you need to edit the given item's unit price manually for the order (see Modifying order items).
For example, if you increase the number of ordered units to an amount that fulfills the conditions of a volume discount and Update the order, the system does not automatically reduce the unit price. If you wish to reduce the item's price, you need to edit the given order item on the Items tab, set the reduced Unit price according to the given volume discount, click Save & Close and then click OK.
Discount validity
When recalculating existing orders, the system evaluates the validity of discounts using the date and time when the order was created, not the current time. Expired discounts remain valid when recalculating orders that were created before the discount's Valid to date.
However, if a discount was manually disabled or deleted, the system removes it from orders during the recalculation.
Capturing payments
With some payment methods, the initial payment done by the customer only performs authorization (places a hold on the transaction amount), without actually transferring funds. The merchant finishes the transaction at a later time by capturing the payment, typically after the ordered products are physically shipped.
For example, the default PayPal and Authorize.Net payment gateways can be configured to use delayed capture transactions.
To capture the payment for an order:
Note: Capturing of funds is only possible for orders with a successfully authorized payment (that has not expired or already been completed).
- Open the Orders application.
- Edit the appropriate order.
- Switch to the Billing tab.
- Click Capture payment.
If the capture transaction is successful, the given payment gateway handles the transfer (settlement) of the authorized funds. The order is then marked as paid in Kentico.
Marking orders as paid
The system can automatically mark).
You can also mark orders as paid directly in the administration interface while editing the orders on the Billing tab:
- Open the Orders application.
- Edit () a selected order.
- Switch to the Billing tab.
- Enable the Order is paid property.
- Click Save.
If an order is marked as paid:
- the system sends to specified email addresses notification emails informing about receiving payment
- purchased memberships become activated
- expiration of purchased e-products starts
- store administrators cannot add product items (Items tab)
- store administrators cannot perform the Update action (Items tab)
- store administrators cannot change the shipping option (Shipping tab)
- store administrators cannot change the payment method (Billing tab)
To be able to modify the disabled order properties, you need to disable the Order is paid property for the order.
If orders are marked as paid immediately upon creation, check whether the first order status (the top status in the Store configuration -> Order status) marks orders
To modify items in an existing order:
- Open the Orders application.
- Edit () a selected order.
- Switch to the Items tab.
Here you can add new order items, change the number of ordered items, and remove order items.
Notes
- You cannot edit items for orders that are marked as paid.
- If you manually edit the Unit price of an order item, the system clears all catalog discounts applied to the given item.
- If the Send order changes by email option is enabled, the system sends a notification email informing about the changes made in the order to relevant addresses (typically to the customer and to the merchant).
-.
Was this page helpful? | https://docs.xperience.io/k11/e-commerce-features/managing-your-store/orders | 2021-09-16T19:36:27 | CC-MAIN-2021-39 | 1631780053717.37 | [] | docs.xperience.io |
Configuring Dynamic Routes
When a dynamic routing protocol is enabled, the corresponding routing process monitors route updates and advertises routes. Routing protocols enable an upstream router to use the equal cost multipath (ECMP) technique to load balance traffic to identical virtual servers hosted on two standalone Citrix ADC appliances. Dynamic routing on a Citrix ADC appliance uses three routing tables. In a high-availability setup, the routing tables on the secondary appliance mirror those on the primary.
For command reference guides and unsupported commands on dynamic routing protocol, see Dynamic Routing Protocol Command Reference Guides and Unsupported Commands.
The Citrix ADC supports the following protocols:
- Citrix ADCRouting Tables in Citrix ADC
In a Citrix ADC appliance, the Citrix ADC kernel routing table, the FreeBSD kernel routing table, and the NSM FIB routing table each hold a different set of routes and serve a different purpose. They communicate with each other by using UNIX routing sockets. Route updates are not automatically propagated from one routing table to another. You must configure propagation of route updates for each routing table.
NS Kernel Routing Table
The NS kernel routing table holds subnet routes corresponding to the NSIP and to each SNIP and MIP. Usually, no routes corresponding to VIPs are present in the NS kernel routing table. The exception is a VIP added by using the add ns ip command and configured with a subnet mask other than 255.255.255.255. If there are multiple IP addresses belonging to the same subnet, they are abstracted as a single subnet route. In addition, this table holds a route to the loopback network (127.0.0.0) and any static routes added through the CLI (CLI). The entries in this table are used by the Citrix ADC in packet forwarding. From the CLI, they can be inspected with the show route command.
FreeBSD Routing Table
The sole purpose of the FreeBSD routing table is to facilitate initiation and termination of management traffic (telnet, ssh, etc.). In a Citrix ADC appliance, these applications are tightly coupled to FreeBSD, and it is imperative for FreeBSD to have the necessary information to handle traffic to and from these applications. This routing table contains a route to the NSIP subnet and a default route. In addition, FreeBSD adds routes of type WasCloned(W) when the Citrix ADC establishes connections to hosts on local networks. Because of the highly specialized utility of the entries in this routing table, all other route updates from the NS kernel and NSM FIB routing tables bypass the FreeBSD routing table. Do not modify it with the route command. The FreeBSD routing table can be inspected by using the netstat command from any UNIX shell.
Network Services Module (NSM) FIB
The NSM FIB routing table contains the advertisable routes that are distributed by the dynamic routing protocols to their peers in the network. It may contain:
- Connected routes. IP subnets that are directly reachable from the Citrix ADC. CLI that have the - advertise option enabled. Alternatively, if the Citrix ADC is operating in Static Route Advertisement (SRADV) mode, all static routes configured on Citrix ADC
After failover, the secondary node takes some time to start the protocol, learn the routes, and update its routing table. But this does not affect routing, because the routing table on the secondary node is identical to the routing table on the primary node. This mode of operation is known as non-stop forwarding.
Black Hole Avoidance Mechanism
After failover, the new primary node injects all its VIP routes into the upstream router. However, that router retains the old primary node’s routes for 180 seconds. Because the router is not aware of the failover, it attempts to load balance traffic between the two nodes. During the 180 seconds before the old routes expire, the router sends half the traffic to the old, inactive primary node, which is, in effect, a black hole.
To prevent this, the new primary node, when injecting a route, assigns it a metric that is slightly lower than the one specified by the old primary node.
Interfaces for Configuring Dynamic RoutingInterfaces for Configuring Dynamic Routing
To configure dynamic routing, you can use either the GUI or a command-line interface. The Citrix ADC supports two independent command-line interfaces: the CLI and the Virtual Teletype Shell (VTYSH). The CLI is the appliance’s native shell. VTYSH is exposed by ZebOS. The Citrix ADC routing suite is based on ZebOS, the commercial version of GNU Zebra.
Note:
Citrix recommends that you use VTYSH for all commands except those that can be configured only on the CLI. Use of Citrix ADC appliance: Dynamic routing protocol reference guides and unsupported commands. | https://docs.citrix.com/en-us/citrix-adc/13/networking/ip-routing/configuring-dynamic-routes.html | 2019-06-16T06:28:31 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.citrix.com |
The top level organizer in StoriesOnBoard is a workspace. Story maps are created under a workspace. A workspace can have members who are registered users in StoriesOnBoard. Members are added to a workspace by workspace or story map administrators by invitation.
Workspace members can have the following roles:
Observer
- can view contents of the story maps she has access to
- an observer in a workspace can only be a viewer on a story map
Member
- can create story maps if she is allowed to (it's a separated setting for the member if she is allowed to create a story map in the workspace)
- can access story maps that she has access permissions to
Note: Locked out members are still counted as editors and included in the paid plan until their role hasn't changed to observer.
Admin
- can manage story maps (create, delete, manage permissions)
- can manage workspace members (add, remove, set role, give "can create story map" permission)
Subscription admin
- All that an admin can do and
- can manage subscription plan, billing data, payment details
Story maps can be accessed only by those who have the explicit permissions for the given map. Access is given by the workspace administrator or the story map administrator.
Story map collaborators can have the following roles:
Viewer: can open story map and view all its data
Editor: can open story map and edit all its data
Admin
- can open story map and edit all its data
- can delete the story map
- can manage story map's collaborators (add, remove, set role)
- can manage story map's settings (change name, setup integration)
- can manage releases (add, remove, reorder) | http://docs.storiesonboard.com/articles/683251-workspaces-story-maps-user-roles-members-administrators-viewers | 2019-06-16T05:18:00 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.storiesonboard.com |
Data Center
This is a description of the new improved Jira Data Center support starting from the eazyBI version 4.5.0.
The latest eazyBI version includes improved Jira Data Center support which allows to configure eazyBI load across different cluster nodes.
On this page:
Overview
Previous eazyBI versions (when used in a Jira Data Center cluster) distributed eazyBI load randomly across the cluster nodes. The incoming eazyBI report execution requests were performed on a node where the web request was distributed by the load balancer. The background import jobs were randomly executed on some cluster nodes. Many Jira Data Center customers preferred to specify which Jira Data Center cluster nodes are used for CPU and memory intensive eazyBI tasks to minimize the performance impact on other nodes.
The new improved eazyBI for Jira Data Center allows to specify dedicated nodes of the cluster where complex eazyBI tasks (report execution and background import jobs) are executed:
- User requests are distributed to cluster nodes by the load balancer. Each node is running an eazyBI instance which accepts the eazyBI report requests but then make a proxy request to one of the specified eazyBI dedicated nodes.
- The eazyBI dedicated node performs the report request in a separate eazyBI child process and then returns report results to the original request node.
- Both manually initiated as well as scheduled import jobs are executed only on eazyBI dedicated nodes.
For large Jira Data Center installations it is recommended to have a separate cluster node (or several nodes) which are not used by the load balancer for incoming user requests. This separate node can be used both as an eazyBI dedicated node as well as for other non-user requests purposes. In this case, when eazyBI will execute complex requests and perform long data imports it will not affect the performance of other nodes which handle incoming user requests.
Settings
When eazyBI is installed in Jira Data Center then the eazyBI settings page (where you initially specify the database connection) will show a separate Data Center section:
- Dedicated nodes
Select from the list of available cluster nodes which should be used as eazyBI dedicated nodes.
Typically just one dedicated node is enough but for large Jira Data Center installations you can specify several dedicated nodes.
You can also specify (all nodes) if you would like to use all nodes as dedicated eazyBI nodes (this typically is only used when the list of Data Center nodes are changing dynamically all the time).
- Child process
The eazyBI child process will be started only on specified dedicated nodes. Specify additional JVM options if needed (for example, to increase the child process JVM heap max memory).
Please ensure that other nodes can create HTTP connections to the specified port on the dedicated nodes (ensure that a firewall is not blocking these connections).
Starting from the version 4.7.2, if several dedicated nodes are specified then a random node will be selected for each new incoming user request. A selected child process node is cached for 5 minutes for each user (so that sequential requests from the same user are directed to the same dedicated node).
The list of dedicated nodes are stored in the Jira shared home directory in the
eazybi.toml file (where database connection and child process parameters are stored as well).
If the list of eazyBI dedicated nodes is updated then eazyBI instances on each node reconfigure themselves automatically (start or stop the child process and start to process background jobs).
Log files
The latest eazyBI version stores all log files in the shared Jira home directory (the previous versions stored the log files in local Jira home directories of each node). This change was made to enable that log files can be access from each node and enable to create a zip file with all logs for support and troubleshooting purposes.
Each log file name will contain a suffix with the node name which created this file (see the
NODE placeholder below). The following log files will be created in the
log subdirectory of the shared Jira home directory:
eazybi-web-NODE.log– the main log file with incoming web requests.
eazybi-queues-NODE.log– the log file for background import jobs (on dedicated nodes).
eazybi-child-NODE.log– the log file for the child process (on dedicated nodes).
Troubleshooting
The system administration Troubleshooting page can be used to:
- See the status of the child process on the dedicated node.
Starting from the version 4.7.2, Response from host will show from which child process node the status is returned.
If you will press Restart then child processes on all dedicated nodes will be restarted. Use Restart only when you get errors about the child process unavailability or unexpected slow performance and check if the performance is normal again after the restart.
- See the list of all log files from all nodes, see the content of these log files as well as download a zip file with all log files.
Child process and background job log files are shown only for active dedicated nodes.
The system administration Background jobs page can be used to see the status and statistics of background jobs processing on dedicated nodes. Each queue name has a node prefix indicating on which node this queue is processed. The size of background job queues can be modified in the eazyBI advanced settings. | https://docs.eazybi.com/eazybijira/set-up-and-administer/set-up-and-administer-for-jira-server/data-center | 2019-06-16T04:45:26 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.eazybi.com |
A common question we get is, "Why don't you offer phone support?" We hear you! While we don't offer phone support, we've answered this question in detail on our Customer Community: Expensify's Support Methodology.
You can always email us at [email protected] or send us an in-app message from the Help and Feedback (click your user icon to reveal this when using the web app), or 'Ask us anything' in the Help & Feedback section of the mobile app.
Please note: we respond to all customer questions as soon as we're able and in the order in which they were received!
Also, why not check out our Customer Community forum? Here, we offer peer-to-peer support so that you can look for answers and ask questions 24/7, in the case that our Help Center doesn't answer your question or we aren't online to answer your question right away!
Now, you may ask, “What can I do to get the best and quickest answers when writing into Expensify?” Good question! Here are the best guidelines to follow to help us help you:
- If you're emailing, write to us from the email address associated with your Expensify account so we know where to look!
- Ask clear and specific questions, as well as provide specific examples (e.g., email addresses of affected users, report IDs, etc.) Since this is not a live chat, this will truly help us expedite the research of your issue and allow us to speak to any other teams that may be necessary to troubleshoot with you.
- Finally, understand that we’re all trying to do our very best to help, so please be courteous, respectful, and patient. | https://docs.expensify.com/articles/1046086-contact-us | 2019-06-16T04:38:07 | CC-MAIN-2019-26 | 1560627997731.69 | [array(['https://downloads.intercomcdn.com/i/o/67940462/fd7a3fe7965f422d796addce/image.png',
None], dtype=object) ] | docs.expensify.com |
If you would like to make your site memorable and easy to find with a branded custom domain, then you can map any domain you own directly to your Ghost(Pro) publication.
This is setup by adding a CNAME record within your domain's DNS settings and a few extra steps of setup within Ghost admin.
All Ghost(Pro) sites will automatically be provided with an SSL certificate by default, which simplifies the process and ensures you site is secure.
Step 1: Create a CNAME record
The first step to setting a custom domain is to determine whether you want to use your custom domain with Ghost(Pro) as a subdomain or a root domain.
Using a Subdomain
A subdomain is a subdivision of your domain name. For example, if you want to use Ghost(Pro) at blog.ghost.org, “
blog,” would be a subdomain of ghost.org. The most common subdomain is “
www” e.g..
Using a Root Domain
A root domain, also known as a “naked domain,” is a domain without a subdomain in front, e.g. ghost.org is a root domain. Root domains are assigned in DNS records using the “
@,” symbol.
Step 2: Activate the Custom Domain_0<<
Activate your custom domain - this can take anywhere from a few seconds to a few hours due to the length of time your DNS takes to propagate.
Once activated, you can view your publication by going to the custom domain directly from the browser.
Why can't I change my ghost.io URL?
If you have set up a custom domain for your site, your ghost.io URL will be locked to ensure your publication is always available. To change the ghost.io URL of your site, remove your custom domain from the domain settings page.
Summary
For further information about how to setup your custom domain, use the following guides for these popular domain name providers:
For further information about custom domains with Ghost(Pro), reach out to us at [email protected]. | https://docs.ghost.org/faq/using-custom-domains/ | 2019-06-16T05:01:23 | CC-MAIN-2019-26 | 1560627997731.69 | [array(['https://docs.ghost.io/content/images/2018/10/ghostpro-custom-domain.png',
'Activate Custom Domain'], dtype=object) ] | docs.ghost.org |
Data.
Why Replicate and Synchronize Data?
Cloud-hosted applications and services are often deployed to multiple datacenters. This approach can reduce network latency for globally located users, as well as providing a complete failover capability should one deployment or one datacenter become unavailable for any reason. For best performance, the data that an application uses should be located close to where the application is deployed, so it may be necessary to replicate this data in each datacenter. If the data changes, these modifications must be applied to every copy of the data. This process is called synchronization.
Alternatively, you might choose to build a hybrid application or service solution that stores and retrieves data from an on-premises data store hosted by your own organization. For example, an organization may hold the main data repository on-premises and then replicate only the necessary data to a data store in the cloud. This can help to protect sensitive data that is not required in all applications. It is also a useful approach if updates to the data occur mainly on-premises, such as when maintaining the catalog of an e-commerce retailer or the account details of suppliers and customers.
The key decisions in any distributed system that uses data replication concern where you will store the replicas, and how you will synchronize these replicas.
Replicating and Synchronizing Data
There are several topologies that you can use to implement data replication. The two most common approaches are:
Master-Master Replication, in which the data in each replica is dynamic and can be updated. This topology requires a two-way synchronization mechanism to keep the replicas up to date and to resolve any conflicts that might occur. In a cloud application, to ensure that response times are kept to a minimum and to reduce the impact of network latency, synchronization typically happens periodically. The changes made to a replica are batched up and synchronized with other replicas according to a defined schedule. While this approach reduces the overheads associated with synchronization, it can introduce some inconsistency between replicas before they are synchronized.
Figure 1 - Master-Master replication
Master-Subordinate Replication, in which the data in only one of the replicas is dynamic (the master), and the remaining replicas are read-only. The synchronization requirements for this topology are simpler than that of the Master-Master Replication topology because conflicts are unlikely to occur. However, the same issues of data consistency apply.
Figure 2 - Master-Subordinate replication
Benefits of Replication
The following list provides suggestions for achieving the benefits of replicating data:
- To improve performance and scalability:
- Use Master-Subordinate replication with read-only replicas to improve performance of queries. Locate the replicas close to the applications that access them and use simple one-way synchronization to push updates to them from a master database.
- Use Master-Master replication to improve the scalability of write operations. Applications can write more quickly to a local copy of the data, but there is additional complexity because two-way synchronization (and possible conflict resolution) with other data stores is required.
- Include in each replica any reference data that is relatively static, and is required for queries executed against that replica to avoid the requirement to cross the network to another datacenter. For example, you could include postal code lookup tables (for customer addresses) or product catalog information (for an ecommerce application) in each replica.
- To improve reliability:
- Deploy replicas close to the applications and inside the network boundaries of the applications that use them to avoid delays caused by accessing data across the Internet. Typically, the latency of the Internet and the correspondingly higher chance of connection failure are the major factors in poor reliability. If replicas are read-only to an application, they can be updated by pushing changes from the master database when connectivity is restored. If the local data is updateable, a more complex two-way synchronization will be required to update all data stores that hold this data.
- To improve security:
- In a hybrid application, deploy only non-sensitive data to the cloud and keep the rest on-premises. This approach may also be a regulatory requirement, specified in a service level agreement (SLA), or as a business requirement. Replication and synchronization can take place over the non-sensitive data only.
- To improve availability:
- In a global reach scenario, use Master-Master replication in datacenters in each country or region where the application runs. Each deployment of the application can use data located in the same datacenter as that deployment in order to maximize performance and minimize any data transfer costs. Partitioning the data may make it possible to minimize synchronization requirements.
- Use replication from the master database to replicas in order to provide failover and backup capabilities. By keeping additional copies of the data up to date, perhaps according to a schedule or on demand when any changes are made to the data, it may be possible to switch the application to use the backup data in case of a failure of the original data store.
Simplifying Synchronization Requirements
Some of the ways that you can minimize or avoid the complexity of two-way synchronization include:
- Use a Master-Subordinate Replication topology wherever possible. This topology requires only one-way synchronization from the master to the subordinates. You may be able to send updates from a cloud-hosted application to the master database using a messaging service, or by exposing the master database across the Internet in a secure way.
- Segregate the data into several stores or partitions according to the replication requirements of the data that they hold. Partitions containing data that could be modified anywhere can be replicated by using the Master-Master topology, while data that can be updated only at a single location and is static everywhere else can be replicated by using the Master-Subordinate topology.
- Partition the data so that updates, and the resulting risk of conflicts, can occur only in the minimum number of places. For example, store the data for different retail locations in different databases so that synchronization must occur only between the retail location and the master database, and not across all databases. For more information see the Data Partitioning Guidance.
- Version the data so that no overwriting is required. Instead, when data is changed, a new version is added to the data store alongside the existing versions. Applications can access all the versions of the data and the update history, and can use the appropriate version. Many Command and Query Responsibility Segregation (CQRS) implementations use this approach, often referred to as Event Sourcing, to retain historical information and to accrue changes at specific points in time.
- Use a quorum-based approach where a conflicting update is applied only if the majority of data stores vote to commit the update. If the majority votes to abort the update then all the data stores must abort the update. Quorum-based mechanisms are not easy to implement but may provide a workable solution if the final value of conflicting data items should be based on a consensus rather than being based on the more usual conflict resolution techniques such as “last update wins” or “master database wins.” For more information see Quorum on TechNet.
Considerations for Data Replication and Synchronization
Even if you can simplify your data synchronization requirements, you must still consider how you implement the synchronization mechanism. Consider the following points:
- Decide which type of synchronization you need:
- Master-Master replication involves a two-way synchronization process that is complex because the same data might be updated in more than one location. This can cause conflicts, and the synchronization must be able to resolve or handle this situation. It may be appropriate for one data store to have precedence and overwrite a conflicting change in other data stores. Other approaches are to implement a mechanism that can automatically resolve the conflict based on timings, or just record the changes and notify an administrator to resolve the conflict.
- Master-Subordinate replication is simpler because changes are made in the master database and are copied to all subordinates.
- Custom or programmatic synchronization can be used where the rules for handling conflicts are complex, where transformations are required on the data during synchronization, or where the standard Master-Master and Master-Subordinate approaches are not suitable. Changes are synchronized by reacting to events that indicate a data update, and applying this update to each data store while managing any update conflicts that might occur.
- Decide the frequency of synchronization. Most synchronization frameworks and services perform the synchronization operation on a fixed schedule. If the period between synchronizations is too long, you increase the risk of update conflicts and data in each replica may become stale. If the period is too short you may incur heavy network load, increased data transfer costs, and risk a new synchronization starting before the previous one has finished when there are a lot of updates. It may be possible to propagate changes across replicas as they occur by using background tasks that synchronize the data.
- Decide which data store will hold the master copy of the data where this is relevant, and the order in which updates are synchronized when there are more than two replicas of the data. Also consider how you will handle the situation where the master database is unavailable. It may be necessary to promote one replica to the master role in this case. For more information see the Leader Election pattern.
- Decide what data in each store you will synchronize. The replicas may contain only a subset of the data. This could be a subset of columns to hide sensitive or non-required data, a subset of the rows where the data is partitioned so that only appropriate rows are replicated, or it could be a combination of both of these approaches.
- Beware of creating a synchronization loop in a system that implements the Master-Master replication topology. Synchronization loops can arise if one synchronization action updates a data store and this update prompts another synchronization that tries to apply the update back to the original data store. Synchronization loops can also occur when there are more than two data stores, where a synchronization update travels from one data store to another and then back to the original one.
- Consider if using a cache is worthwhile to protect against transient or short-lived connectivity issues.
- Ensure that the transport mechanism used by the synchronization process protects the data as it travels over the network. Typically this means using encrypted connections, SSL, or TLS. In extreme cases you may need to encrypt the data itself, but this is likely to require implementation of a custom synchronization solution.
- Consider how you will deal with failures during replication. This may require rerouting requests for data to another replica if the first cannot be accessed, or even rerouting requests to another deployment of the application.
- Make sure applications that use replicas of the data can handle situations that may arise when a replica is not fully consistent with the master copy of the data. For example, if a website accepts an order for goods marked as available but a subsequent update shows that no stock is available, the application must manage this—perhaps by sending an email to the customer and/or by placing the item on back order.
- Consider the cost and time implications of the chosen approach. For example, updating all or part of a data store though replication is likely to take longer and involve more bandwidth than updating a single entity.
Note
For more information about patterns for synchronizing data see Appendix A - Replicating, Distributing, and Synchronizing Data in the p&p guide Building Hybrid Applications in the Cloud on Microsoft Azure. The topic Data Movement Patterns on MSDN contains definitions of the common patterns for replicating and synchronizing data.
Implementing Synchronization
Determining how to implement data synchronization is dependent to a great extent on the nature of the data and the type of the data stores. Some examples are:
Use a ready-built synchronization service or framework. In Azure hosted and hybrid applications you might choose to use:
The Azure SQL Data Sync service. This service can be used to synchronize on-premises and cloud-hosted SQL Server instances, and Azure SQL Database instances. Although there are a few minor limitations, it is a powerful service that provides options to select subsets of the data and specify the synchronization intervals. It can also perform one-way replication if required.
Note
For more information about using SQL Data Sync see SQL Data Sync on MSDN and Deploying the Orders Application and Data in the Cloud in the p&p guide Building Hybrid Applications in the Cloud on Microsoft Azure. Note that, at the time this guide was written, the SQL Data Sync service was a preview release and provided no SLA.
The Microsoft Sync Framework. This is a more flexible mechanism that enables you to implement custom synchronization plans, and capture events so that you can specify the actions to take when, for example, an update conflict occurs. It provides a solution that enables collaboration and offline access for applications, services, and devices with support for any data type, any data store, any transfer protocol, and any network topology.
Note
For more information see Microsoft Sync Framework Developer Center on MSDN.
Use a synchronization technology built into the data store itself. Some examples are:
- Azure storage geo-replication. By default in Azure data is automatically replicated in three datacenters (unless you turn it off) to protect against failures in one datacenter. This service can provide a read-only replica of the data.
- SQL Server database replication. Synchronization using the built-in features of SQL Server Replication Service can be achieved between on-premises installations of SQL Server and deployments of SQL Server in Azure Virtual Machines in the cloud, and between multiple deployments of SQL Server in Azure Virtual Machines.
- Implement a custom synchronization mechanism. For example, use a messaging technology to pass updates between deployments of the application, and include code in each application to apply these updates intelligently to the local data store and handle any update conflicts. Consider the following when building a custom mechanism:
- Ready-built synchronization services may have a minimum interval for synchronization, whereas a custom implementation could offer near-immediate synchronization.
- Ready-built synchronization services may not allow you to specify the order in which data stores are synchronized. A custom implementation may allow you to perform updates in a specific order between several data stores, or perform complex transformation or other operations on the data that are not supported in ready-built frameworks and services.
- When you design a custom implementation you should consider two separate aspects: how to communicate updates between separate locations, and how to apply updates to the data stores. Typically, you will need to create an application or component that runs in each location where updates will be applied to local data stores. This application or component will accept instructions that it uses to update the local data store, and then pass the updates to other data stores that contain copies of the data. Within the application or component you can implement logic to manage conflicting updates. However, by passing updates between data store immediately, rather than on a fixed schedule as is the case with most ready-built synchronization services, you minimize the chances of conflicts arising.
Related Patterns and Guidance
The following patterns and guidance may also be relevant to your scenario when distributing and synchronizing data across different locations:
- Caching Guidance. This guidance describes how caching can be used to improve the performance and scalability of a distributed application running in the cloud.
- Data Consistency Primer. This primer summarizes the issues surrounding consistency over distributed data, and provides guidance for handling these concerns.
- Data Partitioning Guidance. This guidance describes how to partition data in the cloud to improve scalability, reduce contention, and optimize performance.
More Information
- The guide Data Access for Highly-Scalable Solutions: Using SQL, NoSQL, and Polyglot Persistence on MSDN.
- Appendix A - Replicating, Distributing, and Synchronizing Data from the guide Building Hybrid Applications in the Cloud on Microsoft Azure on MSDN.
- The topic Data Movement Patterns on MSDN.
- The topic SQL Data Sync on MSDN.
- Deploying the Orders Application and Data in the Cloud from the guide Building Hybrid Applications in the Cloud on Microsoft Azure.
- The Microsoft Sync Framework Developer Center on MSDN.
Next Topic | Previous Topic | Home | Community | https://docs.microsoft.com/en-us/previous-versions/msp-n-p/dn589787(v=pandp.10) | 2019-06-16T05:22:17 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.microsoft.com |
Message-ID: <1651909891.36814.1560661008541.JavaMail.confluence@docs1.parasoft.com> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_36813_356821761.1560661008541" ------=_Part_36813_356821761.1560661008541 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location:
Parasoft Virtualize simulates the behavior of systems that are still evo= lving, hard to access, or difficult to configure for development or testing= .
Test environment access constraints have become a significant barrier to= delivering quality software efficiently
Parasoft Virtualize=E2=80=99s service virtualization provides access to = the dependencies that are beyond your control, still evolving, or too compl= ex to configure in a virtual test lab. For example, this might include thir= d-party services (credit check, payment processing, etc.), mainframes and S= AP or other ERPs. With service virtualization, you don=E2=80=99t have to vi= rtualize an entire system when you need to access only a fraction of its av= ailable functionality. As you naturally exercise the application under test= , Parasoft captures interactions with dependencies and converts this behavi= or into flexible =E2=80=9Cvirtual assets=E2=80=9D with easily-configurable = response parameters (e.g., performance, test data and response logic). Soph= isticated virtual assets can be created and provisioned for role-based acce= ss.= =20
Prevents security vulnerabilities through penetration testing and execut= ion of complex authentication, encryption, and access control test scenario= s.= .=20
Automates the testing of multiple messaging and transport protocols=E2= =80=93 including HTTP, SOAP/REST, PoX, WCF, JMS, TIBCO, MQ, EJB, JDBC, RMI,= and so on.=20
During test execution, you can visualize and trace the intra-process eve= nts triggered by tests, facilitating rapid diagnosis of problems directly f= rom the test environment. You can also continuously validate whether critic= al events continue to satisfy functional expectations as the system evolves= .=20 | https://docs.parasoft.com/exportword?pageId=33858975 | 2019-06-16T04:56:48 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.parasoft.com |
Feature: #70056 - Added PHP library “Guzzle” for HTTP Requests within TYPO3¶
See Issue #70056
Description¶
The PHP library
Guzzle has been added via composer dependency to work as a feature rich solution for creating HTTP requests
based on the PSR-7 interfaces already used within TYPO3.
Guzzle auto-detects available underlying adapters available on the system, like cURL or stream wrappers and chooses the best solution for the system.
A TYPO3-specific PHP class called
TYPO3\CMS\Core\Http\RequestFactory has been added as a simplified wrapper to access
Guzzle clients.
All options available under
$TYPO3_CONF_VARS[HTTP] are automatically applied to the Guzzle clients when using the
RequestFactory class. The options are a subset to the available options on Guzzle ()
but can further be extended.
Existing
$TYPO3_CONF_VARS[HTTP] options have been removed and/or migrated to the new Guzzle-compliant options.
A full documentation for Guzzle can be found at.
Although Guzzle can handle Promises/A+ and asynchronous requests, it currently acts as a drop-in replacement for the
previous mixed options and implementations within
GeneralUtility::getUrl() and a PSR-7-based API for HTTP
requests.. | https://docs.typo3.org/c/typo3/cms-core/master/en-us/Changelog/8.1/Feature-70056-GuzzleForHttpRequests.html | 2019-06-16T05:52:38 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.typo3.org |
reCAPTCHA is an advanced form of CAPTCHA, which is a technology used to differentiate between robots and human users. CAPTCHA is an acronym for Completely Automated Public Turing test to tell Computers and Humans Apart.
Follow the steps below to obtain Google reCAPTCHA public and private keys:
First, You’ll need to sign in with a Google account if you’re not already logged in.
Navigate to Google reCAPTCHA website to get started.
Click on the Admin Console button located at the top right corner of the screen.
To get new site key and secret key, click on
+ blue button top right side.
Register a new site page where you need to provide some basic information to register your site.
Label — Type a suitable label which co-related your site name and for later remembrance.
reCAPTCHA Type — Select reCAPTCHA v2 and then choose I’m not a robot checkbox.
Domain — The website URL, where you will use these keys like.
Owners — You don’t need to change this, it’s set by default accordingly to logged in account. If you want the report on more email addresses then you can add here multiple email accounts.
Alerts — You can enable it to get email alert to owners, if there is any problem on your website like reCAPTCHA misconfiguration or increase suspicious traffic.
Once the form is complete, click on the Submit button.
A success message, along with the site and secret keys will be displayed once the form submitted successfully.
Copy the keys shown in the screen/fields to enter it in your WordPress admin panel so your site can access the Google reCAPTCHA APIs.
Login to the your WordPress Dashboard.
Click the Conj PowerPack menu.
From the sidebar on the left, select the 3rd Party API tab.
Locate the reCAPTCHA Public Key and reCAPTCHA Private Key text-fields and paste your newly generated keys key in.
Click the Save Changes button. | https://docs.conj.ws/3rd-party-api/creating-google-recaptcha-keys | 2019-06-16T05:28:52 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.conj.ws |
Add validation to an ASP.NET Core MVC app
In this section:
- Validation logic is added to the
Moviemodel.
- You ensure that the validation rules are enforced any time a user creates or edits a movie.
Keeping things DRY
One of the design tenets of MVC is DRY ("Don't Repeat Yourself"). ASP.NET Core MVC encourages you to specify functionality or behavior only once, and then have it be reflected everywhere in an app. This reduces the amount of code you need to write and makes the code you do write less error prone, easier to test, and easier to maintain.
The validation support provided by MVC and Entity Framework Core Code First is a good example of the DRY principle in action. You can declaratively specify validation rules in one place (in the model class) and the rules are enforced everywhere in the app.
Add validation rules to the movie model
Open the Movie.cs file. The DataAnnotations namespace provides a set of built-in validation attributes that are applied declaratively to a class or property. DataAnnotations also contains formatting attributes like
DataType that help with formatting and don't provide any validation.
Update the
Movie class to take advantage of the built-in
Required,
StringLength,
RegularExpression, and
Range validation attributes.
public class Movie { public int Id { get; set; } [StringLength(60, MinimumLength = 3)] [Required] public string Title { get; set; } [Display(Name = "Release Date")] [DataType(DataType.Date)] public DateTime ReleaseDate { get; set; } [Range(1, 100)] [DataType(DataType.Currency)] [Column(TypeName = "decimal(18, 2)")] public decimal Price { get; set; } [RegularExpression(@"^[A-Z]+[a-zA-Z""'\s-]*$")] [Required] [StringLength(30)] public string Genre { get; set; } [RegularExpression(@"^[A-Z]+[a-zA-Z0-9""'\s-]*$")] [StringLength(5)] [Required] public string Rating { get; set; } }
The validation attributes specify behavior that you want to enforce on the model properties they're applied to:
The
Requiredand
MinimumLengthattributes indicate that a property must have a value; but nothing prevents a user from entering white space to satisfy this validation.
The
RegularExpressionattribute is used to limit what characters can be input. In the preceding code, "Genre":
- Must only use letters.
- The first letter is required to be uppercase. White space, numbers, and special characters are not allowed.
The
RegularExpression"Rating":
- Requires that the first character be an uppercase letter.
- Allows special characters and numbers in subsequent spaces. "PG-13" is valid for a rating, but fails for a "Genre".
The
Rangeattribute constrains a value to within a specified range.
The
StringLengthattribute lets you set the maximum length of a string property, and optionally its minimum length.
Value types (such as
decimal,
int,
float,
DateTime) are inherently required and don't need the
[Required]attribute.
Having validation rules automatically enforced by ASP.NET Core helps make your app more robust. It also ensures that you can't forget to validate something and inadvertently let bad data into the database.
Validation Error UI
Run the app and navigate to the Movies controller.
Tap the Create New link to add a new movie. Fill out the form with some invalid values. As soon as jQuery client side validation detects the error, it displays an error message.
Note
You may not be able to enter decimal commas in decimal fields. To support jQuery validation for non-English locales that use a comma (",") for a decimal point, and non US-English date formats, you must take steps to globalize your app. This GitHub issue 4076 for instructions on adding decimal comma.
Notice how the form has automatically rendered an appropriate validation error message in each field containing an invalid value. The errors are enforced both client-side (using JavaScript and jQuery) and server-side (in case a user has JavaScript disabled).
A significant benefit is that you didn't need to change a single line of code in the
MoviesController class or in the Create.cshtml view in order to enable this validation UI. The controller and views you created earlier in this tutorial automatically picked up the validation rules that you specified by using validation attributes on the properties of the
Movie model class. Test validation using the
Edit action method, and the same validation is applied.
The form data isn't sent to the server until there are no client side validation errors. You can verify this by putting a break point in the
HTTP Post method, by using the Fiddler tool , or the F12 Developer tools.
How validation works
You might wonder how the validation UI was generated without any updates to the code in the controller or views. The following code shows the two
Create methods.
// GET: Movies/Create public IActionResult Create() { return View(); } // POST: Movies/Create [HttpPost] [ValidateAntiForgeryToken] public async Task<IActionResult> Create( [Bind("ID,Title,ReleaseDate,Genre,Price, Rating")] Movie movie) { if (ModelState.IsValid) { _context.Add(movie); await _context.SaveChangesAsync(); return RedirectToAction("Index"); } return View(movie); }
The first (HTTP GET)
Create action method displays the initial Create form. The second (
[HttpPost]) version handles the form post. The second
Create method (The
[HttpPost] version) calls
ModelState.IsValid to check whether the movie has any validation errors. Calling this method evaluates any validation attributes that have been applied to the object. If the object has validation errors, the
Create method re-displays the form. If there are no errors, the method saves the new movie in the database. In our movie example, the form isn't posted to the server when there are validation errors detected on the client side; the second
Create method is never called when there are client side validation errors. If you disable JavaScript in your browser, client validation is disabled and you can test the HTTP POST
Create method
ModelState.IsValid detecting any validation errors.
You can set a break point in the
[HttpPost] Create method and verify the method is never called, client side validation won't submit the form data when validation errors are detected. If you disable JavaScript in your browser, then submit the form with errors, the break point will be hit. You still get full validation without JavaScript.
The following image shows how to disable JavaScript in the FireFox browser.
The following image shows how to disable JavaScript in the Chrome browser.
After you disable JavaScript, post invalid data and step through the debugger.
The portion of the Create.cshtml view template is shown in the following markup:
<h4>Movie</h4> <hr /> <div class="row"> <div class="col-md-4"> <form asp- <div asp-</div> <div class="form-group"> <label asp-</label> <input asp- <span asp-</span> </div> @*Markup removed for brevity.*@
The preceding markup is used by the action methods to display the initial form and to redisplay it in the event of an error.
The Input Tag Helper uses the DataAnnotations attributes and produces HTML attributes needed for jQuery Validation on the client side. The Validation Tag Helper displays validation errors. See Validation for more information.
What's really nice about this approach is that neither the controller nor the
Create view template knows anything about the actual validation rules being enforced or about the specific error messages displayed. The validation rules and the error strings are specified only in the
Movie class. These same validation rules are automatically applied to the
Edit view and any other views templates you might create that edit your model.
When you need to change validation logic, you can do so in exactly one place by adding validation attributes to the model (in this example, the
Movie class). You won't have to worry about different parts of the application being inconsistent with how the rules are enforced — all validation logic will be defined in one place and used everywhere. This keeps the code very clean, and makes it easy to maintain and evolve. And it means that you'll be fully honoring the DRY principle.
Using DataType Attributes
Open the Movie.cs file and examine the
Movie class. The
System.ComponentModel.DataAnnotations namespace provides formatting attributes in addition to the built-in set of validation attributes. We've already applied a
DataType enumeration value to the release date and to the price fields. The following code shows the
ReleaseDate and
Price properties with the appropriate
DataType attribute.
[Display(Name = "Release Date")] [DataType(DataType.Date)] public DateTime ReleaseDate { get; set; } [Range(1, 100)] [DataType(DataType.Currency)] public decimal Price { get; set; }
The
DataType attributes only provide hints for the view engine to format the data (and supplies elements/attributes such as
<a> for URL's and
<a href="mailto:EmailAddress.com"> for email. You can use the
RegularExpression attribute to validate the format of the data. The
DataType attribute is used to specify a data type that's more specific than the database intrinsic type, they're not validation attributes. In this case we only want to keep track of the date, not emit HTML 5
data- (pronounced data dash) attributes that HTML 5 browsers can understand. The
DataType attributes do not provide any validation.
DataType.Date doesn't specify the format of the date that ReleaseDate { get; set; }
The
ApplyFormatInEditMode setting specifies that the formatting should also be applied when the value is displayed in a text box for editing. (You might not want that for some fields — for example, for currency values, you probably don't want the currency symbol in the text box for editing.)
You can use the
DisplayFormat attribute by itself, but it's generally a good idea to use the
DataType attribute.attribute can enable MVC to choose the right field template to render the data (the
DisplayFormatif used by itself uses the string template).
Note
jQuery validation doesn; } [StringLength(60, MinimumLength = 3)] public string Title { get; set; } [Display(Name = "Release Date"), DataType(DataType.Date)] public DateTime ReleaseDate { get; set; } [RegularExpression(@"^[A-Z]+[a-zA-Z""'\s-]*$"), Required, StringLength(30)] public string Genre { get; set; } [Range(1, 100), DataType(DataType.Currency)] [Column(TypeName = "decimal(18, 2)")] public decimal Price { get; set; } [RegularExpression(@"^[A-Z]+[a-zA-Z0-9""'\s-]*$"), StringLength(5)] public string Rating { get; set; } }
In the next part of the series, we review the app and make some improvements to the automatically generated
Details and
Delete methods.
Additional resources
Feedback
Send feedback about: | https://docs.microsoft.com/en-us/aspnet/core/tutorials/first-mvc-app/validation?view=aspnetcore-2.2 | 2019-06-16T05:23:38 | CC-MAIN-2019-26 | 1560627997731.69 | [array(['validation/_static/val.png?view=aspnetcore-2.2',
'Movie view form with multiple jQuery client side validation errors'],
dtype=object)
array(['validation/_static/ff.png?view=aspnetcore-2.2',
'Firefox: On the Content tab of Options, uncheck the Enable Javascript check box.'],
dtype=object)
array(['validation/_static/chrome.png?view=aspnetcore-2.2',
'Google Chrome: In the Javascript section of Content settings, select Do not allow any site to run JavaScript.'],
dtype=object)
array(['validation/_static/ms.png?view=aspnetcore-2.2',
'While debugging on a post of invalid data, Intellisense on ModelState.IsValid shows the value is false.'],
dtype=object) ] | docs.microsoft.com |
IWbemClassObject::SpawnInstance method
Use the IWbemClassObject::SpawnInstance method to create a new instance of a class. The current object must be a class definition obtained from Windows Management using IWbemServices::GetObject, IWbemServices::CreateClassEnum, or IWbemServices::CreateClassEnumAsync Then, use this class definition to create new instances.
A call to IWbemServices::PutInstance is required to actually write the instance to Windows Management. If you intend to discard the object before calling IWbemServices::PutInstance, simply make a call to IWbemClassObject::Release.
Note that spawning an instance from an instance is supported but the returned instance will be empty.
Syntax
HRESULT SpawnInstance( long lFlags, IWbemClassObject **ppNewInstance );
Parameters
lFlags
Reserved. This parameter must be 0.
ppNewInstance
Cannot be NULL. It receives a new instance of the class. The caller must invoke IWbemClassObject::Release when the pointer is no longer required. On error, a new object is not returned and the pointer is left unmodified.
Return Value
This method returns an HRESULT indicating the status of the method call. The following list lists the value contained within an HRESULT. For general HRESULT values, see System Error Codes.
Requirements
See Also
IWbemServices::PutInstance | https://docs.microsoft.com/en-us/windows/desktop/api/WbemCli/nf-wbemcli-iwbemclassobject-spawninstance | 2019-06-16T04:48:51 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.microsoft.com |
SetThreadDpiHostingBehavior function
Sets the thread's DPI_HOSTING_BEHAVIOR. This behavior allows windows created in the thread to host child windows with a different DPI_AWARENESS_CONTEXT.
Syntax
DPI_HOSTING_BEHAVIOR SetThreadDpiHostingBehavior( DPI_HOSTING_BEHAVIOR value );
Parameters
value
The new DPI_HOSTING_BEHAVIOR value for the current thread.
Return Value
The previous DPI_HOSTING_BEHAVIOR for the thread. If the hosting behavior passed in is invalid, the thread will not be updated and the return value will be DPI_HOSTING_BEHAVIOR_INVALID. You can use this value to restore the old DPI_HOSTING_BEHAVIOR after overriding it with a predefined value.
Remarks
DPI_HOSTING_BEHAVIOR enables a mixed content hosting behavior, which allows parent windows created in the thread to host child windows with a different DPI_AWARENESS_CONTEXT value. This property only effects new windows created within this thread while the mixed hosting behavior is active. A parent window with this hosting behavior is able to host child windows with different DPI_AWARENESS_CONTEXT values, regardless of whether the child windows have mixed hosting behavior enabled.
This hosting behavior does not allow for windows with per-monitor DPI_AWARENESS_CONTEXT values to be hosted until windows with DPI_AWARENESS_CONTEXT values of system or unaware.
To avoid unexpected outcomes, a thread's DPI_HOSTING_BEHAVIOR should be changed to support mixed hosting behaviors only when creating a new window which needs to support those behaviors. Once that window is created, the hosting behavior should be switched back to its default value.
This API is used to change the thread's DPI_HOSTING_BEHAVIOR from its default value. This is only necessary if your app needs to host child windows from plugins and third-party components that do not support per-monitor-aware context. This is most likely to occur if you are updating complex applications to support per-monitor DPI_AWARENESS_CONTEXT behaviors.
Enabling mixed hosting behavior will not automatically adjust the thread's DPI_AWARENESS_CONTEXT to be compatible with legacy content. The thread's awareness context must still be manually changed before new windows are created to host such content.
Requirements
See Also
GetThreadDpiHostingBehavior
GetWindowDpiHostingBehavior | https://docs.microsoft.com/fr-fr/windows/desktop/api/winuser/nf-winuser-setthreaddpihostingbehavior | 2019-06-16T05:31:33 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.microsoft.com |
Rest PKI - Setup on Windows Server
The minimum requirements for installing a on premises instance of Rest PKI on Windows Server are:
- OS: Windows Server 2008 R2 or later
- Database:
- SQL Server 2008 R2 or later or
- PostgreSQL 9.3 or later
To start the installation procedure, you'll need:
- Rest PKI binaries package: restpki-1.18.3.zip
- Binary license for the Lacuna PKI SDK (file LacunaPkiLicense.txt)
Note
If you don't have a license yet, request a trial license.
Once you have both files, follow one of the articles below: | http://docs.lacunasoftware.com/en-us/articles/rest-pki/on-premises/windows-setup/index.html | 2019-06-16T05:37:06 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.lacunasoftware.com |
Timeline …) …
Note: See TracTimeline for information about the timeline view. | http://docs.openmoko.org/trac/timeline?from=2007-12-21&daysback=30&authors= | 2019-06-15T23:01:33 | CC-MAIN-2019-26 | 1560627997501.61 | [] | docs.openmoko.org |
Random Number Node
The Random Number Node generates a random integer within a range of your choosing.
Configuration
The node takes three parameters, all of which are required.
Minimum Value and Maximum Value are each a number or a string template resolving to a numerical value on your payload. The randomly generated number is inclusive of the defined minimum and maximum values; for example, if
0 is set as the minimum value, then
0 is a possible output from the node.
The Result Path is a payload path stipulating where on the payload the resulting number should be stored.
Node Failure Cases
If the node fails to generate a random number, the result stored on the payload at the result path will be
null. There are a few of cases where this could occur, most likely when using a string template to reference a value within the payload:
- The minimum or maximum value is not a number. The node attempts to type the value to a number (for example, a string of
"1"will type to the number
1) but if that fails, the number generator will return
null.
- The minimum value is greater than the maximum value. It is, of course, impossible to find a random number in that case.
- There are no integers between the minimum and maximum values. For example, if a minimum value of
1.3and a maximum value of
1.9are passed, the node will return
nullbecause there is no possible random integer between the two values. | http://docs.prerelease.losant.com/workflows/logic/random-number/ | 2019-06-15T23:50:01 | CC-MAIN-2019-26 | 1560627997501.61 | [array(['/images/workflows/logic/random-number.png',
'Random Number Node Random Number Node'], dtype=object)] | docs.prerelease.losant.com |
Amazon DocumentDB: How It Works
Amazon DocumentDB (with MongoDB compatibility) is a fully managed, MongoDB-compatible database service. With Amazon DocumentDB, you can run the same application code and use the same drivers and tools that you use with MongoDB. Amazon DocumentDB is compatible with MongoDB 3.6.
When you use Amazon DocumentDB, you begin by creating a cluster. A cluster consists of zero or more database instances and a cluster volume that manages the data for those instances. An Amazon DocumentDB cluster volume is a virtual database storage volume that spans multiple Availability Zones. Each Availability Zone has a copy of the cluster data.
An Amazon DocumentDB cluster consists of two components:
Cluster volume—Uses a cloud-native storage service to replicate data six ways across three Availability Zones, providing highly durable and available storage. An Amazon DocumentDB cluster has exactly one cluster volume, which can store up to 64 TB of data.
Instances—Provide the processing power for the database, writing data to, and reading data from, the cluster storage volume. An Amazon DocumentDB cluster can have 0–16 instances.
Instances serve one of two roles:
Primary instance—Supports read and write operations, and performs all the data modifications to the cluster volume. Each Amazon DocumentDB cluster has one primary instance.
Replica instance—Supports only read operations. An Amazon DocumentDB cluster can have up to 15 replicas in addition to the primary instance. Having multiple replicas enables you to distribute read workloads. In addition, by placing replicas in separate Availability Zones, you also increase your cluster availability.
The following diagram illustrates the relationship between the cluster volume, the primary instance, and replicas in an Amazon DocumentDB cluster:
Cluster instances do not need to be of the same instance class, and they can be provisioned and terminated as desired. This architecture lets you scale your cluster’s compute capacity independently of its storage.
When your application writes data to the primary instance, the primary executes a durable write to the cluster volume. It then replicates the state of that write (not the data) to each active replica. Amazon DocumentDB replicas do not participate in processing writes, and thus Amazon DocumentDB replicas are advantageous for read scaling. Reads from Amazon DocumentDB replicas are eventually consistent with minimal replica lag, usually less than 100 milliseconds after the primary instance writes the data. Reads from the replicas are guaranteed to be read in the order in which they were written to the primary. Replica lag varies depending on the rate of data change, and periods of high write activity might increase the replica lag. For more information, see the ReplicationLag metrics at Viewing CloudWatch Data.
Amazon DocumentDB Endpoints
Amazon DocumentDB provides multiple connection options to serve a wide range of use cases. To connect to an instance in an Amazon DocumentDB cluster, you specify the instance's endpoint. An endpoint is a host address and a port number, separated by a colon. The following endpoints are available from an Amazon DocumentDB cluster.
Cluster Endpoint
The cluster endpoint connects to your cluster’s current primary instance. The cluster endpoint can be used for read and write operations. An Amazon DocumentDB cluster has exactly one cluster endpoint.
The cluster endpoint provides failover support for read and write connections to the cluster. If your cluster’s current primary instance fails, and your cluster has at least one active read replica, the cluster endpoint automatically redirects connection requests to a new primary instance.
The following is an example Amazon DocumentDB cluster endpoint:
sample-cluster.cluster-123456789012.us-east-1.docdb.amazonaws.com:27017
The following is an example connection string using this cluster endpoint:
mongodb://username:password@sample-cluster.cluster-123456789012.us-east-1.docdb.amazonaws.com:27017
For information about finding a cluster's endpoints, see Finding a Cluster's Endpoints.
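For illustration, the following is a minimal PyMongo sketch that connects through the cluster endpoint above; the endpoint, credentials, database name, and TLS options are placeholders, and the exact TLS parameters depend on how your cluster is configured (see Encrypting Connections Using TLS):

```python
from pymongo import MongoClient

# Placeholder endpoint, credentials, and CA bundle path.
client = MongoClient(
    "mongodb://username:password"
    "@sample-cluster.cluster-123456789012.us-east-1.docdb.amazonaws.com:27017/"
    "?tls=true&tlsCAFile=rds-combined-ca-bundle.pem&retryWrites=false"
)

db = client["sample-database"]
db.example.insert_one({"hello": "world"})       # writes go to the primary
print(db.example.find_one({"hello": "world"}))  # read back through the same endpoint
```

Because this connection targets the cluster endpoint, a failover transparently redirects it to the newly promoted primary.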
Reader Endpoint
The reader endpoint load balances read-only connections across all available replicas in your cluster. Attempting to perform a write operation over a connection to the reader endpoint results in an error. An Amazon DocumentDB cluster has exactly one reader endpoint.
If the cluster contains only one (primary) instance, the reader endpoint connects to the primary instance. When you add a replica instance to your Amazon DocumentDB cluster, the reader endpoint opens read-only connections to the new replica after it is active.
The following is an example reader endpoint for an Amazon DocumentDB cluster:
sample-cluster.cluster-ro-123456789012.us-east-1.docdb.amazonaws.com:27017
The following is an example connection string using a reader endpoint:
mongodb://username:password@sample-cluster.cluster-ro-123456789012.us-east-1.docdb.amazonaws.com:27017
The reader endpoint load balances read-only connections, not read requests. If some reader endpoint connections are more heavily used than others, your read requests might not be equally balanced among cluster instances.
For information about finding a cluster's endpoints, see Finding a Cluster's Endpoints.
Instance Endpoint
An instance endpoint connects to a specific instance within your cluster. The instance endpoint for the current primary instance can be used for read and write operations. However, attempting to perform write operations to an instance endpoint for a read replica results in an error. An Amazon DocumentDB cluster has one instance endpoint per active instance.
An instance endpoint provides direct control over connections to a specific instance for scenarios in which the cluster endpoint or reader endpoint might not be appropriate. An example use case is provisioning for a periodic read-only analytics workload. You can provision a larger-than-normal replica instance, connect directly to the new larger instance with its instance endpoint, run the analytics queries, and then terminate the instance. Using the instance endpoint keeps the analytics traffic from impacting other cluster instances.
The following is an example instance endpoint for a single instance in an Amazon DocumentDB cluster:
sample-instance.123456789012.us-east-1.docdb.amazonaws.com:27017
The following is an example connection string using this instance endpoint:
mongodb://username:password@sample-instance.123456789012.us-east-1.docdb.amazonaws.com:27017
Note
An instance’s role as primary or replica can change due to a failover event. Your applications should never assume that a particular instance endpoint is the primary. For more advanced control instance failover priority, see Understanding Amazon DocumentDB Cluster Fault Tolerance.
For information about finding a cluster's endpoints, see Finding an Instance's Endpoint.
Replica Set Mode
You can connect to your Amazon DocumentDB cluster endpoint in replica set mode by specifying the replica set name rs0. Connecting in replica set mode provides the ability to specify the Read Concern, Write Concern, and Read Preference options. For more information, see Read Consistency.
The following is an example connection string connecting in replica set mode:
mongodb://username:password@sample-cluster.cluster-123456789012.us-east-1.docdb.amazonaws.com:27017/?replicaSet=rs0
When you connect in replica set mode, your Amazon DocumentDB cluster appears to your drivers and clients as a replica set. Instances added and removed from your Amazon DocumentDB cluster are reflected automatically in the replica set configuration.
Each Amazon DocumentDB cluster consists of a single replica set with the default name rs0. The replica set name cannot be modified.
Connecting to the cluster endpoint in replica set mode is the recommended method for general use.
Note
All instances in an Amazon DocumentDB cluster listen on the same TCP port for connections.
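As a sketch of this recommended approach, the same PyMongo connection can be made in replica set mode; the host and credentials are placeholders, readPreference follows the options described later in this topic, and retryWrites=false reflects the MongoDB 3.6 feature set (confirm the correct options for your driver version):

```python
from pymongo import MongoClient

# Cluster endpoint in replica set mode with a read preference.
client = MongoClient(
    "mongodb://username:password"
    "@sample-cluster.cluster-123456789012.us-east-1.docdb.amazonaws.com:27017/"
    "?replicaSet=rs0&readPreference=secondaryPreferred&retryWrites=false"
)

print(client.primary)      # (host, port) of the current primary instance
print(client.secondaries)  # addresses of the active replicas
```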
TLS Support
For more details on connecting to Amazon DocumentDB using Transport Layer Security (TLS), see Encrypting Connections Using TLS.
Amazon DocumentDB Storage
Amazon DocumentDB stores its data in a cluster volume, which is a single, virtual volume that uses solid state drives (SSDs). A cluster volume consists of copies of your data, which is replicated automatically across multiple Availability Zones in a single AWS Region. This replication helps ensure that your data is highly durable, with less possibility of data loss. It also helps ensure that your cluster is more available during a failover because copies of your data already exist in other Availability Zones. These copies can continue to serve data requests to the instances in your Amazon DocumentDB cluster.
Amazon DocumentDB automatically increases the size of a cluster volume as the amount of data increases. An Amazon DocumentDB cluster volume can grow to a maximum size of 64 TB. Although an Amazon DocumentDB cluster volume can grow up to 64 TB, you are charged only for the space that you use in an Amazon DocumentDB cluster volume.
Amazon DocumentDB Replication
In an Amazon DocumentDB cluster, each replica instance exposes an independent endpoint. These replica endpoints provide read-only access to the data in the cluster volume. They enable you to scale the read workload for your data over multiple replicated instances. They also help improve the performance of data reads and increase the availability of the data in your Amazon DocumentDB cluster. Amazon DocumentDB replicas are also failover targets and are quickly promoted if the primary instance for your Amazon DocumentDB cluster fails.
Amazon DocumentDB Reliability
Amazon DocumentDB is designed to be reliable, durable, and fault tolerant. (To improve availability, you should configure your Amazon DocumentDB cluster so that it has multiple replica instances in different Availability Zones.) Amazon DocumentDB includes several automatic features that make it a reliable database solution.
Storage Auto-Repair
Amazon DocumentDB maintains multiple copies of your data in three Availability Zones, greatly reducing the chance of losing data due to a storage failure. Amazon DocumentDB automatically detects failures in the cluster volume. When a segment of a cluster volume fails, Amazon DocumentDB immediately repairs the segment. It uses the data from the other volumes that make up the cluster volume to help ensure that the data in the repaired segment is current. As a result, Amazon DocumentDB avoids data loss and reduces the need to perform a point-in-time restore to recover from an instance failure.
Survivable Cache Warming
Amazon DocumentDB manages its page cache in a separate process from the database so that the page cache can survive independently of the database. In the unlikely event of a database failure, the page cache remains in memory. This ensures that the buffer pool is warmed with the most current state when the database restarts.
Crash Recovery
Amazon DocumentDB is designed to recover from a crash almost instantaneously, and to continue serving your application data. Amazon DocumentDB performs crash recovery asynchronously on parallel threads so that your database is open and available almost immediately after a crash.
Durability, Consistency, and Isolation
Amazon DocumentDB uses a cloud-native shared storage service that replicates data six times across three Availability Zones to provide high levels of durability. Amazon DocumentDB does not rely on replicating data to multiple instances to achieve durability. Your cluster’s data is durable whether it contains a single instance or 15 instances.
Write Durability
Amazon DocumentDB uses a unique, distributed, fault-tolerant, self-healing storage system. This system replicates six copies (V=6) of your data across three AWS Availability Zones to provide high availability and durability. When writing data, Amazon DocumentDB ensures that all writes are durably recorded on a majority of nodes before acknowledging the write to the client. If you are running a three-node MongoDB replica set, a write concern of {w:3, j:true} is the closest comparable configuration to what Amazon DocumentDB provides by default.
Writes to an Amazon DocumentDB cluster must be processed by the cluster’s primary instance. Attempting to write to a replica results in an error. An acknowledged write from an Amazon DocumentDB primary instance is durable, and can't be rolled back. Amazon DocumentDB is highly durable by default and doesn't support a non-durable write option. You can't modify the durability level (that is, write concern).
Because storage and compute are separated in the Amazon DocumentDB architecture, a cluster with a single instance is highly durable. Durability is handled at the storage layer. As a result, an Amazon DocumentDB cluster with a single instance and one with three instances achieve the same level of durability. You can configure your cluster to your specific use case while still providing high durability for your data.
Writes to an Amazon DocumentDB cluster are atomic within a single document.
Writes to the primary Amazon DocumentDB instance are guaranteed not to block indefinitely.
Read Isolation
Reads from an Amazon DocumentDB instance only return data that is durable before the query begins. Reads never return data modified after the query begins execution nor are dirty reads possible under any circumstances.
Read Consistency
Data read from an Amazon DocumentDB cluster is durable and will not be rolled back. You can modify the read consistency for Amazon DocumentDB reads by specifying the read preference for the request or connection. Amazon DocumentDB does not support a non-durable read option.
Reads from an Amazon DocumentDB cluster’s primary instance are strongly consistent under normal operating conditions and have read-after-write consistency. If a failover event occurs between the write and subsequent read, the system can briefly return a read that is not strongly consistent. All reads from a read replica are eventually consistent and return the data in the same order, and often with less than 100 ms replica lag.
Amazon DocumentDB Read Preferences
Amazon DocumentDB supports setting a read preference option only when reading data from the cluster endpoint in replica set mode. Setting a read preference option affects how your MongoDB client or driver routes read requests to instances in your Amazon DocumentDB cluster. You can set read preference options for a specific query, or as a general option in your MongoDB driver. (Consult your client or driver’s documentation for instructions on how to set a read preference option.)
If your client or driver is not connecting to an Amazon DocumentDB cluster endpoint in replica set mode, the result of specifying a read preference is undefined.
Amazon DocumentDB does not support setting "tag sets" as a read preference.
Supported Read Preference Options
primary—Specifying a "primary" read preference helps ensure that all reads are routed to the cluster’s primary instance. If the primary instance is unavailable, the read operation fails. A "primary" read preference yields read-after-write consistency. A "primary" read preference is appropriate for use cases that prioritize read-after-write consistency over high availability and read scaling.
The following example specifies a "primary" read preference:
db.example.find().readPref('primary')
primaryPreferred—Specifying a "primaryPreferred" read preference routes reads to the primary instance under normal operation. If there is a primary failover, the client routes requests to a replica. A "primaryPreferred" read preference yields read-after-write consistency during normal operation, and eventually consistent reads during a failover event. A "primaryPreferred" read preference is appropriate for use cases that prioritize read-after-write consistency over read scaling, but still require high availability.
The following example specifies a "primaryPreferred" read preference:
db.example.find().readPref('primaryPreferred')
secondary—Specifying a "secondary" read preference ensures that reads are only routed to a replica, never the primary instance. If there are no replica instances in a cluster, the read request fails. A "secondary" read preference yields eventually consistent reads. A "secondary" read preference is appropriate for use cases that prioritize primary instance write throughput over high availability and read-after-write consistency.
The following example specifies a "secondary" read preference:
db.example.find().readPref('secondary')
secondaryPreferred—Specifying a "secondaryPreferred" read preference ensures that reads are routed to a read replica when one or more replicas are active. If there are no active replica instances in a cluster, the read request is routed to the primary instance. A "secondaryPreferred" read preference yields eventually consistent reads when the read is serviced by a read replica. It yields read-after-write consistency when the read is serviced by the primary instance (barring failover events). A "secondaryPreferred" read preference is appropriate for use cases that prioritize read scaling and high availability over read-after-write consistency.
The following example specifies a "secondaryPreferred" read preference:
db.example.find().readPref('secondaryPreferred')
nearest—Specifying a "nearest" read preference routes reads based solely on the measured latency between the client and all instances in the Amazon DocumentDB cluster. A "nearest" read preference yields eventually consistent reads when the read is serviced by a read replica. It yields read-after-write consistency when the read is serviced by the primary instance (barring failover events). A "nearest" read preference is appropriate for use cases that prioritize achieving the lowest possible read latency and high availability over read-after-write consistency and read scaling.
The following example specifies a "nearest" read preference:
db.example.find().readPref('nearest')
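The readPref() calls above use mongo shell syntax. As a hedged PyMongo equivalent, a read preference can be attached to a database or collection handle; the connection string is a placeholder, while SecondaryPreferred and Nearest are the driver's read-preference classes:

```python
from pymongo import MongoClient
from pymongo.read_preferences import Nearest, SecondaryPreferred

# Placeholder connection string; replica set mode is required for read
# preferences to influence routing.
client = MongoClient(
    "mongodb://username:password"
    "@sample-cluster.cluster-123456789012.us-east-1.docdb.amazonaws.com:27017/"
    "?replicaSet=rs0&retryWrites=false"
)
db = client["sample-database"]

# Route reads for this collection to a replica when one is available.
replica_reads = db.get_collection("example",
                                  read_preference=SecondaryPreferred())
print(replica_reads.find_one())

# Route reads to whichever instance responds with the lowest latency.
nearest_reads = db.get_collection("example", read_preference=Nearest())
print(nearest_reads.find_one())
```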
High Availability
Amazon DocumentDB supports highly available cluster configurations by using replicas as failover targets for the primary instance. If the primary instance fails, an Amazon DocumentDB replica is promoted as the new primary, with a brief interruption during which read and write requests made to the primary instance fail with an exception.
If your Amazon DocumentDB cluster doesn't include any replicas, the primary instance is re-created during a failure. However, promoting an Amazon DocumentDB replica is much faster than re-creating the primary instance. So we recommend that you create one or more Amazon DocumentDB replicas as failover targets.
Replicas that are intended for use as failover targets should be of the same instance class as the primary instance. They should be provisioned in different Availability Zones from the primary. You can control which replicas are preferred as failover targets. For best practices on configuring Amazon DocumentDB for high availability, see Understanding Amazon DocumentDB Cluster Fault Tolerance.
Scaling Reads
Amazon DocumentDB replicas are ideal for read scaling. They are fully dedicated to read operations on your cluster volume, that is, replicas do not process writes. Data replication happens within the cluster volume and not between instances. So each replica’s resources are dedicated to processing your queries, not replicating and writing data.
If your application needs more read capacity, you can add a replica to your cluster quickly (usually in less than ten minutes). If your read capacity requirements diminish, you can remove unneeded replicas. With Amazon DocumentDB replicas, you pay only for the read capacity that you need.
Amazon DocumentDB supports client-side read scaling through the use of Read Preference options. For more information, see Amazon DocumentDB Read Preferences. | https://docs.aws.amazon.com/documentdb/latest/developerguide/how-it-works.html | 2019-06-15T23:25:08 | CC-MAIN-2019-26 | 1560627997501.61 | [array(['images/how-it-works-01c.png',
'Cluster containing primary instance in Availability Zone a, writing to cluster volume for replicas in zones b and c.'],
dtype=object) ] | docs.aws.amazon.com |
SVL_QLOG
The SVL_QLOG view contains a log of all queries run against the database.
Amazon Redshift creates the SVL_QLOG view as a readable subset of information from the STL_QUERY table. Use this table to find the query ID for a recently run query or to see how long it took a query to complete.
SVL_QLOG is visible to all users. Superusers can see all rows; regular users can see only their own data. For more information, see Visibility of Data in System Tables and Views.
Table Columns
Sample Queries
The following example returns the query ID, execution time, and truncated query text for the five most recent database queries executed by the user with userid = 100.
select query, pid, elapsed, substring from svl_qlog
where userid = 100
order by starttime desc
limit 5;

 query  |  pid  | elapsed  | substring
--------+-------+----------+-----------------------------------------------
 187752 | 18921 | 18465685 | select query, elapsed, substring from svl_...
 204168 |  5117 |    59603 | insert into testtable values (100);
 187561 | 17046 |  1003052 | select * from pg_table_def where tablename...
 187549 | 17046 |  1108584 | select * from STV_WLM_SERVICE_CLASS_CONFIG
 187468 | 17046 |  5670661 | select * from pg_table_def where schemaname...
(5 rows)
The following example returns the SQL script name (LABEL column) and elapsed time for a query that was cancelled (aborted=1):
select query, elapsed, label from svl_qlog where aborted=1;

 query | elapsed |            label
-------+---------+--------------------------------
    16 | 6935292 | alltickittablesjoin.sql
(1 row)
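If you want to run the same check from application code rather than a SQL client, here is a minimal sketch using psycopg2; the cluster endpoint, database name, and credentials are placeholders:

```python
import psycopg2

# Placeholder connection details for an Amazon Redshift cluster.
conn = psycopg2.connect(
    host="examplecluster.abc123xyz789.us-west-2.redshift.amazonaws.com",
    port=5439,
    dbname="dev",
    user="awsuser",
    password="example-password",
)

with conn, conn.cursor() as cur:
    cur.execute(
        """
        select query, pid, elapsed, substring
        from svl_qlog
        where userid = %s
        order by starttime desc
        limit 5;
        """,
        (100,),
    )
    for query_id, pid, elapsed, text in cur.fetchall():
        print(query_id, pid, elapsed, text)
```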
Canvas Class
Definition
public : class Canvas : Panel
struct winrt::Windows::UI::Xaml::Controls::Canvas : Panel
public class Canvas : Panel
Public Class Canvas Inherits Panel
<Canvas ...> oneOrMoreUIElements </Canvas> -or- <Canvas .../>
- Inheritance
- Canvas
Child elements of a Canvas are positioned by specifying x and y coordinates. These coordinates are in pixels. The x and y coordinates are often specified by using the Canvas.Left and Canvas.Top attached properties. Canvas.Left specifies the object's distance from the left side of the containing Canvas (the x-coordinate), and Canvas.Top specifies the object's distance from the top of the containing Canvas (the y-coordinate).
XAML attached properties
Canvas.Left, Canvas.Top, Canvas.ZIndex
'Canvas layout panel'], dtype=object) ] | docs.microsoft.com |
Version 9.1.00 enhancements
This topic describes new or updated features relevant to BMC Service Level Management.
SLM data archiving
The archiving feature included in this release of BMC Remedy Service Level Management helps you to reduce the size of your production data sets and improves overall system performance. For example, searches run more quickly because the searches look at only the production data, not the archived data. By default, the archiving process is enabled and is run every 24 hours. However, you can configure the frequency of the archiving process or disable it.
For more information, see Service Level Management data archiving.
Support for hierarchical groups. For more information, see Hierarchical groups. | https://docs.bmc.com/docs/slm91/version-9-1-00-enhancements-613616278.html | 2019-06-15T23:52:25 | CC-MAIN-2019-26 | 1560627997501.61 | [] | docs.bmc.com |
Deploying Gateway Server in the Multiple Server, Single Management Group Scenario.
Note
To monitor computers that lie outside the management servers' trust boundary, see Agent and Agentless Monitoring and Operations Manager 2007 Supported Configurations.
Procedure Overview
Request certificates for any computer in the agent, gateway server, management server chain.
Import those certificates into the target computers by using the Operations Manager 2007 MOMCertImport.exe tool.
Note
For information about obtaining and importing a certificate by using an enterprise certification authority, or by using a stand-alone certification authority, see the related certificate deployment topics.
Note
The hosts file is located in the \Windows\system32\drivers\etc directory, and it contains directions for configuration.
Copy the Microsoft.EnterpriseManagement.GatewayApprovalTool.exe from the installation media \SupportTools directory to the Operations Manager 2007 installation directory, which is typically c:\Program Files\System Center Operations Manager 2007.
Registering the Gateway with the Management Group
Open a command prompt window, and navigate to the \Program Files\System Center Operations Manager 2007 directory.
Tip
An installation will fail when starting Windows Installer (for example, installing a gateway server by double-clicking MOMGateway.msi) on a computer running Windows Server 2008 if the local security policy User Account Control: Run all administrators in Admin Approval Mode is enabled (which is the default setting on Windows Server 2008).
On the final page of the installation wizard, click Finish.
Importing Certificates with the MOMCertImport.exe Tool
$failoverMS = Get-ManagementServer | where {$_.Name -eq 'computername.com'}
For help with the Set-ManagementServer command, type the following in the Command Shell window:
Get-help Set-ManagementServer -full. | https://docs.microsoft.com/en-us/previous-versions/system-center/operations-manager-2007-r2/bb432149(v=technet.10) | 2019-06-15T23:26:32 | CC-MAIN-2019-26 | 1560627997501.61 | [] | docs.microsoft.com |
Determines whether the animation values will be applied on the animated object after the animation finishes.
If true the animation will be played backwards.
Specifies how many times the animation should be played. Default is 1. iOS animations support fractional iterations (for example, 1.5). To repeat an animation infinitely, use Number.POSITIVE_INFINITY.
Return animation keyframes.
The animation name.
Defines animation options for the View.animate method. | https://docs.nativescript.org/api-reference/classes/_ui_animation_keyframe_animation_.keyframeanimationinfo | 2019-06-15T23:14:30 | CC-MAIN-2019-26 | 1560627997501.61 | [] | docs.nativescript.org |
There are two classes of Workflow users: normal users and administrators.
The first user created on a Workflow installation is automatically an administrator.
Use deis register with the Controller URL (supplied by your Deis administrator) to create a new account. After successful registration you will be logged in as the new user.
$ deis register
username: myuser
password:
password (confirm):
email: [email protected]
Registered myuser
Logged in as myuser
Important
The first user to register with Deis Workflow automatically becomes an administrator. Additional users who register will be ordinary users.
If you already have an account, use deis login to authenticate against the Deis Workflow API.
$ deis login
username: deis
password:
Logged in as deis
Log out of an existing controller session using deis logout.
$ deis logout
Logged out as deis
You can verify your client configuration by running deis whoami.
$ deis whoami
You are deis at
Note
Session and client configuration is stored in the ~/.deis/client.json file.
By default, new users are not allowed to register after an initial user does. That initial user becomes the first "admin" user. Others will now receive an error when trying to register, but when logged in, an admin user can register new users:
$ deis register --login=false --username=newuser --password=changeme123 [email protected]
After creating your first user, you may wish to change the registration mode for Deis Workflow.
Deis Workflow supports three registration modes: enabled (any visitor may register a new account), admin_only (only an existing administrator may register new users), and disabled (registration is turned off entirely).
To modify the registration mode for Workflow, you may add or modify the REGISTRATION_MODE environment variable for the controller component. If Deis Workflow is already running, use:
kubectl --namespace=deis patch deployments deis-controller -p '{"spec":{"template":{"spec":{"containers":[{"name":"deis-controller","env":[{"name":"REGISTRATION_MODE","value":"disabled"}]}]}}}}'
Modify the value portion to match the desired mode.
Kubernetes will automatically deploy a new ReplicaSet and corresponding Pod with the new environment variables set.
You can use the deis perms command to promote a user to an admin:
$ deis perms:create john --admin
Adding john to system administrators... done
View current admins:
$ deis perms:list --admin
=== Administrators
admin
john
Demote admins to normal users:
$ deis perms:delete john --admin
Removing john from system administrators... done
A user can change their own account's password like this:
$ deis auth:passwd
current password:
new password:
new password (confirm):
An administrator can change the password of another user's account like this:
$ deis auth:passwd --username=<username>
new password:
new password (confirm):
/Rebuild (devenv.exe)
Cleans and then builds the specified solution configuration.
devenv SolutionName /rebuild SolnConfigName [/project ProjName] [/projectconfig ProjConfigName]
Arguments.
Remarks.
Example
This example cleans and rebuilds the project CSharpWinApp, using the Debug project build configuration within the Debug solution configuration of MySolution.
devenv "C:\Documents and Settings\someuser\My Documents\Visual Studio\Projects\MySolution\MySolution.sln" /rebuild Debug /project "CSharpWinApp\CSharpWinApp.csproj" /projectconfig Debug
See Also
Reference
Devenv Command Line Switches | https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2008/s2h6xst1%28v%3Dvs.90%29 | 2019-06-15T23:15:53 | CC-MAIN-2019-26 | 1560627997501.61 | [] | docs.microsoft.com |
Customizing Row Details
The Row Details are represented by a DetailsPresenter control. By default, the DetailsPresenter control is styled according to the current theme. Its style includes properties such as Background, BorderBrush and others. Use the RowDetailsStyle property of the RadGridView, if you want to customize its appearance.
To learn how to do this take a look at the Styling the Row Details topic. | https://docs.telerik.com/devtools/silverlight/controls/radgridview/row-details/customizing-the-row-details | 2019-06-15T23:39:24 | CC-MAIN-2019-26 | 1560627997501.61 | [array(['images/RadGridView_RowDetails_6.png',
'Telerik Silverlight DataGrid RowDetails 6'], dtype=object)] | docs.telerik.com |
Quickstart
This quick example demonstrates the setup of an Eventide Postgres project, configuring the message store database connection, and basic reading and writing of a message to a stream in the Postgres message store.
Software Prerequisites
- Ruby (minimum version: 2.4)
- Postgres (minimum version: 9.5)
- Git (minimum version: 2)
- GCC (required for installing the PG gem)
Setup
The quickstart demo code is hosted on GitHub at:
Clone the Quickstart Repository
From the command line, run:
git clone [email protected]:eventide-examples/quickstart.git
Change directory to the project's directory:
cd quickstart
Install the Gems
All examples of components built using Eventide that are produced by the Eventide Project's team install gem dependencies using Bundler's standalone mode:
Rather than install the Eventide toolkit into the system-wide registry, we recommend that you install the gems into the directory structure of this project.
This example project includes a script file that will install the gems correctly.
To install the gems, run at the command line:
./install-gems.sh
This installs the gems in a directory named
./gems, and generates the setup script that is already used by the example code here to load the gems in standalone mode.
Start Postgres
If you've installed Postgres through Homebrew on Mac:
launchctl load -w ~/Library/LaunchAgents/homebrew.mxcl.postgresql.plist
If you don't want to start it as a daemon, open a new terminal window and run:
postgres -D /usr/local/var/postgres
Note that closing the terminal window will cause the database to shut down. If you've started the database with postgres -D, keep the terminal window open for as long as you need to use the database.
On Linux:
sudo systemctl start postgresql
Note for Linux Users
Postgres installations on Linux can vary between OS distributions as well as the package manager used.
When Postgres is installed on a Linux machine, it is often configured by default for security considerations that reflect server operations. Make sure that you are aware of the runtime requirements of Postgres on your environment if you are running Linux.
In particular, many Postgres setups on Linux require passwords for all database connections over TCP. This will either need to be disabled, or passwords will have to be configured for the role used during the workshop exercises.
Assure the Postgres Connection Settings
Postgres connection settings can be found in
settings/message_store_postgres.json
The provided settings should work for the majority of development environment setups. If you run Postgres on your machine with access control enabled, on a non-default port, etc, you can adjust the settings for your installation.
Create the Message Store Postgres Database
Note: The message_store_postgres.json settings file does not configure the connection used for any database administrative tasks, including creating the message store schema or printing reports. The administrative connection is controlled by the facilities provided by Postgres itself. For more details, see:
With Postgres already running, from the command line, run:
bundle exec evt-pg-create-db
For more background on the Postgres message store database, you can explore the SQL scripts at:
Test the Database Connection
The quickstart project includes a Ruby file that creates a Session object, and executes an inert SQL command to test the connection.
If the connection is made, the script will print: "Connected to the database"
From the command line, run:
ruby demos/connect_to_database.rb
Run the Write and Read Demo
The project's demos/write_and_read.rb script file defines a message class, constructs a message, assigns data to the message object's attribute, writes that message to the message store, and then reads it from the message store and prints it.
To run this demo, from the command line, run:
ruby demos/write_and_read.rb
List the Messages in the Message Store Database
Now that a message has been added to the message store, you can list the contents of the message store using a command line tool that is included with the Eventide toolkit.
From the command line, run:
bundle exec evt-pg-list-messages
Clear the Messages from the Message Store Database
There is no tool purpose-built for removing messages from the message store. However, by recreating the message store database, you can effect the same outcome.
You can recreate the message store database using a command line tool that is included with the Eventide toolkit.
From the command line, run:
bundle exec evt-pg-recreate-db | http://docs.eventide-project.org/examples/quickstart.html | 2019-06-15T23:38:43 | CC-MAIN-2019-26 | 1560627997501.61 | [] | docs.eventide-project.org |
-= Operator (Visual Basic)
Subtracts the value of an expression from the value of a variable or property and assigns the result to the variable or property.
variableorproperty -= expression
Parts
variableorproperty
Required. Any numeric variable or property.
expression
Required. Any numeric expression.
Example
The following example uses the -= operator to subtract one Integer variable from another and assign the result to the latter variable.
Dim var1 As Integer = 10 Dim var2 As Integer = 3 var1 -= var2 ' The value of var1 is now 7.
See Also
Concepts
Reference
- Operator (Visual Basic)
Arithmetic Operators (Visual Basic)
Operator Precedence in Visual Basic
Operators Listed by Functionality | https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2008/4wdszwd9%28v%3Dvs.90%29 | 2019-06-15T23:33:11 | CC-MAIN-2019-26 | 1560627997501.61 | [] | docs.microsoft.com |
CREDIT: DRG
Please note this content is not available for licensing from Barcroft Media Ltd.
With unprecedented access, ‘Meet The Mormons’ follows a young British Mormon as he gives up two years of his life and goes off to convert the people of Leeds in a rite of passage expected since birth. For 20-year-old Josh Field from Sussex, it’s an emotional journey full of sacrifice. For two whole years he must surrender entirely to church rules, he’s banned from seeing his family and friends and he has to be in the presence of a fellow missionary at all times. | http://docs.barcroft.tv/meet-the-mormons-sussex-christian-missionaries | 2019-06-15T23:23:10 | CC-MAIN-2019-26 | 1560627997501.61 | [] | docs.barcroft.tv |