content
stringlengths 0
557k
| url
stringlengths 16
1.78k
| timestamp
timestamp[ms] | dump
stringlengths 9
15
| segment
stringlengths 13
17
| image_urls
stringlengths 2
55.5k
| netloc
stringlengths 7
77
|
---|---|---|---|---|---|---|
Kore web framework
Kore is an easy to use web platform for writing scalable web APIs in C. Its primary goals are security, scalability and allowing rapid development and deployment of such APIs.
Because of this Kore is an ideal candidate for building robust, scalable and secure web things.
This Gitbook documentation is for the 3.2.0 release.
Features
- Supports SNI.
- Supports HTTP/1.1.
- Websocket support.
- TLS enabled by default.
- Optional background tasks.
- Built-in parameter validation.
- Fully privilege separated by default.
- Optional asynchronous PostgreSQL support.
-.
Architecture overview
| https://docs.kore.io/3.2.0/ | 2021-09-16T18:31:56 | CC-MAIN-2021-39 | 1631780053717.37 | [array(['arch.png', None], dtype=object)] | docs.kore.io |
How-tos / Guides
- How to Remove "Downloads" from My Account
- How to set featured products
- How to add/set a Favicon
- How Related Products works
- How to add Product variations
- How to edit shop header with UX Builder
- How to change homepage
- How to Disable Admin bar for customers
- Getting UX Banner video to work in all browsers
- How to use Flatsome Studio
- How to disable reviews
- Ninja Forms documentation
- Need Customisation
- How to open images and galleries in a lightbox
- How to register the theme (with token)
- How to create an admin login for the support team
- How to create a Mega Menu (full customizable dropdown)
- How to create custom 404 page content
- How to reorder tabs on the single product page
- Problems installing theme | https://docs.uxthemes.com/category/42-how-tos-guides/2?sort=popularity | 2021-09-16T19:16:45 | CC-MAIN-2021-39 | 1631780053717.37 | [] | docs.uxthemes.com |
Cluster Execution¶
Snakemake can make use of cluster engines that support shell scripts and have access to a common filesystem, (e.g. the Sun Grid Engine). In this case, Snakemake simply needs to be given a submit command that accepts a shell script as first positional argument:
$ snakemake --cluster qsub --jobs 32
Here,
--jobs denotes the number of jobs submitted to the cluster at the same time (here 32).
The cluster command can be decorated with job specific information, e.g.
$ snakemake --cluster "qsub {threads}"
Thereby, all keywords of a rule are allowed (e.g. rulename, params, input, output, threads, priority, resources, …).
For example, you could encode the expected running time in minutes into a resource
runtime_min:
rule: input: ... output: ... resources: runtime_min=240 shell: ...
and forward it to the cluster scheduler:
$ snakemake --cluster "qsub --runtime {resources.runtime}"
In order to avoid specifying
runtime_min for each rule, you can make use of the
--default-resources flag, see
snakemake --help.
If your cluster system supports DRMAA, Snakemake can make use of that to increase the control over jobs.
E.g. jobs can be cancelled upon pressing
Ctrl+C, which is not possible with the generic
--cluster support.
With DRMAA, no
qsub command needs to be provided, but system specific arguments can still be given as a string, e.g.
$ snakemake --drmaa " -q username" -j 32
Note that the string has to contain a leading whitespace. Else, the arguments will be interpreted as part of the normal Snakemake arguments, and execution will fail.
Adapting to a specific cluster can involve quite a lot of options. It is therefore a good idea to setup a a profile.. name, rulename, threads, input, output, params etc.) as JSON inside the job script (for group jobs, the rulename will be “GROUP”, otherwise it will be the same as the job name). For convenience, there exists a parser function snakemake.utils.read_job_properties that can be used to access the properties. The following shows an example job submission wrapper:
#!python #!)) | https://snakemake.readthedocs.io/en/stable/executing/cluster.html | 2021-09-16T18:51:20 | CC-MAIN-2021-39 | 1631780053717.37 | [] | snakemake.readthedocs.io |
Device Restore over USB
This experimental tool works like
particle update in the Particle CLI, except it works from your browser (no software install required) and it can upgrade or downgrade to different versions of Device OS. It works with both Gen 2 and Gen 3 devices.
There is also a version that implements Device Restore over JTAG that works with the Particle debugger. It can restore devices that do not have a working bootloader (Dim D7 on Gen 2 devices) or have been completely erased.
Important caveats:
- This tool is experimental, and may not work properly.
- It updates Device OS, the bootloader, soft device (on Gen 3), and Tinker (or Tracker Edge for Tracker devices) using DFU mode (blinking yellow).
-.
- If you get an USB device not selected error on Windows, you may have a Windows device driver issue that is hard, but not impossible, to fix.
- If you get an USB device not selected error on Linux, you may need a udev rule. Download 99-particle.rules and copy it to /etc/udev/rules.d and reboot.
- It does not work on iOS (iPhone or iPad) as the hardware does not support USB OTG.
Setup
- Connect your Particle device by USB to your computer.
- Or, for some Android phones, use an USB OTG adapter between the Particle device and your phone.
Restore!
Special Notes for Downgrading
Boron LTE BRN402 and B Series SoM B402
If you are downgrading a Boron LTE (BRN402) or B Series SoM B402 from Device OS 2.0.0 or later, to 1.5.1 or earlier, you must first install 1.5.2, allow the device to boot and connect to cellular before downgrading again to an earlier version. The reason is that 2.0.0 and later use a higher baud rate for the cellular modem, and on the SARA-R410M only, this setting is persistent. Older versions of Device OS assume the modem is using the default of 115200 and will fail to communicate with the modem since it will be using 460800. Device OS 1.5.2 uses 115200, however it knows it can be 460800 and will try resetting the baud rate if it can't communicate with the modem. | https://docs.particle.io/tools/device-programming/device-restore-usb/ | 2021-09-16T18:55:54 | CC-MAIN-2021-39 | 1631780053717.37 | [] | docs.particle.io |
Create a Master and Child Page¶
Note
Skuid is assessing replacement language for “master / child” relationships. Because this language occurs throughout the application, expect to see new terminology in the Skuid winter release.
Have you created a great page with an effective layout, a branded header, and solidly working models? Re-use that page—without having to rebuild it—by designating it as a master page. Think of master pages as templates for child pages, allowing builders to create a consistent, branded experience. Use master pages to craft layouts and add resources—including custom labels or JavaScript—that will display and be accessible within any child pages.
Let’s see how it works by creating a master page, and then using it to create a child page.
Create a Master Page¶
First, create a page and configure it.
Build the master [[]]¶
Name: Branded_master.
API Version: Select the API version for the page.
Note
The API version must be the same for the master page and for the subsequent child pages.
Starting Point: Start with a blank page.
Note
It is possible to use any of the starting points and then designate the page as a master page.
ClickCreate Page.
The new page opens.
- Click on the page’s header within the canvas (just below the App Composer Palette).
- In the Basic tab of the Properties pane, click Available as Master Page.
- Click Save.
Populate the Master Page¶
Now that the page is designated as a master page, it’s important to do some design thinking. What elements do you want to have on every subsequent child page? For this example, let’s give the master page some branding.
Add content to the header [[]]¶
Drag and drop a Responsive Grid into the Header.
Click the Responsive Grid, then click Add Division twice. Now there are three divisions in the grid.
Drag an Image component into the first division of the grid. Use this to display a logo.
Drag a Navigation component into the page and drop it into the second division of the grid.
- Add navigation items that link to other pages.
Drag a Search component into the page and drop it into the third division of the grid.
Click Save and then Preview.
If necessary, adjust the Responsive Grid’s divisions using Division Properties > Flex Ratio.
Note
A good place to start? Set the logo division to 1, the navigation division to 4, and the search division to 2—then modify as needed.
Add a Wrapper [[]]¶
Drag and drop a Wrapper component onto the page between the header and footer.
Customize the Wrapper by adding a colored background:
Note
You can also select an image to use as the background.
- Click Styles and select Background. Edit the following:
- Background Type: Color
- Color: Select a color.
Click Save.
This background color will display on all child pages.
Add a Page Region [[]]¶
If you used the master page created above to create a child page, that child page would include the Navigation and the Wrapper components. But you would not be able to add any other components to the child page. Why?
Think of the components added to the master page as being locked (or frozen). You can add, delete, or edit them within the master page—but they are untouchable from within the child page.
To add other components to child pages, include a Page Region in the master page. This region is not configurable within the master page—you cannot add components to it. But it is fully configurable from the master’s child pages.
Once a page is designated a master, the Page Region component displays as an additional entry under the Advanced Component group in the App Elements pane.
Add a Page Region into the page within the Wrapper, but below the Navigation component. Adding the region within the wrapper means that the wrapper’s customized color (or image) will provide a background for components added to the page region in the child page.
Note
It’s possible to add more than one Page Region to a master page.
Click Save.
Now you have a master page that provides top-level navigation, a branded background, and a copyright notice. Let’s use that master to create a page that lists accounts.
Create a Child Page from the Master¶
Time to create a child page, in this case, one that displays account information.
Note
- Child pages must be created from scratch.
- They cannot become a master page for other child pages.
Create the page [[]]¶
Create a new page and configure it:
Name: Name the page Branded_childAccount.
API Version: Select the API version for the page.
Note
The API version must be the same as for the master page.
Starting Point: Use a master page.
- Master Page: Start typing Branded_master in the field. Skuid begins to autofill with files that match.
ClickCreate Page.
Preview the page.
Even though you have just created the page and have added nothing, the Navigation component displays at the top, with a background in the color you selected. This is because all of those configurations were inherited from the master page—but the page needs account content added.
Populate the page [[]]¶
- Create at least one model on a data source object that collects information on accounts.
- Drag and drop a Table component into the Page Region.
- Model: The model created in Step 1.
- Click Add Fields to include a few fields on the table, such as the account’s name, industry, potential revenue, possibly a description field.
- Click Save and then Preview.
The Result¶
The page now has a navigation bar at the top, a Table listing accounts beneath the navigation, a colored background for the body of the page, and a copyright notice at the bottom. The page-specific information (the accounts Table) has been added to the content from the master page.
Managing Master and Child pages¶
- Need to edit the components or elements that display on every child page? These are managed from the master page; open that page and edit the items.
- After making changes to a master page, be sure to refresh any open child pages to see those changes.
- Need to edit page specific content on child pages? Open the individual child page and edit items in the Page Region.
- You cannot convert an established page into a child page because child pages have a different XML structure. It is possible to do this manually by copying and pasting portions of the master page’s XML into the child page’s XML. | https://docs.skuid.com/latest/v1/en/skuid/pages/master-child-pages.html | 2021-09-16T17:57:27 | CC-MAIN-2021-39 | 1631780053717.37 | [] | docs.skuid.com |
Service Mesh (Beta)
Page last updated:
This topic describes service mesh for Cloud Foundry Application Runtime (CFAR).
To deploy service mesh, see Deploying Service Mesh (Beta).
Overview
CFAR includes an optional beta routing plane that uses a service mesh. A service mesh provides traffic management, security, and observability for microservices. For more information, see What is a service mesh? in the Istio documentation.
Service mesh in CFAR uses Istio Pilot and Envoy. The Cloud Foundry
istio-release packages these components into a BOSH release. For more information, see:
- It does not have feature parity with the existing routing plane in CFAR.
- It is for deployments with fewer than 20,000 routes. At greater scale, it can impact core platform functions.
- The control plane is not highly available and registration of new routes can be delayed during an upgrade.
- The domain for routes is
*.mesh.YOUR-APPS-DOMAINand is not configurable.
Component VMs
The following table describes each component VM deployed as part of service mesh in CFAR, along with their function.Create a pull request or raise an issue on the source for this page in GitHub | https://docs.cloudfoundry.org/adminguide/service-mesh.html | 2020-01-17T21:35:25 | CC-MAIN-2020-05 | 1579250591234.15 | [array(['images/service-mesh.png',
'A load balancer receives requests at *.YOUR-APPS-DOMAIN and forwards them to the Gorouter. The Gorouter then forwards them to an app. A separate load balancer receives requests at *.mesh.YOUR-APPS-DOMAIN and forwards them to the istio-router. The istio-router then forwards them to an app'],
dtype=object) ] | docs.cloudfoundry.org |
quote EntityType
Applies To: Dynamics 365 (online), Dynamics 365 (on-premises), Dynamics CRM 2016, Dynamics CRM Online
Description: Formal offer for products and/or services, proposed at specific prices and related payment terms, which is sent to a prospective customer.
Entity Set path:[organization URI]/api/data/v8.2/quotes
Base Type: crmbaseentity EntityType
Display Name: Quote
Primary Key: quoteid
Primary Name Attribute: name
Properties
Lookup Properties
Single-valued navigation properties
Collection-valued navigation properties
Operations bound to quote
Operations using the quote quote
The following operations are bound to the quote entity type.
Operations using the quote entity type.
The following operations use the quote | https://docs.microsoft.com/en-us/previous-versions/dynamicscrm-2016/developers-guide/mt607887%28v%3Dcrm.8%29 | 2020-01-17T22:25:57 | CC-MAIN-2020-05 | 1579250591234.15 | [] | docs.microsoft.com |
ActiveRegions and Showing ToolTips
The ActiveRegion property corresponds to the bounds of a RadChart's item. It is useful to show tooltips and it also contains an additional string property named Url. ActiveRegion appears for many of the UI elements in the chart including labels, chart series, chart series items, chart title, legend, axis items and the empty series message.
The ActiveRegion property has two significant for WinForms properties:
ToolTip: A text description of an item's area that is displayed when the mouse hovers over it.
Url: A string property which can be used for an additional information or in a scenario where you would want to open an external website
Set ToolTips
To set a ToolTip, select an item that you are interested in. Then navigate to its ActiveRegion property. In the ToolTip enter descriptive text.
Below is the ActiveRegion for a chart series item. If the mouse hovers over that area in the chart the tool tip "Sales - January" will display.
| https://docs.telerik.com/devtools/winforms/controls/chart/advanced-topics/activeregions-and-showing-tooltips | 2020-01-17T21:52:13 | CC-MAIN-2020-05 | 1579250591234.15 | [array(['images/chart-advanced-topics-active-regions-and-showing-tooltips001.png',
'chart-advanced-topics-active-regions-and-showing-tooltips 001'],
dtype=object) ] | docs.telerik.com |
See the Message Reference for allowed message types and combinations. See also: Message and Composer Types and Composer Overview.
You can configure the default appearance of your In-App Messages in Settings » Configuration » In-App Message Design. This includes button text and message color, screen position, and more.
Click Create, and select A/B Test.
(Optional) Add a name and/or flag as a test.
- Click in the header.
- Enter the name.
- Enable Test.
- Click outside the box to close it.
See also: Name a Message and Flag a Message as a Test.
Define your A/B test Audience. This is the entire set of users for the A/B test, which includes the control group (users who receive no message) and users who will receive different messages, according to the variants that you define.
Set the. Only channels available for A/B testing are listed..
Click Variants in the header to move on.
Set up your message Variants.
Enter a descriptive Test Name if you didn't enter a name in the Audience step.
Select the Number of Variants for your test, up to 26. You can add or delete variants in later steps.
Use the slider to adjust the percentages of your target audience vs. control group. By default we send your test to 80% of your audience, keeping a control group of 20%
Click Content in the header to move on.
Add your message Content. The left side of the screen has a tab for each variant. Start with variant A. Message content options may change based on the channels and message types in your message.
Choose a message type, then click Add Content. The message type selection screen only appears if you selected app channels.
Enter a Variant Name..
Configure additional variants. You can add or delete variants by clicking and .
Select a variant using the lettered tabs.
Copy content from an existing variant, or start with a blank message. If copying content, select a variant, then click Continue.
Fill out the message variant including text, action, and options — like you did for variant A.
Click Delivery in the header to move on.
Set Delivery options for your A/B test.
Select a delivery type.
Send nowSend the messages immediately after review.
ScheduleChoose an exact time of day to send the message. Enter a date in YYYY-MM-DD format and select the time and time zone.
Set Optional Features.
Click Review in the header to move on.
Review your A/B test.
Select a variant in the Content section or above the preview.
Review the device preview and message summary.Click the arrows to page through the various previews. The channel and display type dynamically update in the dropdown menu above. You can also select a preview directly from the dropdown menu.
If you would like to make changes, click the associated step in the header, make your changes, then return to Review.
Click Send Message or Schedule Message. | https://docs.airship.com/tutorials/orchestration/a-b-test/ | 2020-01-17T22:15:22 | CC-MAIN-2020-05 | 1579250591234.15 | [] | docs.airship.com |
Utility - exportJobDataByPropertyValue
Utility - exportJobDataByPropertyValue
Description :
This command exports the job data that meets the specified condition (property with a specific value) into a file in CSV format.
The condition may contain any property of any type except of DeployURLType or other types that are not supported by the execution task's override job properties mechanism.
The data includes jobs defined with this condition and job runs that were executed with this condition (by the override job properties of execution task mechanism).
Use case example: Suppose you used the Property Dictionary to add a property called ticketId to the SnapshotJob property class. (Property Dicitonary => Built-in Property Classes => Job => Snapshot Job). Each time you ran a Snapshsot Job, you provided a value for the ticketId property. You can now use the exportJobDataByPropertyValue command to retrieve data about all the jobs that were run where ticketId was set to a specific value.
Note: This command supports properties created for all job types (that is, properties created through the Property Dictionary in the Built-in Property Classes > Job property class), but does not support properties created for an individual job type (that is, in any of the property classes under the Job property class).
Return type : java.lang.Void
Command Input :
Example
The following example shows how to export the job's data that meets the specified condition (contains a property by the name ticketId with the value "111") into a file called jobData.csv.
Script
Utility exportJobDataByPropertyValue ticketId equals 111 C:/tmp/jobData.csv | https://docs.bmc.com/docs/blcli/87/utility-exportjobdatabypropertyvalue-595585346.html | 2020-01-17T23:13:58 | CC-MAIN-2020-05 | 1579250591234.15 | [] | docs.bmc.com |
.
Our policy regarding API changes and release versioning
The driver versions follow semantic versioning.
Major releases
Example:
2.0.0
Regarding
major releases, any public component of the driver might be changed or removed although we will always try to avoid introducing significant changes that would make it significantly harder for an application to upgrade to the new
major version.
Minor releases
Example:
2.7:
2.1.1
These releases only contain bug fixes so they will never contain changes to the driver’s public API.
2.7
A lot of new methods were added to some interfaces on this release, mostly to
ISession,
IMapper and
ICluster. Note that
IDseCluster inherits from
ICluster and
IDseSession inherits from
ISession so these are affected as well.)
2.6.
2.3
The
DowngradingConsistencyRetryPolicy is now deprecated. It will be removed in the following major version of the
driver.
The main motivation is the agreement that this policy, and
should they decide that the downgrading behavior of
DowngradingConsistencyRetryPolicy remains a good fit for certain
use cases, they will now have to implement this logic themselves, either at application level, or alternatively at
driver level, by rolling out their own downgrading retry policy. | https://docs.datastax.com/en/developer/csharp-dse-driver/2.8/upgrade-guide/ | 2020-01-17T22:38:24 | CC-MAIN-2020-05 | 1579250591234.15 | [] | docs.datastax.com |
Graduate Catalog
Graduate School of Education and Counseling
General Information
Degrees Offered | Admissions | Tuition and Fees | Faculty and Staff | Policies and Procedures
Counseling Psychology
Department Home | Courses
Art Therapy | Marriage, Couple, and Family Therapy | Professional Mental Health Counseling | Professional Mental Health Counseling—Specialization in Addictions | School Psychology | Eating Disorders Certificate | Ecopsychology Certificate
Educational Leadership
Department Home | Courses
Doctor of Education in Leadership | Educational Administration | School Counseling | Student Affairs Administration
Teacher Education
Department Home | Courses
Elementary—Multiple Subjects | Secondary | Educational Studies |
Curriculum and Instruction | ESOL | Reading Intervention | Special Education | Teacher Leadership for Equity and Social Justice Certificate | Certificate in the Teaching of Writing
Other
- Convocation
- Oregon Writing Project (OWP) Courses
- Writing and Creative Media (WCM) Courses (offered through the Northwest Writing Institute)
- Equity Certificate for School Leaders
Other Catalogs
Graduate Catalog Archive | Current Undergraduate Catalog. | https://docs.lclark.edu/graduate/ | 2020-01-17T21:49:12 | CC-MAIN-2020-05 | 1579250591234.15 | [] | docs.lclark.edu |
Your application code reads and writes datapoint and property data from and to your device interface as part of its algorithm. For example, your application may sample a temperature sensor connected to physical I/O lines, process the data received, and provide the processed data to the network using an output datapoint with a SNVT_temp_f type, in this example implemented with an Open Loop Sensor block.
Example
The following example implements an Open Loop Sensor block with a SNVT_temp_f datapoint and adds the optional nciGainproperty.
#include "IzotDev.h" SFPTopenLoopSensor(sensor, SNVT_temp_f) sensor; //@IzoT block implement(nciGain) void sample_io(void) { float current = get_sensor_data(); current *= sensor.nciGain->multiplier; current /= sensor.nciGain->divider; sensor.nvoValue.data = current; IzotPropagate(sensor.nvoValue); } int main(void) { IzotInit(); while(1) { IzotEventPump(); your_algorithm();} return 0; }
You can access block datapoint members through the member name as defined within the profile, e.g., sensor.nvoValue. The datapoint is implemented within the block datapoint member, and can be accessed with the data attribute, e.g., sensor.nvoValue.data = 1234.
Other attributes provided with each block datapoint member include the global_index attribute and properties which apply to the datapoint member.
You can access block property members also through the member names defined in the profile (e.g., sensor.nciGain). This produces a property pointer instead of a datapoint member. Properties are accessed through pointers because properties may be implemented in different ways, which can require grouping of property values separately from the block.
Even though properties can be implemented as property datapoints (configuration network variables), a property does not generally have attributes such as global_index. | https://docs.adestotech.com/display/DrftIzot/Accessing+the+IzoT+Device+Interface | 2021-05-06T11:54:06 | CC-MAIN-2021-21 | 1620243988753.97 | [] | docs.adestotech.com |
GeoDistanceDropdown creates a location search based dropdown UI component that is connected to a database field. It is used for distance based filtering.
Example uses:
- finding restaurants in walking distance from your location.
- discovering things to do near a landmark.
Usage
Basic Usage
<GeoDistanceDropdown componentId="LocationUI" dataField="location" data={[ { distance: 20, label: '< 20 miles' }, { distance: 50, label: '< 50 miles' }, { distance: 100, label: '< 100 miles' }, ]} />
Usage With All Props
<GeoDistanceDropdown componentId="locationUI" dataField="location" title="Location Dropdown Selector" data={ [ { "distance": 20, "label": "< 20 miles" }, { "distance": 50, "label": "< 50 miles" }, { "distance": 100, "label": "< 100 miles" }, ] } defaultValue={{ location: "London, UK" label: "< 100 miles" }} countries={["uk"]} placeholder="Select a distance range.." unit="mi" autoLocation={true} showFilter={true} filterLabel="Location".
- data
Object Arraycollection of UI
labelswith associated
distancevalue.
- title
String or JSX[optional] title of the component to be shown in the UI.
- defaultValue
Object[optional] pre-select values of the search query with
labeland
locationkeys.
- placeholder
String[optional] set the placeholder to show in the location search box, useful when no option is
defaultValue.
- value
Object[optional] controls the current value of the component. It sets the item from the list & also sets the location
- unit
String[optional] unit for distance measurement, uses
mi(for miles) by default. Distance units can be specified from the following:
- autoLocation
Boolean[optional] when enabled, preset the user's current location in the location search box. Defaults to
true.
- from the dropdown. This is useful for sharing URLs with the component state. Defaults to
false.
- countries
String Array[optional] restricts predictions to specified country (ISO 3166-1 Alpha-2 country code, case insensitive). For example, 'us', 'in', or 'au'. You can provide an array of up to five country code strings.
- serviceOptions
Object[optional] allows to add more options to AutoCompletionRequest, available from Google Places library
Demo
Styles
GeoDistanceDropdown component supports
innerClass prop with the following keys:
title
input
list
icon
count
Extending
GeoDistanceDropdown component can be extended to
- customize the look and feel with
className,
style,
- update the underlying DB query with
customQuery,
- connect with external interfaces using
beforeValueChange,
onValueChangeand
onQueryChange.
- specify how options should be filtered or updated using
reactprop.
add the following synthetic events to the underlying
inputelement:
- onBlur
- onFocus
- onKeyPress
- onKeyDown
- onKeyUp
- autoFocus
<GeoDistanceDropdown ... className="custom-class" style={{"paddingBottom": "10px"}} customQuery={ function(location, distance, props) { return { // query in the format of Elasticsearch Query DSL geo_distance: { distance: distance + props.unit, location_dataField: location } } } } GeoDistanceDropdown component.
- customQuery
Functiontakes location, distance and props as parameters and returns the data query to be applied to the component, as defined in Elasticsearch Query DSL.
Note:customQuery is called on value changes in the GeoDistanceDropdown component as long as the component is a part of
reactdependency of at least one other component.
- within a specific location GeoDistanceDropdown's options. Read more about it here.
GeoDistance Dropdown with all the default props | https://docs.appbase.io/docs/reactivesearch/v3/map/geodistancedropdown/ | 2021-05-06T13:16:11 | CC-MAIN-2021-21 | 1620243988753.97 | [array(['https://i.imgur.com/zRnIUWY.png', 'Image to be displayed'],
dtype=object) ] | docs.appbase.io |
DataStore¶
A DataStore is used to access and store geospatial data in a range of vector formats including shapefiles, GML files, databases, Web Feature Servers, and other formats.
References:
Create¶
It is not recommended to create a DataStore by hand; instead we make use of a FactoryFinder which will locate the correct plugin supporting the requested format.
We have three factory finders to choose from:
DataAccessFinder is used to acquire a DataAccess
DataAccessFinder will find out both those DataStores found by DataStoreFinder and the ones that only implement DataAccess but not DataStore.
DataStoreFinder is used to acquire a DataStore
FileDataStoreFinder is limited to working with FileDataStoreFactorySpi where a clear file extension is available
Depending on if we are connecting to existing content, or asking for a new file to be created we will handle things a little differently.
Access
We are just going to focus on the most common case right now - getting access to an existing Shapefile. Rest assured that what you learn here will work just fine when talking to a real database like PostGIS or a web service like WFS.
To create our shapefile we are going to use the DataStoreFinder utility class. Here is what that looks like:
File file = new File("example.shp"); Map map = new HashMap(); map.put( "url", file.toURL() ); DataStore dataStore = DataStoreFinder.getDataStore(map );
Create
To create a new shapefile on disk we are going to have to go one level deeper and ask FileDataStoreFinder for a factory matching the
shpextension.:
FileDataStoreFactorySpi factory = FileDataStoreFinder.getDataStoreFactory("shp"); File file = new File("my.shp"); Map map = Collections.singletonMap( "url", file.toURL() ); DataStore myData = factory.createNewDataStore( map ); FeatureType featureType = DataUtilities.createType( "my", "geom:Point,name:String,age:Integer,description:String" ); myData.createSchema( featureType );
Factory
We can repeat the example of creating a new shapefile, just using DataStoreFinder to list the available implementations and then see which one is willing to create shapefile.
This time you need to do the work by hand.:
File file = new File("my.shp"); Map map = new HashMap(); map.put( "url", file.toURL()); for( Iterator i=DataStoreFinder.getAvailableDataStores(); i.hasNext(); ){ DataStoreFactorySpi factory = (DataStoreFactorySpi) i.next(); try { if (factory.canProcess(params)) { return fac.createNewDataStore(params); } } catch( Throwable warning ){ System.err( factory.getDisplayName() + " failed:"+warning ); } }
As you see, the logic merely returns a DataStore from the first factory that can create one for us.
These examples bring up a couple of questions:
Q: Why use a DataStoreFinder
We are using a FactoryFinder (rather than just saying new ShapefileDataStore ) so GeoTools can have a look at your specific configuration and choose the right implementation for the job. Store implementation might change in time, accessing the stores via the factory ensures your client code does not need to be changed when this happens.
Q: What do we put it the Map?
That’s a hard question which forces us to read the documentation:
- doc
shape (user guide)
ShapefileDataStoreFactory (javadocs)
This information is also available at runtime via the DataStoreFactorySpi,getParameterInfo() method. You can use this information to create a user interface in a dynamic application.
Catalog¶
If you are working with GeoServer or uDig you have access to some great facilities for defining your DataStore before you create it. Think of it as a “just in time” or “lazy” DataStore.:
Catalog catalog = new DefaultCatalog(); ServiceFinder finder = new DefaultServiceFactory( catalog ); File file = new File("example.shp"); Service service = finder.acquire( file.toURI() ); // Getting information about the Shapefile (BEFORE making the DataStore) IServiceInfo info = service.getInfo( new NullProgressListener() ); String name = info.getName(); String title = info.getTitle().toString(); // Making the DataStore DataStore dataStore = service.resolve( DataStore.class, new NullProgressListener() );
The idea works similar to a “file handle”, you can make a
IService “handle” that represents your DataStore (and you can ask the handle several fun questions like “what is your name”) before you actually create the beast.
This separation is really important in an application expecting to talk about thousands of sources of data at a time. Just because your application wants to know about a source of data does not always mean you need a DataStore yet.
A Catalog works with two important bits of information:
URI - is the unique name of the data
Map - is used to create a DataStore just in time using DataStoreFactoryFinder
The nice thing is that for many easy cases the catalog is smart enough to figure out the Map just from the URI.
Careful¶
Don’t Duplicate¶
DataStore’s represent a live connection to your file or database:
Don’t create and throw away DataStores, or make Duplicates
DataStores are BIG heavy-weight objects - many of them juggle database connection or load up spatial indexes on your behalf.
Please keep your DataStore around for reuse
Manage them as a Singleton
Manage them in a Registry
Manage them in an application specific Catalog
For more details please Repository
Direct Access¶
You can also dodge the FactoryFinder and make use of the following quick hacks.
This is not wise (as the implementation may change over time) but here is how it is done.
Use ShapefileDataStore:
File file = new File("example.shp"); DataStore shapefile = new ShapefileDataStore( example.toURL()); shapefile.setNamespace(new URI("refractions")); shapefile.setMemoryMapped(true ; String typeName = shapefile.getTypeName(); // should be "example" FeatureType schema = shapefile.getSchema( typeName ); // should be "refractions.example" FeatureSource contents = shapefile.getFeatureSource( typeName ); int count = contents.getCount( Query.ALL ); System.out.println( "Connected to "+file+ " with " + count );
This hack may be fine for a quick code example, but in a real application can we ask you to use the DataStoreFactoryFinder. It will let the library sort out what implementation is appropriate.
Use ShapefileDataStoreFactory:
FileDataStoreFactorySpi factory = new ShapefileDataStoreFactory(); File file = new File("example.shp"); Map map = Collections.singletonMap( "url", file.toURL() ); DataStore dataStore = factory.createDataStore( map );
This hack is a little bit harder to avoid - since you do want to use the factory directly in some cases (e.g. when creating a brand new file on disk). If possible ask the DataStoreFactoryFinder for all available factories (so you can make use of what is available at runtime). | https://docs.geotools.org/latest/userguide/library/data/datastore.html | 2021-05-06T13:27:58 | CC-MAIN-2021-21 | 1620243988753.97 | [] | docs.geotools.org |
When it comes to the CF surveys, handling customer feedback is probably the most important thing to focus your efforts on.
How to deal with Unhappy customers
The unhappy customers are the group of customers that need the most attention.
These are customers/ users at risk of churning and a large portion of them will not buy from you again, probably because you they are not really satisfied with the experience.
Address immediate issues that they highlight in their feedback and follow up to dig deeper if they did not leave a comment.
In the example above, you can see that Peter Griffin was totally disappointed with his last order and he gave a score of 1 with the short explanation, "Really bad service. Never buying again."
The thing that you can do is just ask him, "What we can do better?" or "Tell us more about your poor customer experience."
How to deal with Neutral customers
The neutral customers are the customers who do not need a lot of efforts, but just to ask them: "Please, let us know what we can do to earn your recommendation in the future?"
This way, you can get even more valuable feedback about their experience and what it takes to make them Happy customers.
How to deal with Happy customers
The happy customers are the most important group.
You should definitely thank them and take extra steps to keep them happy. You can even send them a gift or recruit them to be brand ambassadors on social media.
You can also ask to use their feedback as social proof. After all, they indicated they'd definitely recommend you to others. | https://docs.metrilo.com/en/articles/1362368-how-to-handle-customer-feedback | 2021-05-06T12:26:50 | CC-MAIN-2021-21 | 1620243988753.97 | [] | docs.metrilo.com |
UIElement.
Pointer Captures Property
Definition
Gets the set of all captured pointers, represented as Pointer values.
Equivalent WinUI property: Microsoft.UI.Xaml.UIElement.PointerCaptures.
public: property IVectorView<Pointer ^> ^ PointerCaptures { IVectorView<Pointer ^> ^ get(); };
IVectorView<Pointer> PointerCaptures();
public IReadOnlyList<Pointer> PointerCaptures { get; }
var iVectorView = uIElement.pointerCaptures;
Public ReadOnly Property PointerCaptures As IReadOnlyList(Of Pointer)
Property. | https://docs.microsoft.com/it-it/uwp/api/windows.ui.xaml.uielement.pointercaptures?view=winrt-19041 | 2021-05-06T12:22:29 | CC-MAIN-2021-21 | 1620243988753.97 | [] | docs.microsoft.com |
GetClientCertificateSignRequest
Contributors
Download PDF of this page
You can use the
GetClientCertificateSignRequest method to generate a certificate signing request that can be signed by a certificate authority to generate a client certificate for the cluster. Signed certificates are needed to establish a trust relationship for interacting with external services.
Parameters
This method has no input parameters.
Return values
This method has the following return values:
Request example
Requests for this method are similar to the following example:
{ "method": "GetClientCertificateSignRequest", "params": { }, "id": 1 }
Response example
This method returns a response similar to the following example:
{ "id": 1, "result": { "clientCertificateSignRequest": "MIIByjCCATMCAQAwgYkxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpDYWxpZm9ybm..." } }
New since version
11.7 | https://docs.netapp.com/us-en/element-software/api/reference_element_api_getclientcertificatesignrequest.html | 2021-05-06T13:34:16 | CC-MAIN-2021-21 | 1620243988753.97 | [] | docs.netapp.com |
The
<area> defines an area inside an image map that has predefined clickable areas. An image map allows geometric areas on an image to be associated with hypertext link.
This element is used only within a
<map> element.
This element's attributes include the global attributes.
alt
hrefattribute is used.
coords
coordsattribute details the coordinates of the
shapeattribute in size, shape, and placement of an
<area>.
rect: the value is
x1,y1,x2,y2. Value specifies the coordinates of the top-left and bottom-right corner of the rectangle.
<area shape="rect" coords="0,0,253,27" href="#" target="_blank" alt="Mozilla">The coords in the above example specify: 0,0 as the top-left corner and 253,27 as the bottom-right corner of the rectangle.
circle: the value is
x,y,radius. Value specifies the coordinates of the circle center and the radius.
<area shape="circle" coords="130,136,60" href="#" target="_blank" alt="MDN">
poly: the value is
x1,y1,x2,y2,..,xn,yn. Value specifies the coordinates of the edges of the polygon. If the first and last coordinate pairs are not the same, the browser will add the last coordinate pair to close the polygon
default: defines the entire region
download
<a>for a full description of the
downloadattribute.
href
<area>element does not represent a hyperlink.
hreflang
hrefattribute is present.
ping
POSTrequests with the body
PINGwill be sent by the browser (in the background). Typically used for tracking.
hrefattribute,attribute is present.
shape
rect, which defines a rectangular region;
circle, which defines a circular region;
poly, which defines a polygon; and
default, which indicates the entire region beyond any defined shapes.
circ,
polygon, and
rectangleas valid values for
shape, but these values are non-standard.
target
.
hrefattribute is present.
Note: In newer browser versions (e.g. Firefox 79+) setting
target="_blank" on
<area> elements implicitly provides the same
rel behavior as setting
rel="noopener".
name
nohref
Note: Since HTML5, omitting the
href attribute is sufficient.
tabindex
type
<map name="primary"> <area shape="circle" coords="75,75,75" href="left.html" alt="Click to go Left"> <area shape="circle" coords="275,75,75" href="right.html" alt="Click to go Right"> </map> <img usemap="#primary" src="" alt="350 x 150 pic">
© 2005–2020 Mozilla and individual contributors.
Licensed under the Creative Commons Attribution-ShareAlike License v2.5 or later. | https://docs.w3cub.com/html/element/area | 2021-05-06T12:44:51 | CC-MAIN-2021-21 | 1620243988753.97 | [] | docs.w3cub.com |
How can I display only posts with a specific hashtag?
You can do this in one of two ways, depending on what you're looking for.
Option 1: Display your own Instagram posts and then use Hashtag Filtering to automatically show or hide only the posts that use a certain hashtag.
Option 2: Create a Hashtag Feed that shows posts from all across Instagram that use a specific hashtag. | https://docs.spotlightwp.com/article/781-how-can-i-display-only-posts-with-a-specific-hashtag | 2021-05-06T13:33:18 | CC-MAIN-2021-21 | 1620243988753.97 | [] | docs.spotlightwp.com |
mass - molecular masses and isotope distributions¶
Summary¶
This module defines general functions for mass and isotope abundance
calculations. For most of the functions, the user can define a given
substance in various formats, but all of them would be reduced to the
Composition object describing its
chemical composition.
Classes¶
Composition- a class storing chemical composition of a substance.
Unimod- a class representing a Python interface to the Unimod database (see
pyteomics.mass.unimodfor a much more powerful alternative).
Mass calculations¶
calculate_mass()- a general routine for mass / m/z calculation. Can calculate mass for a polypeptide sequence, chemical formula or elemental composition. Supplied with an ion type and charge, the function would calculate m/z.
fast_mass()- a less powerful but much faster function for polypeptide mass calculation.
fast_mass2()- a version of fast_mass that supports modX notation.
Isotopic abundances¶
isotopic_composition_abundance()- calculate the relative abundance of a given isotopic composition.
most_probable_isotopic_composition()- finds the most abundant isotopic composition for a molecule defined by a polypeptide sequence, chemical formula or elemental composition.
isotopologues()- iterate over possible isotopic conposition of a molecule, possibly filtered by abundance.
Data¶
nist_mass- a dict with exact masses of the most abundant isotopes.
std_aa_comp- a dict with the elemental compositions of the standard twenty amino acid residues, selenocysteine and pyrrolysine.
std_ion_comp- a dict with the relative elemental compositions of the standard peptide fragment ions.
std_aa_mass- a dict with the monoisotopic masses of the standard twenty amino acid residues, selenocysteine and pyrrolysine.
Composition.
__init__(*args, **kwargs)[source]¶
A Composition object stores a chemical composition of a substance. Basically it is a dict object, in which keys are the names of chemical elements and values contain integer numbers of corresponding atoms in a substance.
The main improvement over dict is that Composition objects allow addition and subtraction.
A Composition object can be initialized with one of the following arguments: formula, sequence, parsed_sequence or split_sequence.
If none of these are specified, the constructor will look at the first positional argument and try to build the object from it. Without positional arguments, a Composition will be constructed directly from keyword arguments.
If there’s an ambiguity, i.e. the argument is both a valid sequence and a formula (such as ‘HCN’), it will be treated as a sequence. You need to provide the ‘formula’ keyword to override this.
Warning
Be careful when supplying a list with a parsed sequence or a split sequence as a keyword argument. It must be obtained with enabled show_unmodified_termini option. When supplying it as a positional argument, the option doesn’t matter, because the positional argument is always converted to a sequence prior to any processing.
- class
pyteomics.mass.mass.
Unimod(source='')[source]¶
A class for Unimod database of modifications. The list of all modifications can be retrieved via mods attribute. Methods for convenient searching are by_title and by_name. For more elaborate filtering, iterate manually over the list.
Note
See
pyteomics.mass.unimodfor a new alternative class with more features.
__init__(source='')[source]¶
Create a database and fill it from XML file retrieved from source.
by_id(i)[source]¶
Search modifications by record ID. If a modification is found, it is returned. Otherwise,
KeyErroris raised.
by_name(name, strict=True)[source]¶
Search modifications by name. If a single modification is found, it is returned. Otherwise, a list will be returned.
by_title(title, strict=True)[source]¶
Search modifications by title. If a single modification is found, it is returned. Otherwise, a list will be returned.
pyteomics.mass.mass.
calculate_mass(*args, **kwargs)[source]¶
Calculates the monoisotopic mass of a polypeptide defined by a sequence string, parsed sequence, chemical formula or Composition object.
One or none of the following keyword arguments is required: formula, sequence, parsed_sequence, split_sequence or composition. All arguments given are used to create a
Compositionobject, unless an existing one is passed as a keyword argument.
Note that if a sequence string is supplied and terminal groups are not explicitly shown, then the mass is calculated for a polypeptide with standard terminal groups (NH2- and -OH).
Warning
Be careful when supplying a list with a parsed sequence. It must be obtained with enabled show_unmodified_termini option.
pyteomics.mass.mass.
fast_mass(sequence, ion_type=None, charge=None, **kwargs)[source]¶
Calculate monoisotopic mass of an ion using the fast algorithm. May be used only if amino acid residues are presented in one-letter code.
pyteomics.mass.mass.
fast_mass2(sequence, ion_type=None, charge=None, **kwargs)[source]¶
Calculate monoisotopic mass of an ion using the fast algorithm. modX notation is fully supported.
pyteomics.mass.mass.
isotopic_composition_abundance(*args, **kwargs)[source]¶
Calculate the relative abundance of a given isotopic composition of a molecule.
pyteomics.mass.mass.
isotopologues(*args, **kwargs)[source]¶
Iterate over possible isotopic states of a molecule. The molecule can be defined by formula, sequence, parsed sequence, or composition. The space of possible isotopic compositions is restrained by parameters
elements_with_isotopes,
isotope_threshold,
overall_threshold.
pyteomics.mass.mass.
most_probable_isotopic_composition(*args, **kwargs)[source]¶
Calculate the most probable isotopic composition of a peptide molecule/ion defined by a sequence string, parsed sequence, chemical formula or
Compositionobject.
Note that if a sequence string without terminal groups is supplied then the isotopic composition is calculated for a polypeptide with standard terminal groups (H- and -OH).
For each element, only two most abundant isotopes are considered.
pyteomics.mass.mass.
nist_mass¶
// . There are entries for each element containing the masses and relative abundances of several abundant isotopes and a separate entry for undefined isotope with zero key, mass of the most abundant isotope and 1.0 abundance.
pyteomics.mass.mass.
std_aa_comp¶
A dictionary with elemental compositions of the twenty standard amino acid residues, selenocysteine, pyrrolysine, and standard H- and -OH terminal groups.
pyteomics.mass.mass.
std_aa_mass¶
A dictionary with monoisotopic masses of the twenty standard amino acid residues, selenocysteine and pyrrolysine.
pyteomics.mass.mass.
std_ion_comp¶
A dict with relative elemental compositions of the standard peptide fragment ions. An elemental composition of a fragment ion is calculated as a difference between the total elemental composition of an ion and the sum of elemental compositions of its constituting amino acid residues. | https://pyteomics.readthedocs.io/en/latest/api/mass.html | 2021-05-06T12:31:55 | CC-MAIN-2021-21 | 1620243988753.97 | [] | pyteomics.readthedocs.io |
Extending / Bundled Extensions / Activation
Note: You are currently reading the documentation for Bolt 3.6. Looking for the documentation for Bolt 4.0 instead?
Once your Bundle can be autoloaded, then one more step is needed to enable Bolt to load them.
To activate a Bundle, you need to add an
extensions key to either your
.bolt.yml or
.bolt.php file in the root of your project, with the values
being the Bundles you want Bolt to load. If you don't have one of these files
already then starting by creating an empty
.bolt.yml is the easiest way to
Updating
.bolt.yml or
.bolt.php¶
An example using
.bolt.yml:
extensions: - BundleBaseNamespace\MyBundleExtension
To clarify, the value you put in the yml file is exactly what you would use to
instantiate the class, so in code the above is equivalent to
new \BundleBaseNamespace\MyBundleExtension()
.bolt.php allows for two different methods of loading, via strings:
<?php return [ 'extensions' => [ 'BundleBaseNamespace\MyBundleExtension' ] ];
or via class instances:
<?php use BundleBaseNamespace\MyBundleExtension; return [ 'extensions' => [ new MyBundleExtension() ] ];
Couldn't find what you were looking for? We are happy to help you in the forum, on Slack or on Github. | https://docs.bolt.cm/3.6/extensions/bundled/activation | 2021-05-06T12:42:26 | CC-MAIN-2021-21 | 1620243988753.97 | [] | docs.bolt.cm |
A newer version of this page is available. Switch to the current version.
TextCustomizingEventHandler Delegate
Represents a method that will handle the TextInfoControlBase.CustomizeText event.
Namespace: DevExpress.XtraScheduler.Reporting
Assembly: DevExpress.XtraScheduler.v19.2.Reporting.dll
Declaration
public delegate void TextCustomizingEventHandler( object sender, TextCustomizingEventArgs e );
Public Delegate Sub TextCustomizingEventHandler( sender As Object, e As TextCustomizingEventArgs )
Parameters
Remarks
When creating an TextCustom | https://docs.devexpress.com/WindowsForms/DevExpress.XtraScheduler.Reporting.TextCustomizingEventHandler?v=19.2 | 2021-05-06T12:08:22 | CC-MAIN-2021-21 | 1620243988753.97 | [] | docs.devexpress.com |
Working Directly With License Manager Configuration Files
License Manager configuration files can be modified using:
>Sentinel Admin API (for all types of License Managers)
>Sentinel Admin Control Center (for Admin API License Managers)
However, under certain circumstance it may be desirable or necessary to work directly with the configuration files using a text editor, as described in this section.
License Manager configuration files do not exist on a given machine until one or more of the following occur:
>A user submits configuration changes in Admin Control Center.
>The writeconfig command is issued in Admin API to write configuration changes to the configuration file.
>A configuration file is created manually and placed on the machine.
Default Location of License Manager Configuration Files
This topic describes the location where each type of License Manager creates or expects to find its configuration file.
For all types of License Managers, this location can be determined in Admin API by retrieving the value of the configuration parameter <configdir>.
Admin License Manager
For the Admin LM on a given machine, the configuration file is called hasplm.ini. The pathname of the configuration file is as follows:
>For Windows x64: %CommonProgramFiles(x86)%\Aladdin Shared\HASP\hasplm.ini
>For Windows x86: %CommonProgramFiles%\Aladdin Shared\HASP\hasplm.ini
>For Linux/Mac: /etc/hasplm/hasplm.ini
The full path name of the hasplm.ini file is displayed in the Diagnostics report in Admin Control Center (see the INI File entry).
On a given machine, one hasplm.ini file exists for all software vendors who require the Admin LM on the machine.
NOTE If you are using Windows in a language other than English, locate the directory in which the common files are stored. (In English Windows, the Common Files folder).
Integrated/External License Manager
For the Integrated LM or External LM, the configuration file is called hasp_vendorId.ini. (vendorId is the Vendor ID associated with your Batch Code.) For each account under which a protected application executes on a given machine, the file is placed in one of the following locations:
The Integrated/External LM also searches for configuration information from additional sources, in the following order:
1.(Windows only) The License Manager search for the hasp_vendorId.ini configuration file in the following locations:
a.directory where the protected application is installed.
b.the %ProgramData%\Safenet Sentinel\Sentinel LDK\ directory (for applications that were protected with Sentinel LDK 7.6 or later).
This file must be created and maintained manually.
If the hasp_vendorId.ini file is present in more than one of the locations described in this section, the License Manager merges the information in the files. Preference for conflicting information is given to files according to the following priority:
a.default location
b.application directory
c.the %ProgramData%\Safenet Sentinel\Sentinel LDK\ directory
For example: If files are present in the default location and in the application directory, and both files contain a list of remote license server machines, the License Manager will search first for licenses in the list from the file in the default location. If the two files contain conflicting configuration information, preference is given to information from the file in the default location.
2.If the Sentinel LDK License Manager service on the local machine is active and “broadcastsearch” is enabled for the Integrated/External LM, the Integrated/External LM additionally uses the list of remote license server machines from the Admin LM.
NOTE An application that is linked with the Borland C static library (that is, libhasp_windows_bcc_<vendorID>.lib) does not access the Integrated/External License Manager configuration file. As a result, only default settings are used by the License Manager in this instance.
Modifying License Manager Configuration Files Manually
You have the option of creating a configuration file manually. This would be typically done when:
> You want to distribute the same configuration parameters to many machines.
>You want to place a configuration file in the application directory. A configuration file in this location would be shared by all users who run a protected application on a given machine.
The easiest way to create a configuration file is to copy an existing file that was created using one of the License Manager tools and modify it to suit your requirements.
The configuration file does not have to contain any parameters for which you accept the default values. A typical reason to create a configuration file manually is to specify a remote license server machine. In this case, the file would contain the following entry:
serveraddr = remoteServerAddress
This parameter is described below in greater detail.
For multiple entries, place each entry on a separate line in the file.
Additional License Manager Configuration Files Parameters
The table that follows describes configuration parameters that you can insert or modify in the configuration file for any type of License Manager (unless noted otherwise).
For example:
disable_IPv6 = 1
requestlog = 0
errorlog = 1
getinfo_uncached = 0
serveraddr = 10.1.1.17
serveraddr = 10.1.1.255
Additional License Manager Files
The table that follows describes additional directories and files that are created by the Admin License Manager under Windows.
The directories and files can be found under the following path:
%CommonProgramFiles(x86)%\Aladdin Shared\HASP
(For Windows x86, under %CommonProgramFiles%\Aladdin Shared\HASP)
The files in the attached, detached, rehosted, and cancelled directories are used for logging purposes only. The user can delete these files if necessary.
The ID files in the lmid directory can be deleted. However, if the user later want to detach a license to one of the machines, they will have to manually create the ID file for the recipient machine (using Admin Control Center) and place the file in the lmid directory on the license server machine. | https://docs.sentinel.thalesgroup.com/ldk/LDKdocs/SPNL/LDK_SLnP_Guide/Distributing/License_Manager/070-Working_directly_with_Config_Files.htm | 2021-05-06T12:55:20 | CC-MAIN-2021-21 | 1620243988753.97 | [] | docs.sentinel.thalesgroup.com |
Alert mechanism
1. Overview
Prisma Cloud lets you surface critical policy breaches by sending alerts to any number of channels. Alerts ensure that significant events are put in front of the right audience at the right time.
Alerts are built on the following constructs:
- Alert profile
Specifies which events should be sent to which channel. You can create any number of alert profiles, where each profile gives you granular control over which audience should receive which notifications.
- Alert channel
Messaging medium over which alerts are sent. Prisma Cloud supports email, JIRA, Slack, PagerDuty, and others.
- Alert trigger
Events that require further scrutiny. Alerts are raised when the rules that make up your policy are violated. When something in your environment violates a rule, an audit is logged, and an alert is sent to any matching alert profile (channel, audience). Prisma Cloud can be configured to notify the appropriate party when an entire policy, or even a specific rule, is violated.
You can also set up alerts for Defender health events. These events tell you when Defender unexpectedly disconnects from Console. Alerts are sent when a Defender has been disconnected for more than 6 hours.
Not all triggers are available for all channels. For example, new JIRA issues can only be opened when vulnerability rules are triggered.
2. Triggers
Most alerts trigger on a policy violation. When policy is the trigger, you can optionally choose to trigger on specific rules rather than the entire policy. Vulnerability, compliance, and cloud discovery alerts work differently.
2.1. Vulnerability alerts
The number of known vulnerabilities in a resource is not static over time. As the Prisma Cloud Intelligence Stream is updated with new data, new vulnerabilities might be uncovered in resources that were previously considered clean. The first time a resource (image, container, host, etc) enters the environment, Prisma Cloud assesses it for vulnerabilities. If a vulnerability violates a rule in the policy, and the rule has been configured to trigger an alert, an alert is dispatched. Thereafter, every resource is periodically rescanned. Additional alerts are dispatched only when new vulnerabilities that match your alert profile settings are detected. With vulnerability alerts, you get one, and only one, alert for each vulnerability detected (aggregated by scan).
2.2. Compliance alerts
Alerts for compliance issues work a little differently. The resources in your system are either compliant or non-compliant. When your system is non-compliant, Prisma Cloud sends an alert. As long as there are non-compliant resources, Prisma Cloud sends an alert every scan interval (default 24 hours). Compliance alerts list each failed check, and the number of resources that failed the check in the latest scan and the previous scan. For detailed information about exactly which resources are non-compliant, use Compliance Explorer. The following screenshot shows an example compliance email alert:
For example:
Scan period 1: You have non-complaint container named crusty_pigeon. You’ll be alerted about the container compliance issues.
Scan period 2: Container crusty_pigeon is still running. It’s still non-compliant. You’ll be alerted about the same container compliance issues.
2.3. Cloud discovery alerts
Cloud discovery alerts warn you when new cloud native resources are discovered in your environment so that you can inspect and secure them with Prisma Cloud. Cloud discovery alerts are available on the email channel only. For each new resource discovered in a scan, Prisma Cloud lists the cloud provider, region, project, service type (i.e. AWS Lambda, Azure AKS) and resoure name (my-aks-cluster). | https://docs.twistlock.com/docs/compute_edition/alerts/alert_mechanism.html | 2021-05-06T13:16:39 | CC-MAIN-2021-21 | 1620243988753.97 | [] | docs.twistlock.com |
40% Off Above Ground Deck Pools swimming pool discounters. Discounted Above Ground Deck Pools Swimming Pool Discounters has a wide varitey of new above ground deck swimming pools. Save 40% off on your next above ground. Tell Us What …
19 Mar 2014 . These raised decks are made for aboveground pools with 52″ pool walls. They also make getting in and out of the pool much easier than .
Search: Sitemap Contact Us Resources Pools 101 Links Tracking Testimonials Categories PoolsOnly Homepage Above Ground Pools Accessories Air Blowers Alternative Sanitizers Auto Pool Cleaners Chemicals and Testers Chemical Feeders Cleaning Equipment Clearance and Bargains Commercial Products Covers and Reels
trying to keep things strht on our megamonstrous pool, deck, and patio remodel. We have a design. There job. That was a week ago. The new pool deck and sidewalk to the back door was poured
5 Jun 2020 . While above-ground pools boast a ton of positives (budget-friendly, easier to keep . See More of This Clever Pool Deck and Landscaping .
Building a Wooden Deck Over a Concrete One: So we& 39;ve had this very sturdy but . 50 Above Ground Pool Ideas of Pro & Cons, Budget ground pool deck ideas...
any type of codes connected to building a pool, deck or swimming pool home. Additionally, the neighborhood association, if relevant, might managed whatsoever times to avoid accidents. Magnificently developed pools as well as decks offer several hours of summertime enjoyment. Of course, there are bound to be some disadvantages to purchasing a swimming pool also. Besides the high cost of getting a
16 Jan 2020 . Deck ideas for above ground pools - Photo-Blog with pictures of . or more on a backyard oasis, and may look for a more affordable option.
two egories of liners are mostly used for above ground pools. Ideally, overlap liners cover are placed beyond the wear out. Therefore, it is advisable for a pool owner to keep a close eye on the above indi ors. This will help you replace them just
Pool Movie , Swimming Pool Kendrick Lamar , Intex Swimming Pool , Above Ground Swimming Pool Prices , Inground Swimming Pool Solar Attic Fans Save Ideas on Paintings How to Build a Greenhouse Cheap New Build Home or Older Property? Why Go Software About Contact Me Privacy Policy Sitem Pool Deck Ideas On a Budget the most common built deck is a wooden deck and its no surprise its Oberirdischer Pool Swimming Pool Decks Diy Pool Above Ground Swimming Pools In Ground Pools Pool Fun Diy In Ground Pool Building A Swimming Pool Lap Pools. More information
SECLUDED, WOOD BURNING FIRE PLACE, WIFI, WET BAR, POOL TABLE, GAS GRILL, OUTDOOR FIREPLACE and HOT TUB ON SCREENED PORCH STARTING AT $195 A NIGHT Above the River- 2BR/2BA- CABIN WITH TOCCOA RIVER ACCESS, SLEEPS 6, DECK ACCESS FROM EACH BEDROOM, GAZEBO OVERLOOKING THE WATER,
Over 20 Years of Experience To Give You Great Deals on Quality Home Products and More. Shop Items You Love at Overstock, with Free Shipping on Everything* and Easy Returns.
7 Mar 2020 . Above ground pool ideas to beautify a prefab swimming pool and give it a custom look. Ideas include above ground pool decks, modern .
department for any codes associated with developing a pool, deck or swimming pool home. Also, the neighborhood watch, if suitable, might any way times to avoid mishaps. Magnificently developed pools as well as decks supply lots of hours of summer season pleasure. Naturally, there are bound to be some disadvantages to buying a swimming pool as well. Besides the high cost of purchasing
a powerful but easily -managed Carbon Fiber rig above deck that is easy to handle and sail single- is a large nav table with swivel seat, Above deck, all running rigging stainless stays and shrouds run back forgiving boat. She might be alittle blocky looking above deck, due to STANDING HEADROOM, but is a fast
renovations, cl spaces, chimneys, porous tile, grout, patios, pool decks, koi ponds, etc. PENETRATING SEALERS - Concrete brick PAVERS or residential. Use on driveways, walkways, patios, floors, pool decks, basement floors, and pavers. Creates an attractive variegated for sealing driveways, autobody repair shops, patios, sidewalks, pool decks, outdoor block walls, and seawalls. Ion-Bond WPC -
Get the best deals on above ground pool decks when you shop the largest online selection at eBay.com. Free shipping on many items | Browse your favorite brands | affordable prices.
Angeles…the sun, the heat go figure , the cheap deals to be found because of the heat, the pools and don’t forget the SPIN THE BOTTLE WINE STUDIO CHEERS TO TOLUCA swimming pools are a popular, less-expensive alternative to . once they bought a round above-ground pool, complete with a pool deck (or perch). . Originally made to provide water for livestock, these inexpensive galvanized .
In some instances, an inexpensive above ground pool deck can cost $2,000. However, some decks can cost significantly more, depending on size and how .
Fast and Free Shipping On Many Items You Love On eBay. But Did You Check eBay? Check Out Top Brands On eBay. Sell on eBay Daily Deals Gift Cards Under $10 | https://laravel-docs.pl/news3/2863-cheap-above-pool-decks.html | 2021-05-06T12:49:12 | CC-MAIN-2021-21 | 1620243988753.97 | [] | laravel-docs.pl |
The MetaMask Wallet is a browser-based Ethereum wallet that will allow you to store ERC-20 cryptocurrency tokens, non-fungible tokens, and to interact with the GAME Credits Portal. While GAME Credits has no affiliation with MetaMask, to assist you in the installation of MetaMask so that you can use our Portal, we have included the below guide. For the most up to date information on MetaMask including how to install it, you can go to
Requirements
Supported web browser
Google Chrome
Mozilla Firefox
Brave
Microsoft Edge
Step 1: Begin The Installation On The MetaMask Website
Make sure the ‘Chrome’ tab is selected, then press ‘Install Metamask for Chrome’.
MetaMask isn’t a traditional program that you’d install on your desktop, it lives as an extension in the Chrome web browser. Here you are adding the MetaMask extension to your Chrome browser. Click ‘Add to Chrome’
Step 2: Set Up MetaMask
Once installed, begin the setup process by clicking ”Get Started”
You will be presented with two choices. You can either import an existing wallet using a 12 word seed phrase, or you can create another. In this case, we’re creating a new wallet. Press “Create Wallet.”
Choose your MetaMask usage data settings
.
Create a strong password of sufficient length, then read and agree to the terms of use. Click ‘Create’.
At this screen, make sure to backup your secret phrase! This is the only way to recover and import your wallet. NEVER share your secret phrase with anyone.
If you followed step 8 you should have your phrase saved somewhere. Put it in the correct order here and click continue.
Congrats, you’re all done!
You should be taken directly to your new wallet. You can also get to MetaMask anytime by clicking on the extension icon near your address bar. | https://docs.gamecredits.org/general/installing-metamask | 2021-05-06T11:44:20 | CC-MAIN-2021-21 | 1620243988753.97 | [] | docs.gamecredits.org |
Brute force
1. Overview
A Brute Force incident surfaces a combination of audit events that indicate a protected resource is potentially being affected by an attempted DoS.
2. Investigation
In the following incident, you can see that a container received a flood of attempted actions to the extent that the Web Application and API Security (WAAS) blocked the source.
Review the WAAS audit logs to determine any further impact:
Additionally, review the logs of potentially affected applications to determine if there was any further impact. | https://docs.twistlock.com/docs/enterprise_edition/runtime_defense/incident_types/brute_force.html | 2021-05-06T12:37:47 | CC-MAIN-2021-21 | 1620243988753.97 | [] | docs.twistlock.com |
Launching an Instance Using the Launch Instance Wizard
Before you launch your instance, be sure that you are set up. For more information, see Setting Up with Amazon EC2.
Your AWS account might support both the EC2-Classic and EC2-VPC platforms, depending on when you created your account and which regions you've used. To find out which platform your account supports, see Supported Platforms. If your account supports EC2-Classic, you can launch an instance into either platform. If your account supports EC2-VPC only, you can launch an instance into a VPC only.
Important
When you launch an instance that's not within the AWS Free Tier, you are charged for the time that the instance is running, even if it remains idle.
Launching Your Instance from an AMI
When you launch an instance, you must select a configuration, known as an Amazon Machine Image (AMI). An AMI contains the information required to create a new instance. For example, an AMI might contain the software required to act as a web server: for example, Linux, Apache, and your website.
Tip
To ensure faster instance launches, break up large requests into smaller batches. For example, create five separate launch requests for 100 instances each instead of one launch request for 500 instances.
To launch an instance
Open the Amazon EC2 console at.
In the navigation bar at the top of the screen, the current region is displayed. Select the region for the instance. This choice is important because some Amazon EC2 resources can be shared between regions, while others can't. Select the region that meets your needs. For more information, see Resource Locations.
From the Amazon EC2 console dashboard, choose Launch Instance.
On the Choose an Amazon Machine Image (AMI) page, choose an AMI as follows:
Select the type of AMI to use in the left pane:
- Quick Start
A selection of popular AMIs to help you get started quickly. To select an AMI that is eligible for the free tier, choose Free tier only in the left pane. These AMIs are marked Free tier eligible.
- My AMIs
The private AMIs that you own, or private AMIs that have been shared with you. To view AMIs shared with you, choose Shared with me in the left pane.
- AWS Marketplace
An online store where you can buy software that runs on AWS, including AMIs. For more information about launching an instance from the AWS Marketplace, see Launching an AWS Marketplace Instance.
- Community AMIs
The AMIs that AWS community members have made available for others to use. To filter the list of AMIs by operating system, choose the appropriate check box under Operating system. You can also filter by architecture and root device type.
Check the Root device type listed for each AMI. Notice which AMIs are the type that you need, either
ebs(backed by Amazon EBS) or
instance-store(backed by instance store). For more information, see Storage for the Root Device.
Check the Virtualization type listed for each AMI. Notice which AMIs are the type that you need, either
hvmor
paravirtual. For example, some instance types require HVM. For more information, see Linux AMI Virtualization Types.
Choose an AMI that meets your needs, and then choose Select.
On the Choose an Instance Type page, select the hardware configuration and size of the instance to launch. Larger instance types have more CPU and memory. For more information, see Instance Types.
To remain eligible for the free tier, choose the t2.micro instance type. For more information, see T2 Instances.
By default, the wizard displays current generation instance types, and selects the first available instance type based on the AMI that you selected. To view previous generation instance types, choose All generations from the filter list.
Note
To set up an instance quickly for testing purposes, choose Review and Launch to accept the default configuration settings, and launch your instance. Otherwise, to configure your instance further, choose Next: Configure Instance Details.
On the Configure Instance Details page, change the following settings as necessary (expand Advanced Details to see all the settings), and then choose Next: Add Storage:
Number of instances: Enter the number of instances to launch.
Note
To help ensure that you maintain the correct number of instances to handle your application, you can choose Launch into Auto Scaling Group to create a launch configuration and an Auto Scaling group. Auto Scaling scales the number of instances in the group according to your specifications. For more information, see the Amazon EC2 Auto Scaling User Guide.
Purchasing option: Select Request Spot instances to launch a Spot Instance. This adds and removes options from this page. Set your maximum price, and optionally update the request type, interruption behavior, and request validity. For more information, see Creating a Spot Instance Request.
Your account may support the EC2-Classic and EC2-VPC platforms, or EC2-VPC only. To find out which platform your account supports, see Supported Platforms. If your account supports EC2-VPC only, you can launch your instance into your default VPC or a nondefault VPC. Otherwise, you can launch your instance into EC2-Classic or a nondefault VPC.
Note
Some instance types must be launched into a VPC. If you don't have a VPC, you can let the wizard create one for you.
To launch into EC2-Classic:
Network: Select Launch into EC2-Classic.
Availability Zone: Select the Availability Zone to use. To let AWS choose an Availability Zone for you, select No preference.
To launch into a VPC:
Network: Select the VPC, or to create a new VPC, choose Create new VPC to go the Amazon VPC console. When you have finished, return to the wizard and choose Refresh to load your VPC in the list.
Subnet: Select the subnet into which to launch your instance. If your account is EC2-VPC only, select No preference to let AWS choose a default subnet in any Availability Zone. To create a new subnet, choose Create new subnet to go to the Amazon VPC console. When you are done, return to the wizard and choose Refresh to load your subnet in the list.
Auto-assign Public IP: Specify whether your instance receives a public IPv4 address. By default, instances in a default subnet receive a public IPv4 address and instances in a nondefault subnet do not. You can select Enable or Disable to override the subnet's default setting. For more information, see Public IPv4 Addresses and External DNS Hostnames.
Auto-assign IPv6 IP: Specify whether your instance receives an IPv6 address from the range of the subnet. Select Enable or Disable to override the subnet's default setting. This option is only available if you've associated an IPv6 CIDR block with your VPC and subnet. For more information, see Your VPC and Subnets in the Amazon VPC User Guide.
IAM role: Select an AWS Identity and Access Management (IAM) role to associate with the instance. For more information, see IAM Roles for Amazon EC2.
Shutdown behavior: Select whether the instance should stop or terminate when shut down. For more information, see Changing the Instance Initiated Shutdown Behavior.
Enable termination protection: To prevent accidental termination, select this check box. For more information, see Enabling Termination Protection for an Instance.
Monitoring: Select this check box to enable detailed monitoring of your instance using Amazon CloudWatch. Additional charges apply. For more information, see Monitoring Your Instances Using CloudWatch.
EBS-Optimized instance: An Amazon EBS-optimized instance uses an optimized configuration stack and provides additional, dedicated capacity for Amazon EBS I/O. If the instance type supports this feature, select this check box to enable it. Additional charges apply. For more information, see Amazon EBS–Optimized Instances.
Tenancy: If you are launching your instance into a VPC, you can choose to run your instance on isolated, dedicated hardware (Dedicated) or on a Dedicated Host (Dedicated host). Additional charges may apply. For more information, see Dedicated Instances and Dedicated Hosts.
T2 Unlimited: (Only valid for T2 instances) Select this check box to enable applications to burst beyond the baseline for as long as needed. Additional charges may apply. For more information, see T2 Instances.
Network interfaces: If you selected a specific subnet, you can specify up to two network interfaces for your instance:
For Network Interface, select New network interface to let AWS create a new interface, or select an existing, available network interface.
For Primary IP, enter a private IPv4 address from the range of your subnet, or leave Auto-assign to let AWS choose a private IPv4 address for you.
For Secondary IP addresses, choose Add IP to assign more than one private IPv4 address to the selected network interface.
(IPv6-only) For IPv6 IPs, choose Add IP, and enter an IPv6 address from the range of the subnet, or leave Auto-assign to let AWS choose one for you.
Choose Add Device to add a secondary network interface. A secondary network interface can reside in a different subnet of the VPC, provided it's in the same Availability Zone as your instance.
For more information, see Elastic Network Interfaces. If you specify more than one network interface, your instance cannot receive a public IPv4 address. Additionally, if you specify an existing network interface for eth0, you cannot override the subnet's public IPv4 setting using Auto-assign Public IP. For more information, see Assigning a Public IPv4 Address During Instance Launch.
Kernel ID: (Only valid for paravirtual (PV) AMIs) Select Use default unless you want to use a specific kernel.
RAM disk ID: (Only valid for paravirtual (PV) AMIs) Select Use default unless you want to use a specific RAM disk. If you have selected a kernel, you may need to select a specific RAM disk with the drivers to support it.
Placement group: A placement group determines the placement strategy of your instances. Select an existing placement group, or create a new one. This option is only available if you've selected an instance type that supports placement groups. For more information, see Placement Groups.
User data: You can specify user data to configure an instance during launch, or to run a configuration script. To attach a file, select the As file option and browse for the file to attach.
The AMI you selected includes one or more volumes of storage, including the root device volume. On the Add Storage page, you can specify additional volumes to attach to the instance by choosing Add New Volume. You can configure the following options for each volume:
Type: Select instance store or Amazon EBS volumes to associate with your instance. The type of volume available in the list depends on the instance type you've chosen. For more information, see Amazon EC2 Instance Store and Amazon EBS Volumes.
Device: Select from the list of available device names for the volume.
Snapshot: Enter the name or ID of the snapshot from which to restore a volume. You can also search for public snapshots by typing text into the Snapshot field. Snapshot descriptions are case-sensitive.
Size: For Amazon EBS-backed volumes, you can specify a storage size. Even if you have selected an AMI and instance that are eligible for the free tier, to stay within the free tier, you must keep under 30 GiB of total storage.
Note
Linux AMIs require GPT partition tables and GRUB 2 for boot volumes 2 TiB (2048 GiB) or larger. Many Linux AMIs today use the MBR partitioning scheme, which only supports up to 2047 GiB boot volumes. If your instance does not boot with a boot volume that is 2 TiB or larger, the AMI you are using may be limited to a 2047 GiB boot volume size. Non-boot volumes do not have this limitation on Linux instances.
Note
If you increase the size of your root volume at this point (or any other volume created from a snapshot), you need to extend the file system on that volume in order to use the extra space. For more information about extending your file system after your instance has launched, see Modifying the Size, IOPS, or Type of an EBS Volume on Linux.
Volume Type: For Amazon EBS volumes, select either a General Purpose SSD, Provisioned IOPS SSD, or Magnetic volume. For more information, see Amazon EBS Volume Types.
Note
If you select a Magnetic boot volume, you'll be prompted when you complete the wizard to make General Purpose SSD volumes the default boot volume for this instance and future console launches. (This preference persists in the browser session, and does not affect AMIs with Provisioned IOPS SSD boot volumes.) We recommended that you make General Purpose SSD volumes the default because they provide a much faster boot experience and they are the optimal volume type for most workloads.. a value in this menu to configure the encryption state of new Amazon EBS volumes. The default value is Not encrypted. Additional options include using your AWS managed customer master key (CMK) or a customer-managed CMK that you have created. Available keys are listed in the menu. You can also hover over the field and paste the Amazon Resource Name (ARN) of a key directly into the text box. For information about creating customer-managed CMKs, see AWS Key Management Service Developer Guide.
Note
Encrypted volumes may only be attached to supported instance types.
When done configuring your volumes, choose Next:.
On the Configure Security Group page, use a security group to define firewall rules for your instance. These rules specify which incoming network traffic is delivered to your instance. All other traffic is ignored. (For more information about security groups, see Amazon EC2 Security Groups for Linux Instances.) Select or create a security group as follows, and then choose Review and Launch.
To select an existing security group, choose Select an existing security group, and select your security group. If you are launching into EC2-Classic, the security groups are for EC2-Classic. If you are launching into a VPC, the security groups are for that VPC.
Note
(Optional) You can't edit the rules of an existing security group, but you can copy them to a new group by choosing Copy to new. Then you can add rules as described in the next step.
To create a new security group, choose Create a new security group. The wizard automatically defines the launch-wizard-x security group and creates an inbound rule to allow you to connect to your instance over SSH (port 22).
You can add rules to suit your needs. For example, if your instance is a web server, open ports 80 (HTTP) and 443 (HTTPS) to allow internet traffic.
To add a rule, choose Add Rule, select the protocol to open to network traffic, and then specify the source. Choose My IP from the Source list to let the wizard add your computer's public IP address. However, if you are connecting through an ISP or from behind your firewall without a static IP address, you need to find out the range of IP addresses used by client computers.
Warning
Rules that enable all IP addresses (
0.0.0.0/0) to access your instance over SSH or RDP are acceptable for this short exercise, but are unsafe for production environments. You should authorize only a specific IP address or range of addresses to access your instance.
On the Review Instance Launch page, check the details of your instance, and make any necessary changes by choosing the appropriate Edit link.
When you are ready, choose Launch.
In the Select an existing key pair or create a new key pair dialog box, you can choose an existing key pair, or create a new one. For example, choose Choose an existing key pair, then select the key pair you created when getting set up.
To launch your instance, select the acknowledgment check box, then choose Launch Instances.
Important
If you choose the Proceed without key pair option, you won't be able to connect to the instance unless you choose an AMI that is configured to allow users another way to log in.
(Optional) You can create a status check alarm for the instance (additional fees may apply). (If you're not sure, you can always add one later.) On the confirmation screen, choose Create status check alarms and follow the directions. For more information, see Creating and Editing Status Check Alarms.
If the instance fails to launch or the state immediately goes to
terminatedinstead of
running, see Troubleshooting Instance Launch Issues. | https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/launching-instance.html | 2018-08-14T08:59:53 | CC-MAIN-2018-34 | 1534221208750.9 | [] | docs.aws.amazon.com |
three arguments specifying the worker’s own ID as well as the receptionist’s IP and port. It uses TCP and connects(); }(). | https://docs.improbable.io/reference/13.1/csharpsdk/using/connecting | 2018-08-14T08:56:41 | CC-MAIN-2018-34 | 1534221208750.9 | [] | docs.improbable.io |
A class to represent a line. More...
#include "descriptor.hpp"
A class to represent a line.
As aformentioned, it is been necessary to design a class that fully stores the information needed to characterize completely a line and plot it on image it was extracted from, when required.
KeyLine* class has been created for such goal; it is mainly inspired to Feature2d's KeyPoint class, since KeyLine shares some of KeyPoint's fields, even if a part of them assumes a different meaning, when speaking about lines. In particular:
Apart from fields inspired to KeyPoint class, KeyLines stores information about extremes of line in original image and in octave it was extracted from, about line's length and number of pixels it covers.
constructor
Returns the end point of the line in the original image
Returns the end point of the line in the octave it was extracted from
Returns the start point of the line in the original image
Returns the start point of the line in the octave it was extracted from
orientation of the line
object ID, that can be used to cluster keylines by the line they represent
the length of line
number of pixels covered by the line
octave (pyramid layer), from which the keyline has been extracted
coordinates of the middlepoint
the response, by which the strongest keylines have been selected. It's represented by the ratio between line's length and maximum between image's width and height
minimum area containing line
line's extremes in image it was extracted from
lines's extremes in original image | https://docs.opencv.org/3.2.0/d1/dd7/structcv_1_1line__descriptor_1_1KeyLine.html | 2018-08-14T08:38:22 | CC-MAIN-2018-34 | 1534221208750.9 | [] | docs.opencv.org |
Transfer CFT 3.2.2 Users Guide Add entire publication Select collection Cancel Create a New Collection Set as my default collection Cancel This topic and sub-topics have been added to MyDocs. Ok No Collection has been selected. --> modout CFTNET Initiator mode only [MODOUT = {string1...32}] Hayes modem initialization string for outgoing connections. Return to Command index Related Links | https://docs.axway.com/bundle/Transfer_CFT_322_UsersGuide_allOS_en_HTML5/page/Content/CFTUTIL/Parameter_index/modout.htm | 2018-08-14T09:11:42 | CC-MAIN-2018-34 | 1534221208750.9 | [] | docs.axway.com |
Hyper-V Technology Overview
Applies To: Windows Server 2016, Microsoft Hyper-V Server 2016). Use a centralized, grouped by what the features provide or help you do..
For a summary of the features introduced in this version, see What's new in Hyper-V on Windows Server 2016. 2016.. | https://docs.microsoft.com/en-us/windows-server/virtualization/hyper-v/hyper-v-technology-overview | 2018-08-14T09:05:07 | CC-MAIN-2018-34 | 1534221208750.9 | [] | docs.microsoft.com |
class Proc
Running process (filehandle-based interface)
Proc is a representation of an invocation of an external process. It provides access to the input, output and error stream as well as the exit code. It is typically created through the
run subroutine:
my = run 'echo', 'Hallo world', :out;my = .out.slurp: :close;say "Output was $captured-output.perl()"; # OUTPUT: «Output was "Hallo world\n"»
Piping several commands is easy too. To achieve the equivalent of the pipe
echo "Hello, world" | cat -n in Perl 6, and capture the output from the second command, you can do
my = run 'echo', 'Hello, world', :out;my = run 'cat', '-n', :in(.out), :out;say .out.get;
You can also feed the
:in pipe directly from your program, by setting it to
True, which will make the pipe available via
.in method on the
Proc:
my = run "cat", "-n", :in, :out;.in.say: "Hello,\nworld!";.in.close;say .out.slurp: :close;# OUTPUT: «1 Hello,# 2 world!»
In order to capture the standard error
:err can be supplied:
my = run "ls", "-l", ".", "qqrq", :out, :err;my = .out.slurp: :close;my = .err.slurp: :close;my = .exitcode;
Note: Versions of Rakudo older than 2017.04 do not have
.slurp available on IO::Pipe objects; use
.slurp-rest instead.
Use Proc::Async for non-blocking operations.
Methods
method new
method new(Proc:: = '-',: = '-',: = '-',Bool : = False,Bool : = True,Bool : = False,Str : = 'UTF-8',Str : = "\n",--> Proc)sub run(* ($, *@),: = '-',: = '-',: = '-',Bool : = False,Bool : = True,Bool : = False,Str : = 'UTF-8',Str : = "\n",: = ,Hash() : =--> Proc)sub shell(,: = '-',: = '-',: = '-',Bool : = False,Bool : = True,Bool : = False,Str : = 'UTF-8',Str : = "\n",: = ,Hash() : =--> Proc)
new creates a new
Proc object, whereas
run or
shell create one and spawn it with the command and arguments provided in
@args or
$cmd, respectively.
$in,
$out and
$err are the three standard streams of the to-be-launched program, and default to
"-" meaning they inherit the stream from the parent process. Setting one (or more) of them to
True makes the stream available as an IO::Pipe object of the same name, like for example
$proc.out. You can set them to
False to discard them. Or you can pass an existing IO::Handle object (for example
IO::Pipe) in, in which case this handle is used for the stream.
Please bear in mind that the process streams reside in process variables, not in the dynamic variables that make them available to our programs. Thus, modifying the dynamic filehandle variables (such as
$*OUT) inside the host process will have no effect in the spawned process, unlike
$*CWD and
$*ENV, whose changes will be actually reflected in it.
my = "/tmp/program.p6";my =spurt , ;.put: "1. standard output before doing anything weird";.put: "3. everything should be back to normal";# OUTPUT# 1. standard output before doing anything weird# /tmp/program.p6: This goes to standard output# 3. everything should be back to normal# /tmp/out.txt will contain:# 2. temp redefine standard output before this message
This program shows that the program spawned with
shell is not using the temporary
$*OUT value defined in the host process (redirected to
/tmp/out.txt), but the initial
STDOUT defined in the process.
$bin controls whether the streams are handled as binary (i.e. Blob object) or text (i.e. Str objects). If
$bin is False,
$enc holds the character encoding to encode strings sent to the input stream and decode binary data from the output and error streams.
With
$chomp set to
True, newlines are stripped from the output and err streams when reading with
lines or
get.
$nl controls what your idea of a newline is.
If
$merge is set to True, the standard output and error stream end up merged in
$proc.out.
method sink
method sink(--> Nil)
When sunk, the
Proc object will throw X::Proc::Unsuccessful if the process it ran exited unsuccessfully.
method spawn
method spawn(* ($, *@), : = , Hash() : = --> Bool)
Runs the
Proc object with the given command, argument list, working directory, and environment.
method shell
method shell(, : = , : --> Bool)
Runs the
Proc object with the given command and environment which are passed through to the shell for parsing and execution. See
IO::shell for an explanation of which shells are used by default in the most common operating systems.
method command
method command(Proc: --> List)
The command method is an accessor to a list containing the arguments that were passed when the Proc object was executed via
spawn or
shell or
run.
method pid
method pid(Proc:)
Returns the
$*PID value of the process if available, or
Nil.
method exitcode
method exitcode(Proc: --> Int)
Returns the exit code of the external process, or -1 if it has not exited yet.
method signal
method signal(Proc:)
Returns the signal number with which the external process was killed, or 0 or an undefined value otherwise.
Type Graph
Proc
Stand-alone image: vector | https://docs.perl6.org/type/Proc | 2018-08-14T08:28:31 | CC-MAIN-2018-34 | 1534221208750.9 | [] | docs.perl6.org |
Introduction to Liquid
Updated on 20-October-2016 at 10:16 AM
What is Liquid?
If you aren't familiar with the concept of a template language, it's sometimes described as a bridge between the data in your site and the HTML templates sent to the browser. Liquid allows the site developer to access information in BCs' database more granulary.
By using some simple to read and easy to remember constructs we are able to access data from our site (e.g. a product title, a collection description, a set of product images or a blog post) and output that data directly into our pages. One of the main benefits is that we don't need to have any knowledge of what that data is - rather we just need to know which variables we have access to in each template.
The Liquid engine knows how to interpret the legacy BC tags and modules, this means you can use Liquid logic side by side with the legacy tags and modules.
In simple terms using Liquid you can display information stored in BusinessCatalyst in very specific ways (filter something, show if a criteria is met and so on). Liquid is not for "writing" information, meaning one cannot edit a customer's name using Liquid or edit a product's price.
Why use Liquid?
Here are a few reasons why you might consider using Liquid code on your sites.
Custom logic
This was not possible before implementing Liquid and one had to resort to time consuming Javascript implementations to do custom rendering, basically dump a large amount of data and sort or filter it with jQuery was usually the only way.
The ability to run Liquid logic on the server is the closest thing to server side programming we have implemented on BC. The code runs on the server before the rendering engine processes the data and displays it on the front-end of the site.
Note: See the logic tags article for a list of supported tags you can apply with Liquid.
Better controlLiquid output tags enable you more granular access to site parameters like the site domain's culture and other layout and site related parameters.
If you are moving from another system to BC, you don't need to change all your styles to be in line with what the tags are rendering, you can now control what data you output to what HTML tag and in what way.
You can control how the tags are used and rendered - the traditional BC tag functionality is replaced by Liquid tags, for example on the Product Small layout the
{tag_name} becomes
{{name}}.
Note: Check out Liquid globals article for a list of objects available on every page by default.
Tag manipulation
You can manipulate the Liquid tags using filters. For example the product's name outputted by
{{name}} can be outputted directly to uppercase
using the Liquid filter "upcase" like so:
{{name | upcase}}.
Note: See the filters article for a list of filters you can apply to Liquid tags.
Downsides
The InContext Editor will not work on pages with Liquid tags.
At the moment Liquid or module_data do not work in email campaigns, take a look at the BC.Next and email campaigns technote for more details.
Using Liquid with other templating engines that use double curly braces can result in conflicts. You would need to make sure you escape the markup with the raw Liquid tag.
There are still some areas where you need to emulate some items with a little extra custom work. Nevertheless, manually rebuilding them involves just replicating their structure as you see it exposed by the legacy tag in the front-end.
How to enable Liquid?
To enable the new rendering engine on your site go to the Site settings > Beta features section:
Confirm Liquid engine is enabled
To confirm a particular site is being rendered using the new Liquid engine you can perform this quick test:
- Create a new blank page
- Insert this piece of code
{{ this | json }}, it will list all the Liquid tags you can use in a JSON format
- If Liquid is running, the tag will be interpreted and you will get the code below:
Getting started
Let's take a closer look at the basics of Liquid and how to get started with using it in your projects. | https://docs.worldsecuresystems.com/developers/liquid/introduction-to-liquid | 2018-08-14T09:10:19 | CC-MAIN-2018-34 | 1534221208750.9 | [array(['https://screencasteu.worldsecuresystems.com/Mihai/2015-09-02_10.47.53.png',
None], dtype=object)
array(['https://screencasteu.worldsecuresystems.com/Mihai/2015-09-30_16.11.17.png',
None], dtype=object)
array(['https://screencasteu.worldsecuresystems.com/Mihai/2015-09-30_16.13.43.png',
None], dtype=object) ] | docs.worldsecuresystems.com |
Win32_OperatingSystem class
The Win32_OperatingSystem WMI class represents a Windows-based operating system installed on a computer.
The following syntax is simplified from Managed Object Format (MOF) code and includes all of the inherited properties. Properties and methods are in alphabetic order, not MOF order.
Syntax
[Singleton, Dynamic, Provider("CIMWin32"), SupportsUpdate, UUID("{8502C4DE-5FBB-11D2-AAC1-006008C78BC7}"), AMENDMENT] = 2; PortableOperatingSystem;; uint8 QuantumLength; uint8 QuantumType; };
The Win32_OperatingSystem class has these types of members:
Methods
The Win32_OperatingSystem class has these methods.
Properties
The Win32_OperatingSystem class has these properties.
BootDevice
Data type: string
Access type: Read-only
Qualifiers: MappingStrings ("Win32API|DRIVE_MAP_INFO|btInt13Unit")
Name of the disk drive from which the Windows operating system starts.
Example: "\\Device\Harddisk0"
BuildNumber
Data type: string
Access type: Read-only
Qualifiers: MappingStrings ("Win32API|System Information Structures|OSVERSIONINFOEX|dwBuildNumber")
Build number of an operating system. It can be used for more precise version information than product release version numbers.
Example: "1381"
BuildType
Data type: string
Access type: Read-only
Qualifiers: MappingStrings ("Win32Registry|Software\\Microsoft\\Windows\\CurrentVersion|CurrentType")
Type of build used for an operating system.
Examples: ""retail build"", ""checked build""
Caption
Data type: string
Access type: Read-only
Qualifiers: MaxLen (64), DisplayName ("Caption")
Short description of the object—a one-line string. The string includes the operating system version. For example, "Microsoft Windows 7 Enterprise ". This property can be localized.
Windows Vista and Windows 7: This property may contain trailing characters. For example, the string "Microsoft Windows 7 Enterprise " (trailing space included) may be necessary to retrieve information using this property.
This property is inherited from CIM_ManagedSystemElement.
CodeSet
Data type: string
Access type: Read-only
Qualifiers: MaxLen (6), MappingStrings ("Win32API|National Language Support Functions|GetLocaleInfo|LOCALE_IDEFAULTANSICODEPAGE")"
CountryCode
Data type: string
Access type: Read-only
Qualifiers: MappingStrings ("Win32API|National Language Support Functions|GetLocaleInfo|LOCALE_ICOUNTRY")
Code for the country/region that an operating system uses. Values are based on international phone dialing prefixes—also referred to as IBM country/region codes. This property can use a maximum of six characters to define the country/region code value.
Example: "1" (United States)
CreationClassName
Name of the first concrete class that appears in the inheritance chain used in the creation of an instance. When used with other key properties of the class, this property allows all instances of this class and its subclasses to be identified uniquely.
This property is inherited from CIM_OperatingSystem.
CSCreationClassName
Data type: string
Access type: Read-only
Qualifiers: Propagated ("CIM_ComputerSystem.CreationClassName"), CIM_Key, MaxLen (256)
Creation class name of the scoping computer system.
This property is inherited from CIM_OperatingSystem.
CSDVersion
Data type: string
Access type: Read-only
Qualifiers: MappingStrings ("Win32API|System Information Structures|OSVERSIONINFOEX|szCSDVersion")
NULL-terminated string that indicates the latest service pack installed on a computer. If no service pack is installed, the string is NULL.
Example: "Service Pack 3"
CSName
Data type: string
Access type: Read-only
Qualifiers: Propagated ("CIM_ComputerSystem.Name"), CIM_Key, MaxLen (256)
Name of the scoping computer system.
This property is inherited from CIM_OperatingSystem.
CurrentTimeZone
Number, in minutes, an operating system is offset from Greenwich mean time (GMT). The number is positive, negative, or zero.
This property is inherited from CIM_OperatingSystem.
DataExecutionPrevention_32BitApplications
Data type: boolean
Access type: Read-only
Qualifiers: MappingStrings ("WMI").
DataExecutionPrevention_Available
Data type: boolean
Access type: Read-only
Qualifiers: MappingStrings ("WMI").
DataExecutionPrevention_Drivers
Data type: boolean
Access type: Read-only
Qualifiers: MappingStrings ("WMI").
DataExecutionPrevention_SupportPolicy
Data type: uint8
Access type: Read-only
Qualifiers: MappingStrings ("WMI")
Indicates which Data Execution Prevention (DEP) setting is applied. The DEP setting specifies the extent to which DEP applies to 32-bit applications on the system. DEP is always applied to the Windows kernel.
Always Off (0)
DEP is turned off for all 32-bit applications on the computer with no exceptions. This setting is not available for the user interface.
Always On (1)
DEP is enabled for all 32-bit applications on the computer. This setting is not available for the user interface.
Opt In (2)
DEP is enabled for a limited number of binaries, the kernel, and all Windows-based services. However, it is off by default for all 32-bit applications. A user or administrator must explicitly choose either the Always On or the Opt Out setting before DEP can be applied to 32-bit applications.
Opt Out (3)
DEP is enabled by default for all 32-bit applications. A user or administrator can explicitly remove support for a 32-bit application by adding the application to an exceptions list.
Debug
Data type: boolean
Access type: Read-only
Qualifiers: MappingStrings ("Win32API|GetSystemMetrics|SM_DEBUG")
Operating system is a checked (debug) build. If True, the debugging version
Data type: string
Access type: Read/write
Qualifiers: Override ("Description"), MappingStrings ("WMI")
Description of the Windows operating system. Some user interfaces for example, those that allow editing of this description, limit its length to 48 characters.
Distributed
Data type: boolean
Access type: Read-only
If True, the operating system is distributed across several computer system nodes. If so, these nodes should be grouped as a cluster.
This property is inherited from CIM_OperatingSystem.
EncryptionLevel
Data type: uint32
Access type: Read-only
Encryption level for secure transactions: 40-bit, 128-bit, or n-bit.
40-bit (0)
128-bit (1)
n-bit (2)
ForegroundApplicationBoost
Data type: uint8
Access type: Read/write
Qualifiers: MappingStrings ("Win32Registry|SYSTEM\\CurrentControlSet\\Control\\PriorityControl|Win32PrioritySeparation")
Increase in priority is given to the foreground application. Application boost is implemented by giving an application more execution time slices (quantum lengths).
None (0)
The system boosts the quantum length by 6.
Minimum (1)
The system boosts the quantum length by 12.
Maximum (2)
The system boosts the quantum length by 18.
FreePhysicalMemory
Number, in kilobytes, of physical memory currently unused and available.
For more information about using uint64 values in scripts, see Scripting in WMI.
This property is inherited from CIM_OperatingSystem.
FreeSpaceInPagingFiles
Data type: uint64
Access type: Read-only
Qualifiers: MappingStrings ("MIF.DMTF|System Memory Settings|001.4"), Units ("kilobytes")
Number, in kilobytes, that can be mapped into the operating system paging files without causing any other pages to be swapped out.
For more information about using uint64 values in scripts, see Scripting in WMI.
This property is inherited from CIM_OperatingSystem.
FreeVirtualMemory
Number, in kilobytes, of virtual memory currently unused and available.
For more information about using uint64 values in scripts, see Scripting in WMI.
This property is inherited from CIM_OperatingSystem.
InstallDate
Data type: datetime
Access type: Read-only
Qualifiers: MappingStrings ("MIF.DMTF|ComponentID|001.5"), DisplayName ("Install Date")
Date object was installed. This property does not require a value to indicate that the object is installed.
This property is inherited from CIM_ManagedSystemElement.
LargeSystemCache
Data type: uint32
Access type: Read-only
Qualifiers: DEPRECATED
This property is obsolete and not supported.
Optimize for Applications (0)
Optimize memory for applications.
Optimize for System Performance (1)
Optimize memory for system performance.
LastBootUpTime
Data type: datetime
Access type: Read-only
Date and time the operating system was last restarted.
This property is inherited from CIM_OperatingSystem.
LocalDateTime
Data type: datetime
Access type: Read-only
Qualifiers: MappingStrings ("MIB.IETF|HOST-RESOURCES-MIB.hrSystemDate", "MIF.DMTF|General Information|001.6")
Operating system version of the local date and time-of-day.
This property is inherited from CIM_OperatingSystem.
Locale
Data type: string
Access type: Read-only
Qualifiers: MappingStrings ("Win32API|National Language Support Functions|GetLocaleInfo|LOCALE_ILANGUAGE")
Language identifier used by the operating system. A language identifier is a standard international numeric abbreviation for a country/region. Each language has a unique language identifier (LANGID), a 16-bit value that consists of a primary language identifier and a secondary language identifier.
Manufacturer
Data type: string
Access type: Read-only
Qualifiers: MappingStrings ("WMI")
Name of the operating system manufacturer. For Windows-based systems, this value is "Microsoft Corporation".
MaxNumberOfProcesses
Data type: uint32
Access type: Read-only
Qualifiers: MappingStrings ("MIB.IETF|HOST-RESOURCES-MIB.hrSystemMaxProcesses")).
This property is inherited from CIM_OperatingSystem.
MaxProcessMemorySize.
For more information about using uint64 values in scripts, see Scripting in WMI.
This property is inherited from CIM_OperatingSystem.
MUILanguages
Data type: string array
Access type: Read-only
Qualifiers: MappingStrings ("WMI").
Name
Data type: string
Access type: Read-only
Operating system instance within a computer system.
This property is inherited from CIM_OperatingSystem.
NumberOfLicensedUsers
Data type: uint32
Access type: Read-only
Number of user licenses for the operating system. If unlimited, enter 0 (zero). If unknown, enter -1.
This property is inherited from CIM_OperatingSystem.
NumberOfProcesses
Data type: uint32
Access type: Read-only
Qualifiers: MappingStrings ("MIB.IETF|HOST-RESOURCES-MIB.hrSystemProcesses")
Number of process contexts currently loaded or running on the operating system.
This property is inherited from CIM_OperatingSystem.
NumberOfUsers
Data type: uint32
Access type: Read-only
Qualifiers: MappingStrings ("MIB.IETF|HOST-RESOURCES-MIB.hrSystemNumUsers")
Number of user sessions for which the operating system is storing state information currently.
This property is inherited from CIM_OperatingSystem.
OperatingSystemSKU
Data type: uint32
Access type: Read-only
Qualifiers: MappingStrings ("WMI")
Stock Keeping Unit (SKU) number for the operating system. These values are the same as the PRODUCT_* constants defined in WinNT.h that are used with the GetProductInfo function.
The following list lists possible SKU values.
PRODUCT_UNDEFINED (0)
Undefined
PRODUCT_ULTIMATE (1)
Ultimate Edition, e.g. Windows Vista Ultimate.
PRODUCT_HOME_BASIC (2)
Home Basic Edition
PRODUCT_HOME_PREMIUM (3)
Home Premium Edition
PRODUCT_ENTERPRISE (4)
Enterprise Edition
PRODUCT_BUSINESS (6)
Business Edition
PRODUCT_STANDARD_SERVER (7)
Windows Server Standard Edition (Desktop Experience installation)
PRODUCT_DATACENTER_SERVER (8)
Windows Server Datacenter Edition (Desktop Experience installation)
PRODUCT_SMALLBUSINESS_SERVER (9)
Small Business Server Edition
PRODUCT_ENTERPRISE_SERVER (10)
Enterprise Server Edition
PRODUCT_STARTER (11)
Starter Edition
PRODUCT_DATACENTER_SERVER_CORE (12)
Datacenter Server Core Edition
PRODUCT_STANDARD_SERVER_CORE (13)
Standard Server Core Edition
PRODUCT_ENTERPRISE_SERVER_CORE (14)
Enterprise Server Core Edition
PRODUCT_WEB_SERVER (17)
Web Server Edition
PRODUCT_HOME_SERVER (19)
Home Server Edition
PRODUCT_STORAGE_EXPRESS_SERVER (20)
Storage Express Server Edition
PRODUCT_STORAGE_STANDARD_SERVER (21)
Windows Storage Server Standard Edition (Desktop Experience installation)
PRODUCT_STORAGE_WORKGROUP_SERVER (22)
Windows Storage Server Workgroup Edition (Desktop Experience installation)
PRODUCT_STORAGE_ENTERPRISE_SERVER (23)
Storage Enterprise Server Edition
PRODUCT_SERVER_FOR_SMALLBUSINESS (24)
Server For Small Business Edition
PRODUCT_SMALLBUSINESS_SERVER_PREMIUM (25)
Small Business Server Premium Edition
PRODUCT_ENTERPRISE_N (27)
Windows Enterprise Edition
PRODUCT_ULTIMATE_N (28)
Windows Ultimate Edition
PRODUCT_WEB_SERVER_CORE (29)
Windows Server Web Server Edition (Server Core installation)
PRODUCT_STANDARD_SERVER_V (36)
Windows Server Standard Edition without Hyper-V
PRODUCT_DATACENTER_SERVER_V (37)
Windows Server Datacenter Edition without Hyper-V (full installation)
PRODUCT_ENTERPRISE_SERVER_V (38)
Windows Server Enterprise Edition without Hyper-V (full installation)
PRODUCT_DATACENTER_SERVER_CORE_V (39)
Windows Server Datacenter Edition without Hyper-V (Server Core installation)
PRODUCT_STANDARD_SERVER_CORE_V (40)
Windows Server Standard Edition without Hyper-V (Server Core installation)
PRODUCT_ENTERPRISE_SERVER_CORE_V (41)
Windows Server Enterprise Edition without Hyper-V (Server Core installation)
PRODUCT_HYPERV (42)
Microsoft Hyper-V Server
PRODUCT_STORAGE_EXPRESS_SERVER_CORE (43)
Storage Server Express Edition (Server Core installation)
PRODUCT_STORAGE_STANDARD_SERVER_CORE (44)
Storage Server Standard Edition (Server Core installation)
PRODUCT_STORAGE_WORKGROUP_SERVER_CORE (45)
Storage Server Workgroup Edition (Server Core installation)
PRODUCT_STORAGE_ENTERPRISE_SERVER_CORE (46)
Storage Server Enterprise Edition (Server Core installation)
PRODUCT_SB_SOLUTION_SERVER (50)
Windows Server Essentials (Desktop Experience installation)
PRODUCT_SMALLBUSINESS_SERVER_PREMIUM_CORE (63)
Small Business Server Premium (Server Core installation)
PRODUCT_CLUSTER_SERVER_V (64)
Windows Compute Cluster Server without Hyper-V
PRODUCT_CORE_ARM (97)
Windows RT
PRODUCT_CORE (101)
Windows Home
PRODUCT_PROFESSIONAL_WMC (103)
Windows Professional with Media Center
PRODUCT_MOBILE_CORE (104)
Windows Mobile
PRODUCT_IOTUAP (123)
Windows IoT (Internet of Things) Core
PRODUCT_DATACENTER_NANO_SERVER (143)
Windows Server Datacenter Edition (Nano Server installation)
PRODUCT_STANDARD_NANO_SERVER (144)
Windows Server Standard Edition (Nano Server installation)
PRODUCT_DATACENTER_WS_SERVER_CORE (147)
Windows Server Datacenter Edition (Server Core installation)
PRODUCT_STANDARD_WS_SERVER_CORE (148)
Windows Server Standard Edition (Server Core installation)
Organization
Data type: string
Access type: Read-only
Qualifiers: MappingStrings ("Win32Registry|Software\\Microsoft\\Windows\\CurrentVersion|RegisteredOrganization")
Company name for the registered user of the operating system.
Example: "Microsoft Corporation"
OSArchitecture
Data type: string
Access type: Read-only
Architecture of the operating system, as opposed to the processor. This property can be localized.
Example: 32-bit
OSLanguage
Data type: uint32
Access type: Read-only
Qualifiers: MappingStrings ("Win32Registry|DEFAULT\\Control Panel\\International|Locale")
Language version of the operating system installed. The following list lists the possible values. Example: 0x0807 (German, Switzerland).
1 (0x1)
Arabic
4 (0x4)
Chinese (Simplified)– China
9 (0x9)
1025 (0x401)
Arabic – Saudi Arabia
1026 (0x402)
Bulgarian
1027 (0x403)
Catalan
1028 (0x404)
Chinese (Traditional) – Taiwan
1029 (0x405)
Czech
1030 (0x406)
Danish
1031 (0x407)
German – Germany
1032 (0x408)
Greek
1033 (0x409)
English – United States
1034 (0x40A)
Spanish – Traditional Sort
1035 (0x40B)
Finnish
1036 (0x40C)
French – France
1037 (0x40D)
Hebrew
1038 (0x40E)
Hungarian
1039 (0x40F)
Icelandic
1040 (0x410)
Italian – Italy
1041 (0x411)
Japanese
1042 (0x412)
Korean
1043 (0x413)
Dutch – Netherlands
1044 (0x414)
Norwegian – Bokmal
1045 (0x415)
Polish
1046 (0x416)
Portuguese – Brazil
1047 (0x417)
Rhaeto-Romanic
1048 (0x418)
Romanian
1049 (0x419)
Russian
1050 (0x41A)
Croatian
1051 (0x41B)
Slovak
1052 (0x41C)
Albanian
1053 (0x41D)
Swedish
1054 (0x41E)
Thai
1055 (0x41F)
Turkish
1056 (0x420)
Urdu
1057 (0x421)
Indonesian
1058 (0x422)
Ukrainian
1059 (0x423)
Belarusian
1060 (0x424)
Slovenian
1061 (0x425)
Estonian
1062 (0x426)
Latvian
1063 (0x427)
Lithuanian
1065 (0x429)
Persian
1066 (0x42A)
Vietnamese
1069 (0x42D)
Basque (Basque)
1070 (0x42E)
Serbian
1071 (0x42F)
Macedonian (Macedonia (FYROM))
1072 (0x430)
Sutu
1073 (0x431)
Tsonga
1074 (0x432)
Tswana
1076 (0x434)
Xhosa
1077 (0x435)
Zulu
1078 (0x436)
Afrikaans
1080 (0x438)
Faeroese
1081 (0x439)
Hindi
1082 (0x43A)
Maltese
1084 (0x43C)
Scottish Gaelic (United Kingdom)
1085 (0x43D)
Yiddish
1086 (0x43E)
Malay – Malaysia
2049 (0x801)
Arabic – Iraq
2052 (0x804)
Chinese (Simplified) – PRC
2055 (0x807)
German – Switzerland
2057 (0x809)
English – United Kingdom
2058 (0x80A)
Spanish – Mexico
2060 (0x80C)
French – Belgium
2064 (0x810)
Italian – Switzerland
2067 (0x813)
Dutch – Belgium
2068 (0x814)
Norwegian – Nynorsk
2070 (0x816)
Portuguese – Portugal
2072 (0x818)
Romanian – Moldova
2073 (0x819)
Russian – Moldova
2074 (0x81A)
Serbian – Latin
2077 (0x81D)
Swedish – Finland
3073 (0xC01)
Arabic – Egypt
3076 (0xC04)
Chinese (Traditional) – Hong Kong SAR
3079 (0xC07)
German – Austria
3081 (0xC09)
English – Australia
3082 (0xC0A)
Spanish – International Sort
3084 (0xC0C)
French – Canada
3098 (0xC1A)
Serbian – Cyrillic
4097 (0x1001)
Arabic – Libya
4100 (0x1004)
Chinese (Simplified) – Singapore
4103 (0x1007)
German – Luxembourg
4105 (0x1009)
English – Canada
4106 (0x100A)
Spanish – Guatemala
4108 (0x100C)
French – Switzerland
5121 (0x1401)
Arabic – Algeria
5127 (0x1407)
German – Liechtenstein
5129 (0x1409)
English – New Zealand
5130 (0x140A)
Spanish – Costa Rica
5132 (0x140C)
French – Luxembourg
6145 (0x1801)
Arabic – Morocco
6153 (0x1809)
English – Ireland
6154 (0x180A)
Spanish – Panama
7169 (0x1C01)
Arabic – Tunisia
7177 (0x1C09)
English – South Africa
7178 (0x1C0A)
Spanish – Dominican Republic
8193 (0x2001)
Arabic – Oman
8201 (0x2009)
English – Jamaica
8202 (0x200A)
Spanish – Venezuela
9217 (0x2401)
Arabic – Yemen
9226 (0x240A)
Spanish – Colombia
10241 (0x2801)
Arabic – Syria
10249 (0x2809)
English – Belize
10250 (0x280A)
Spanish – Peru
11265 (0x2C01)
Arabic – Jordan
11273 (0x2C09)
English – Trinidad
11274 (0x2C0A)
Spanish – Argentina
12289 (0x3001)
Arabic – Lebanon
12298 (0x300A)
Spanish – Ecuador
13313 (0x3401)
Arabic – Kuwait
13322 (0x340A)
Spanish – Chile
14337 (0x3801)
Arabic – U.A.E.
14346 (0x380A)
Spanish – Uruguay
15361 (0x3C01)
Arabic – Bahrain
15370 (0x3C0A)
Spanish – Paraguay
16385 (0x4001)
Arabic – Qatar
16394 (0x400A)
Spanish – Bolivia
17418 (0x440A)
Spanish – El Salvador
18442 (0x480A)
Spanish – Honduras
19466 (0x4C0A)
Spanish – Nicaragua
20490 (0x500A)
Spanish – Puerto Rico
OSProductSuite
Data type: uint32
Access type: Read-only
Qualifiers: MappingStrings ("Win32Registry|SYSTEM\\CurrentControlSet\\Control\\ProductOptions|ProductSuite"), BitValues ("Small Business", "Enterprise", "BackOffice", "Communication Server", "Terminal Server", "Small Business(Restricted)", "Embedded NT", "Data Center")
Installed and licensed system product additions to the operating system. For example, the value of 146 (0x92) for OSProductSuite indicates Enterprise, Terminal Services, and Data Center (bits one, four, and seven set). The following list lists possible values.
1 (0x1)
Microsoft Small Business Server was once installed, but may have been upgraded to another version of Windows.
2 (0x2)
Windows Server 2008 Enterprise is installed.
4 (0x4)
Windows BackOffice components are installed.
8 (0x8)
Communication Server is installed.
16 (0x10)
Terminal Services is installed.
32 (0x20)
Microsoft Small Business Server is installed with the restrictive client license.
64 (0x40)
Windows Embedded is installed.
128 (0x80)
A Datacenter edition is installed.
256 (0x100)
Terminal Services is installed, but only one interactive session is supported.
512 (0x200)
Windows Home Edition is installed.
1024 (0x400)
Web Server Edition is installed.
8192 (0x2000)
Storage Server Edition is installed.
16384 (0x4000)
Compute Cluster Edition is installed.
OSType
Data type: uint16
Access type: Read-only
Qualifiers: ModelCorrespondence ("CIM_OperatingSystem.OtherTypeDescription")
Type of operating system. The following list identifies the possible values.
This property is inherited from CIM_OperatingSystem.
Unknown (0)
Other (1)
MACOS (2)
MACROS)
Solaris)
OS/390 (60)
VSE (61)
TPF (62)
OtherTypeDescription
Data type: string
Access type: Read-only
Qualifiers: MaxLen (64), ModelCorrespondence ("CIM_OperatingSystem.OSType")
Additional description for the current operating system version.
This property is inherited from CIM_OperatingSystem.
PAEEnabled
Data type: Boolean
Access type: Read-only.
PlusProductID
Data type: string
Access type: Read-only
Qualifiers: MappingStrings ("Win32Registry|Software\\Microsoft\\Windows NT\\CurrentVersion|Plus! ProductId")
Not supported.
PlusVersionNumber
Data type: string
Access type: Read-only
Qualifiers: MappingStrings ("Win32Registry|Software\\Microsoft\\Windows NT\\CurrentVersion|Plus! VersionNumber")
Not supported.
PortableOperatingSystem
Data type: boolean
Access type: Read-only
Specifies whether the operating system booted from an external USB device. If true, the operating system has detected it is booting on a supported locally connected storage device.
Windows Server 2008 R2, Windows 7, Windows Server 2008 and Windows Vista: This property is not supported before Windows 8 and Windows Server 2012.
Primary
Data type: boolean
Access type: Read-only
Qualifiers: MappingStrings ("WMI")
Specifies whether this is the primary operating system.
ProductType
Data type: uint32
Access type: Read-only
Additional system information.
Work Station (1)
Domain Controller (2)
Server (3)
QuantumLength
Data type: uint8
Access type: Read/write
Qualifiers: MappingStrings ("Win32Registry|SYSTEM\\CurrentControlSet\\Control\\PriorityControl|Win32PrioritySeparation")
Not supported
**Windows Server 2008 and Windows Vista: **
The QuantumLength property defines the number of clock ticks per quantum. A quantum is a unit of execution time that the scheduler is allowed to give to an application before switching to other applications. When a thread runs one quantum, the kernel preempts it and moves it to the end of a queue for applications with equal priorities. The actual length of a thread's quantum varies across different Windows platforms. For Windows NT/Windows 2000 only.
The possible values are.
Unknown (0)
One tick (1)
Two ticks (2)
QuantumType
Data type: uint8
Access type: Read/write
Not supported
**Windows Server 2008 and Windows Vista: **
The QuantumType property specifies either fixed or variable length quantums. Windows defaults to variable length quantums where the foreground application has a longer quantum than the background applications. Windows Server defaults to fixed-length quantums. A quantum is a unit of execution time that the scheduler is allowed to give to an application before switching to another application. When a thread runs one quantum, the kernel preempts it and moves it to the end of a queue for applications with equal priorities. The actual length of a thread's quantum varies across different Windows platforms.
The possible values are.
Unknown (0)
Fixed (1)
Variable (2)
RegisteredUser
Data type: string
Access type: Read-only
Qualifiers: MappingStrings ("Win32Registry|Software\\Microsoft\\Windows NT\\CurrentVersion|RegisteredOwner")
Name of the registered user of the operating system.
Example: "Ben Smith"
SerialNumber
Data type: string
Access type: Read-only
Qualifiers: MappingStrings ("Win32Registry|Software\\Microsoft\\Windows NT\\CurrentVersion|ProductId")
Operating system product serial identification number.
Example: "10497-OEM-0031416-71674"
ServicePackMajorVersion
Data type: uint16
Access type: Read-only
Qualifiers: MappingStrings ("Win32API|System Information Structures|OSVERSIONINFOEX|wServicePackMajor")
Major version number of the service pack installed on the computer system. If no service pack has been installed, the value is 0 (zero).
ServicePackMinorVersion
Data type: uint16
Access type: Read-only
Qualifiers: MappingStrings ("Win32API|System Information Structures|OSVERSIONINFOEX|wServicePackMinor")
Minor version number of the service pack installed on the computer system. If no service pack has been installed, the value is 0 (zero).
SizeStoredInPagingFiles
Data type: uint64
Access type: Read-only
Qualifiers: MappingStrings ("MIF.DMTF|System Memory Settings|001.3"), Units ("kilobytes")
Total number of kilobytes that can be stored in the operating system paging files—0 (zero) indicates that there are no paging files. Be aware that this number does not represent the actual physical size of the paging file on disk.
For more information about using uint64 values in scripts, see Scripting in WMI.
This property is inherited from CIM_OperatingSystem..
This property is inherited from CIM_ManagedSystemElement.
OK ("OK")
Error ("Error")
Degraded ("Degraded")
Unknown ("Unknown")
Pred Fail ("Pred Fail")
Starting ("Starting")
Stopping ("Stopping")
Service ("Service")
Stressed ("Stressed")
NonRecover ("NonRecover")
No Contact ("No Contact")
Lost Comm ("Lost Comm")
SuiteMask
Data type: uint32
Access type: Read-only
Qualifiers: BitMap ("0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10"), BitValues ("Windows Server, Small Business Edition", "Windows Server, Enterprise Edition", "Windows Server, Backoffice Edition", "Windows Server, Communications Edition", "Microsoft Terminal Services", "Windows Server, Small Business Edition Restricted", "Windows Embedded", "Windows Server, Datacenter Edition", "Single User", "Windows Home Edition", "Windows Server, Web Edition")
Bit flags that identify the product suites available on the system.
For example, to specify both Personal and BackOffice, set SuiteMask to
4 | 512 or
516.
1
Small Business
2
Enterprise
4
BackOffice
8
Communications
16
Terminal Services
32
Small Business Restricted
64
Embedded Edition
128
Datacenter Edition
256
Single User
512
Home Edition
1024
Web Server Edition
SystemDevice
Data type: string
Access type: Read-only
Qualifiers: MappingStrings ("Win32API|Registry Functions|GetPrivateProfileString|Paths|TargetDevice")
Physical disk partition on which the operating system is installed.
SystemDirectory
Data type: string
Access type: Read-only
Qualifiers: MappingStrings ("Win32API|System Information FunctionsGetSystemDirectory)
System directory of the operating system.
Example: "C:\WINDOWS\SYSTEM32"
SystemDrive
Data type: string
Access type: Read-only
Letter of the disk drive on which the operating system resides. Example: "C:"
TotalSwapSpaceSize.
For more information about using uint64 values in scripts, see Scripting in WMI.
This property is inherited from CIM_OperatingSystem.
TotalVirtualMemorySize.
For more information about using uint64 values in scripts, see Scripting in WMI.
This property is inherited from CIM_OperatingSystem.
TotalVisibleMemorySize
Total amount, in kilobytes, of physical memory available to the operating system. This value does not necessarily indicate the true amount of physical memory, but what is reported to the operating system as available to it.
For more information about using uint64 values in scripts, see Scripting in WMI.
This property is inherited from CIM_OperatingSystem.
Version
Data type: string
Access type: Read-only
Qualifiers: Override ("Version"), MappingStrings ("Win32API|System Information Structures|OSVERSIONINFOEX|dwMajorVersion, dwMinorVersion")
Version number of the operating system.
Example: "4.0"
WindowsDirectory
Data type: string
Access type: Read-only
Qualifiers: MappingStrings ("Win32API|System Information Functions|GetWindowsDirectory")
Windows directory of the operating system.
Example: "C:\WINDOWS"
Remarks
The Win32_OperatingSystem class is derived from CIM_OperatingSystem.
Any operating system that can be installed on a computer that can run a Windows-based operating system is a descendant or member of this class. Win32_OperatingSystem is a singleton class. To get the single instance, use "@" for the key.
Unlike most of the other WMI classes generated by MgmtClassGen, the OperatingSystem.CreateInstance() method will return a blank OperatingSystem object. Therefore, if you are using C# with MgmtClassGen, you can use the following code:
WMI.OperatingSystem os = new ROOT.CIMV2.win32.OperatingSystem();
Examples
You can find a VBScript example that obtains operating system and processor data from Win32_ComputerSystem, Win32_Processor, and Win32_OperatingSystem in the Win32_Processor topic examples.
The Generate Exchange Environment Reports using Powershell PowerShell sample on TechNet Gallery uses a Win32_OperatingSystem class as part of a larger application to generate exchange environment reports.
The Get Server Uptime Using WMI sample in the TechNet Gallery uses the LastBootupTime property to determine how long the server has been active. The sample also uses the timeout option to ensure that the WMI call does not hang.
The WMI Information Retriever VBScript code example on the TechNet Gallery uses the Win32_OperatingSystem class to retrieve OS information from a number of remote computers.
The following script obtains the instances of Win32_OperatingSystem in the default "Root\CIMv2" namespace, and then displays information about the operating system. Next if Err <> 0 Then WScript.Echo Err.Description Err.Clear End if
The following PowerShell code sample displays all the operating information about the current OS.
# get instance $os = Get-WmiObject Win32_OperatingSystem # output information: "The class has {0} properties" -f $os.properties.count "Details on this class:" $os | Format-List *
Requirements
See also | https://docs.microsoft.com/en-us/windows/desktop/CIMWin32Prov/win32-operatingsystem | 2018-08-14T09:18:14 | CC-MAIN-2018-34 | 1534221208750.9 | [] | docs.microsoft.com |
Sources section?
Shouldn't we include a sources section at the bottom of each page to the template for new articles? This will encourage users to attribute properly and give themselves credit for original articles. — Harishankar 2012/08/24 06:53
Good idea! — Eric Hameleers 2012/08/24 07:22
About the use of the discussion page: is it really useful to add the timestamp of the comments knowing that we live in different time zones? escaflown 2012/09/01 11:52
A part (there, 2nd sentence) seems unclear/wrong to me: “[…]frustrate all these potential writers, the more so since we count on[…]”. Could the author rephrase it please ?
PS: why do I have to use an extra space before the horiz line ? Doesn't work without. — zithro 2012/09/05 11:33
Is any help needed here? What do we need to finish off this article? — V. T. Eric Layton 2012/09/11 17:41 | https://docs.slackware.com/talk:slackdocs:tutorial | 2018-08-14T08:22:00 | CC-MAIN-2018-34 | 1534221208750.9 | [array(['https://docs.slackware.com/lib/plugins/bookcreator/images/add.png',
None], dtype=object)
array(['https://docs.slackware.com/lib/plugins/bookcreator/images/del.png',
None], dtype=object) ] | docs.slackware.com |
Procedure 2.7. You buy 3.2 kilograms of apples at 2 € per kilogram. How much does the whole cost in Australian dollars?
Select the AUD - Australian dollar item if available and if not already selected.
Type
2and press .
Press the button or the * key. Notice the
Xsign at the left of the Input display.
Now type
3.2followed by the button or the Enter key: this means “3.2 units” at 2 € each.
The result in Australian dollars matches
6.4 €.
You could also have used the following order:
3.2 = * 2 € or even
3.2 * 2 €.
Note
It's not possible to multiply X euros by Y dollars, just as you don't multiply 10 fingers by 2 ears. | https://docs.kde.org/trunk4/en/extragear-utils/keurocalc/usage-multiplications.html | 2016-08-31T14:15:56 | CC-MAIN-2016-36 | 1471982290634.12 | [array(['/trunk4/common/top-kde.jpg', None], dtype=object)] | docs.kde.org |
Copy data from Greenplum using Azure Data Factory (Beta)
This article outlines how to use the Copy Activity in Azure Data Factory to copy data from Greenplum..
Important
This connector is currently in Beta. You can try it out and give us feedback. Do not use it in production environments.
Supported capabilities
You can copy data from Greenplum Greenplum connector.
Linked service properties
The following properties are supported for Greenplum linked service:
Example:
{ "name": "GreenplumLinkedService", "properties": { "type": "Greenplum", "typeProperties": { "connectionString": { "type": "SecureString", "value": "HOST=<server>;PORT=<port>;DB=<database>;UID=<user name>;PWD=<password>" } }, "connectVia": { "referenceName": "<name of Integration Runtime>", "type": "IntegrationRuntimeReference" } } }
Dataset properties
For a full list of sections and properties available for defining datasets, see the datasets article. This section provides a list of properties supported by Greenplum dataset.
To copy data from Greenplum, set the type property of the dataset to GreenplumTable. There is no additional type-specific property in this type of dataset.
Example
{ "name": "GreenplumDataset", "properties": { "type": "GreenplumTable", "linkedServiceName": { "referenceName": "<Greenplum linked service name>", "type": "LinkedServiceReference" } } }
Copy activity properties
For a full list of sections and properties available for defining activities, see the Pipelines article. This section provides a list of properties supported by Greenplum source.
GreenplumSource as source
To copy data from Greenplum, set the source type in the copy activity to GreenplumSource. The following properties are supported in the copy activity source section:
Example:
"activities":[ { "name": "CopyFromGreenplum", "type": "Copy", "inputs": [ { "referenceName": "<Greenplum input dataset name>", "type": "DatasetReference" } ], "outputs": [ { "referenceName": "<output dataset name>", "type": "DatasetReference" } ], "typeProperties": { "source": { "type": "GreenplumSource", "query": "SELECT * FROM MyTable" }, "sink": { "type": "<sink type>" } } } ]
Next steps
For a list of data stores supported as sources and sinks by the copy activity in Azure Data Factory, see supported data stores. | https://docs.microsoft.com/en-us/azure/data-factory/connector-greenplum | 2018-02-18T03:22:49 | CC-MAIN-2018-09 | 1518891811352.60 | [] | docs.microsoft.com |
The guide below shows you how to send a newsletter to all your subscribers in 3 easy steps.
1. Go to the People tab. That’s where every email campaign starts in Metrilo.
2. Then, click on the Email button at the top of the page and choose the way you want to design your email. You can use our block-based email template builder or upload a predesigned HTML file from a ZIP.
- Here you can understand how to build full HTML emails in Metrilo.
- In case that you decide to use our builder, is super easy to use and provides all options you need to design a good-looking email. You can add Products, Coupons, Buttons, Images and text.
3. After you design the email (check out design examples from other Metrilo users: Sample emails), you just click on “send”.
Then, you can go to the Email Performance tab and monitor how the campaign is performing.
If you don’t want to send a bulk email to all your subscribers, check out the following article on how to segment your customer base.
What’s next?
How to start my first automated email marketing campaign? | http://docs.metrilo.com/email-marketing/getting-started/how-to-send-my-first-newsletter | 2018-02-18T02:52:17 | CC-MAIN-2018-09 | 1518891811352.60 | [array(['https://downloads.intercomcdn.com/i/o/35964212/295321be2903039caf16f39b/Barrington_Coffee_Roasting_Co__-_CRM.png',
None], dtype=object) ] | docs.metrilo.com |
.
Production pre-requisites¶
You will minimally need to have Java 8 or greater, Grails, git, ant, a servlet container e.g. tomcat7+, jetty, or resin. An external database such as PostgreSQL or MySQL is generally used for production, but instructions for the H2 database is also provided.
Important note: The default memory for Tomcat and Jetty is insufficient to run Apollo (and most other web apps).You should increase the memory according to these instructions.
Other possible build settings for JBrowse (based on an Ubuntu 16 install):
sudo apt-get update && sudo apt-get install zlib1g-dev libpng-dev libgd2-noxpm-dev build-essential git python-software-properties curl -sL | sudo -E bash - sudo apt-get install nodejs
NOTE: npm (installed with nodejs) must be version 6 or better. If not installed from the above instructions, most stable versions of node.js will supply this. nvm may also be useful.
NOTE: you must link nodejs to to node if your system installs it as a
nodejs binary instead of a node one. E.g.,
sudo ln -s /usr/bin/nodejs /usr/bin/node
Build settings for Apollo specifically. Recent versions of tomcat7 will work, though tomcat8 / .project
Download Apollo from the latest release under source-code and unzip. Test installation by running
./apollo run-local and see that the web-server starts up on. To setup for production continue onto configuration below after install .
If you get an
Unsupported major.minor error or similar, please confirm that the version of java that tomcat is running
ps -ef | grep java is the same as the one you used to build. Setting JAVA_HOME to the Java 8 JDK should fix most problems.
JSON in the URL with newer versions of Tomcat¶
When JSON is added to the URL string (e.g.,
addStores and
addTracks) you may get this error with newer patched versions of Tomcat 7.0.73, 8.0.39, 8.5.7:
java.lang.IllegalArgumentException: Invalid character found in the request target. The valid characters are defined in RFC 7230 and RFC 3986
To fix these, the best solution we’ve come up with (and there may be many) is to explicitly allow these characters, which you can do starting with Tomcat versions: 7.0.76, 8.0.42, 8.5.12.
This is done by adding the following line to
$CATALINA_HOME/conf/catalina.properties:
tomcat.util.http.parser.HttpParser.requestTargetAllow=|{} there is a
sample-docker-apollo-config.groovy which allows control of the configuration via environment variables.
Furthermore, the
apollo-config.groovy has different groovy environments for test, development, and production modes.
The environment will be selected automatically selected depending on how it is run, e.g:
apollo deployor
apollo release.
Configure for Docker:¶
- Set up and export all of the environment variables you wish to configure. At bare minimum you will likely wish to set
WEBAPOLLO_DB_USERNAME,
WEBAPOLLO_DB_PASSWORD,
WEBAPOLLO_DB_DRIVER,
WEBAPOLLO_DB_DIALECT, and
WEBAPOLLO_DB_URI
- Create a new database in your chosen database backend and copy the sample-docker-apollo-config.groovy to apollo-config.groovy.
- Instructions and a script for launching docker with apollo and PostgreSQL. but the main idea here
is to learn how to use
apollo release to construct a build that includes javascript minimization
Pre-requisites for Javascript minimization¶
In addition to the system pre-requisites, the javascript compilation will use nodejs, which can be installed from a package manager on many platforms. Recommended setup for different platforms:
sudo apt-get install nodejs sudo yum install epel-release npm brew install node
Performing the javascript minimization¶
To build a Apollo release with Javascript minimization, you can use the command
./apollo release
This will compile JBrowse and Apollo javascript code into minimized files so that the number of HTTP requests that the client needs to make are reduced.
In all other respects,
apollo release is exactly the same as
apollo deploy though.
Performing active development¶
To perform active development of the codebase, use
./apollo debug
This will launch a temporary instance of Apollo by running
grails run-app and
ant devmode at the same time,
which means that any changes to the Java files will be picked up, allowing fast iteration.
If you modify the javascript files (i.e. the client directory), you can run
scripts/copy_client.sh and these will be
picked up on-the-fly too. | http://genomearchitect.readthedocs.io/en/latest/Setup.html | 2018-02-18T02:38:06 | CC-MAIN-2018-09 | 1518891811352.60 | [] | genomearchitect.readthedocs.io |
Implementing your own optimizer¶
PySwarms aims to be the go-to library for various PSO implementations, so if you are a researcher in swarm intelligence or a developer who wants to contribute, then read on this guide!
As a preliminary, here is a checklist whenever you will implement an optimizer:
- Propose an optimizer
- Write optimizer by inheriting from base classes
- Write a unit test
Proposing an optimizer¶
We wanted to make sure that PySwarms is highly-usable, and thus it is important that optimizers included in this library are either (1) classic textbook-PSO techniques or (2) highly-cited, published, optimization algorithms.
In case you wanted to include your optimization algorithm in this library, please raise an issue and add a short abstract on what your optimizer does. A link to a published paper (it’s okay if it’s behind a paywall) would be really helpful!
Inheriting from base classes¶
Most optimizers in this library inherit its attributes and methods from a set of built-in
base classes. You can check the existing classes in
pyswarms.base.
For example, if we take the
pyswarms.base.base_single class, a base-class for standard single-objective
continuous optimization algorithms such as global-best PSO (
pyswarms.single.global_best) and
local-best PSO (
pyswarms.single.local_best), we can see that it inherits a set of methods as
seen below:
The required methods can be seen in the base classes, and will raise a
NotImplementedError
if not called. Additional methods, private or not, can also be added depending on the needs of your
optimizer.
A short note on keyword arguments¶
The role of keyword arguments, or kwargs in short, is to act as a container for all other parameters
needed for the optimizer. You can define these things in your code, and create assertions to make all
of them required. However, note that in some implementations, required
options might include
c1,
c2, and
w. This is the case in
pyswarms.base.bases for instance.
A short note on
assertions()¶
You might notice that in most base classes, an
assertions() method is being called. This aims
to check if the user-facing input are correct. Although the method is called “assertions”, please make
all user-facing catches as raised Exceptions.
A short note on
__init__.py¶
We make sure that everything can be imported when the whole
pyswarms library is called. Thus,
please make sure to also edit the accompanying
__init__.py file in the directory you are working
on.
For example, if you write your optimizer class
MyOptimizer inside a file called
my_optimizer.py,
and you are working under the
/single directory, please update the
__init__.py like
the following:
from .global_best import GlobalBestPSO from .local_best import LocalBestPSO # Add your module from .my_optimizer import MyOptimizer __all__ = [ "GlobalBestPSO", "LocalBestPSO", "MyOptimizer" # Add your class ]
This ensures that it will be automatically initialized when the whole library is imported.
Writing unit tests¶
Testing is an important element of developing PySwarms, and we wanted everything to be as smooth as
possible, especially when doing the build and integrating. In this case, we provide the
tests
module in the package. In case you add a test for your optimizer, simply name them with the same
convention as in those tests.
You can perform separate checks by
$ python -m unittest tests.optimizers.<test_myoptimizer> | http://pyswarms.readthedocs.io/en/latest/contributing.optimizer.html | 2018-02-18T02:49:00 | CC-MAIN-2018-09 | 1518891811352.60 | [array(['_images/inheritance.png', 'Inheritance from base class'],
dtype=object) ] | pyswarms.readthedocs.io |
value:
unlock() method¶
unlock() method unlocks object when it’s no longer necessary to be locked. See the example above. | http://zf2.readthedocs.io/en/latest/modules/zend.memory.memory-objects.html | 2018-02-18T03:18:32 | CC-MAIN-2018-09 | 1518891811352.60 | [] | zf2.readthedocs.io |
- DesktopPlayer for Mac. end users with a single, unified product that extends the benefits and convenience of local desktop virtualization with the efficiency and control of central, policy-driven management – providing the best of both worlds.
For more information about the DesktopPlayer management component, see the Synchronizer page. | https://docs.citrix.com/en-us/desktopplayer/mac.html | 2018-02-18T03:29:37 | CC-MAIN-2018-09 | 1518891811352.60 | [] | docs.citrix.com |
Install snapd on Linux Mint
On Linux Mint snapd is available from the official repositories starting with the 18.2 (Sonya) release.
You can install snapd via:
sudo apt install snapd
After install, it is recommended to log out and in to ensure the snapd service is started and binary paths are correct. | https://docs.snapcraft.io/core/install-linux-mint | 2018-02-18T03:19:03 | CC-MAIN-2018-09 | 1518891811352.60 | [] | docs.snapcraft.io |
Subcommittee on Research and Technology (Committee on Science, Space, and Technology)
Subcommittee on Research and Technology (Committee on Science, Space, and Technology)
Wednesday, December 6, 2017 (10:00 AM)
2318 RHOB Washington, D.C.
Dr. Dawn Tilbury Assistant Director, Directorate for Engineering, National Science Foundation
Mr. Steve Blank Adjunct Professor, Management Science and Engineering, Stanford University
Dr. Dean Chang Associate Vice President, Innovation and Entrepreneurship, University of Maryland; Lead Principal Investigator, DC I-Corps Regional Node
Dr. Sue Carter Professor, Department of Physics, Director, Center for Innovation and Entrepreneurial Development, University of California, Santa Cruz
First Published:
November 29, 2017 at 10:10 AM
Last Updated:
December 6, 2017 at 11:10 AM | http://docs.house.gov/Committee/Calendar/ByEvent.aspx?EventID=106689 | 2018-02-18T03:14:06 | CC-MAIN-2018-09 | 1518891811352.60 | [] | docs.house.gov |
Wrapper for a value inside a cache that adds timestamp information for expiration and prevents "cache storms" with a Lock. JMM happens-before is ensured with AtomicReference. Objects in cache are assumed to not change after publication.
Gets a value from cache. If the key doesn't exist, it will create the value using the updater callback Prevents cache storms with a lock The key is always added to the cache. Null return values will also be cached. You can use this together with ConcurrentLinkedHashMap to create a bounded LRU cache
map- the cache map
key- the key to look up
timeoutMillis- cache entry timeout
updater- callback to create/update value
cacheEntryClass- CacheEntry implementation class to use
returnExpiredWhileUpdating- when true, return expired value while updating new value
cacheRequestObject- context object that gets passed to hasExpired, shouldUpdate and updateValue methods, not used in default implementation
gets the current value from the entry and updates it if it's older than timeout updater is a callback for creating an updated value. | http://docs.grails.org/latest/api/grails/util/CacheEntry.html | 2016-10-21T13:15:25 | CC-MAIN-2016-44 | 1476988718278.43 | [] | docs.grails.org |
This.; see Section 13.2.1, “'Tail'ing Files”.
<int-jmx:tree-polling-channel-adapter/>is provided; this.
IntegrationMBeanExporternow allows the configuration of a custom
ObjectNamingStrategyusing the
naming-strategyattribute.
For more information, see Section 8.1, “JMX Support”.”.”.
The”.” | http://docs.spring.io/spring-integration/docs/3.0.0.BUILD-SNAPSHOT/reference/html/whats-new.html | 2016-10-21T13:21:08 | CC-MAIN-2016-44 | 1476988718278.43 | [] | docs.spring.io |
HTML5 JSON Report Format
These examples are simply links to existing ServiceStack web services, which based on your browsers user-agent (i.e. Accept: ‘text/html’) provides this HTML format instead of the other serialization formats.
- Nort:
Based on convention, it generates a recursive and cascading view of your data using a combination of different sized definition lists and tables where appropriate. After it’s rendered convenience behaviour is applied allowing you to sort your tabular data, view the embedded JSON contents as well as providing links back to the original web service that generated the report including links to the other formats supported.
Completely self-contained
The report does not have any external CSS, JavaScript or Images which also helps achieve its super-fast load-time and rendering speed.
Embeds the complete snapshot of your web services data
The report embeds a complete, unaltered version of your ‘JSON webservice’ capturing a snapshot of the state of your data at a given point in time. It’s perfect for backups with the same document containing a human and programatic access to the data. The JSON data is embedded inside a valid and well-formed document, making it programatically accessible using a standard XML/HTML parser. The report also includes an interface to allow humans to copy it from a textbox.
It’s Fast
Like the other web services, the HTML format is just a raw C# IHttpHandler using .NET’s fastest JSON Serializer to serialize the response DTO to a JSON string which is embedded inside a static HTML string template. No other .aspx page or MVC web framework is used to get in the way to slow things down. High performance JavaScript techniques are also used to start generating the report at the earliest possible time.
Well supported in all modern browsers
It’s tested and works equally well on the latest versions of Chrome, Firefox, Safari, Opera and IE9. v1.83 Now works in IE8 but needs internet connection to download json2.js. (not tested in <=IE7)
It Validates (as reported by validator.w3.org)
This shouldn’t be important but as the technique of using JavaScript to generate the entire report is likely to ire the semantic HTML police, I thought I’d make this point. Personally I believe this style of report is more useful since it caters for both human and scripted clients.
How it works - ‘view-source’ reveals all :)
This is a new type of HTML5 report that breaks the normal conventional techniques of generating a HTML page. Instead of using a server programming and template language to generate the HTML on the server, the data is simply embedded as JSON, unaltered inside the tag:
<script id="dto" type="text/json">{jsondto}</script>
Because of the browser behaviour of the script tag, you can embed any markup or javascript unescaped. Unless it has none or a ‘javascript’ type attribute, the browser doesn’t execute it letting you to access the contents with:
document.getElementById('dto').innerHTML
From there, javascript invoked just before the closing body tag (i.e. the fastest time to run DOM-altering javascript) which takes the data, builds up a html string and injects the generated markup in the contents of the page.
After the report is rendered, and because JavaScript can :) UX-friendly behaviours are applied to the document allowing the user to sort the table data on each column as well as providing an easy way to take a copy of the JSON datasource.
For what it does, the JavaScript is very terse considering no JavaScript framework was used. In most cases the JSON payload is a lot larger than the entire JavaScript used to render the report :)
Advantages of a strong-typed, code-first web service framework
Although hard to believe, most of the above web service examples were developed before ServiceStack’s CSV and HTML format existed. No code-changes were required in order to take advantage of the new formats, they were automatically available after replacing the ServiceStack.dlls with the latest version (v1.81+)
Being able to generically provide new features like this shows the advantage of ServiceStack’s strong-typed, code-first approach to developing web services that lets you focus on your app-specific logic as you only need to return C#/.NET objects or throw C#/.NET exceptions which gets automatically handled, and hosted on a number of different endpoints in a variety of different formats.
Out of the box REST, RPC and SOAP endpoints types are automatically provided, in JSON, XML, CSV, JSV and now the new HTML report format above. | http://docs.servicestack.net/html5reportformat.html | 2019-01-16T03:21:04 | CC-MAIN-2019-04 | 1547583656665.34 | [] | docs.servicestack.net |
Functions
Component Replication
RPCs
Animation Montage
Blend Spaces
Audio System Overview
Blueprint Editor Reference
Cascade Particle Systems
C++ Only
Input
Programming Basics
Slate UI Framework
The First Person Shooter sample is an example of a PC multiplayer first-person shooter. It includes basic implementations of
weapons and gametypes along with a simple front end menu system.
A complete list of the featured concepts:
The base firing functionality for the weapons - such as ammo management, reloading, and replication - is implemented
in the AShooterWeapon class.
AShooterWeapon
The weapon is switched to its firing state on the local client and server (via RPC calls). DetermineWeaponState()
is called in StartFire()/StopFire() which performs some logic to decide which state the weapon should be in and then
calls SetWeaponState() to place the weapon into the appropriate state. Once in firing state, the local client will
repeatedly call HandleFiring() which, in turn, calls FireWeapon(). Then it updates ammo and calls ServerHandleFiring()
to do the same on the server. The server version is also responsible for notifying remote clients about each fired round via the
BurstCounter variable.
DetermineWeaponState()
StartFire()
StopFire()
SetWeaponState()
HandleFiring()
FireWeapon()
ServerHandleFiring()
BurstCounter
Actions performed on the remote clients are purely cosmetic. Weapon fire is replicated using the BurstCounter
property so that the remote clients can play animations and spawn effects - perform all of the visual aspects of the
weapon firing.
Instant-hit detection is used for fast firing weapons, such as rifles or laser guns. The basic concept is that when the player
fires the weapon, a line check is performed in the direction the weapon is aimed at that instant to see if anything would be hit.
This method allows high precision and works with Actors that do not exist on server side (e.g., cosmetic or torn off). The
local client performs the calculations and informs the server of what was hit. Then, the server verifies the hit and replicates
it if necessary.
In FireWeapon(), the local client does a trace from the camera location to find the first blocking hit under the crosshair
and passes it to ProcessInstantHit(). From there, one of three things happens:
ProcessInstantHit()
The hit is sent to the server for verification (ServerNotifyHit() --> ProcessInstantHit_Confirmed()).
ServerNotifyHit()
ProcessInstantHit_Confirmed()
If the hit Actor does not exist on server, the hit is processed locally (ProcessInstantHit_Confirmed()).
If nothing was hit, the server is notified (ServerNotifyMiss()).
ServerNotifyMiss()
Confirmed hits apply damage to the hit Actors, spawn trail and impact effects, and notify remote clients by setting data
about the hit in the HitNotify variable. Misses just spawn trails and set HitNotify for remote clients, which look
for HitNotify changes and perform the same trace as the local client, spawning trails and impacts as needed.
HitNotify
The instant-hit implementation also features weapon spread. For trace/verification consistency, local client picks a random
seed each time FireWeapon() is executed and passes it in every RPC and HitNotify pack.
Projectile fire is used to simulate weapons that fire rounds which are slower moving, explode on impact, affected by gravity,
etc. These are cases where the outcome of the weapon fire cannot be determined at the exact instant the weapon is fired, such
as launching a grenade. For this type of weapon, an actual physical object, or projectile, is spawned and sent moving in the
direction the weapon is aimed. A hit is determined by the projectile colliding with another object in the world.
For projectile fire, the local client does a trace from camera to check what Actor is under the crosshair
in FireWeapon(), similar to the instant-hit implementation. If the player is aiming at something, it adjusts the fire direction
to hit that spot and calls ServerFireProjectile() on the server to spawn a projectile Actor in the direction the weapon was aimed.
ServerFireProjectile()
When the movement component of the projectile detects a hit on the server, it explodes dealing damage, spawning effects, and
tears off from replication to notify the client about that event. Then, the projectile turns off collision, movement, and visibility
and destroys itself after one second to give client time for replication update.
On clients, explosion effects are replicated via OnRep_Exploded().
OnRep_Exploded()
The player's inventory is an array of AShooterWeapon references stored in the Inventory property of the player's
Pawn (AShooterCharacter). The currently equipped weapon is replicated from the server, and additionally, AShooterCharacter
stores its current weapon locally in CurrentWeapon property, which allows the previous weapon to be un-equipped
when a new weapon is equipped.
Inventory
AShooterCharacter
CurrentWeapon
When the player equips a weapon, the appropriate weapon mesh - first-person for local, third-person for others - is attached
to the Pawn and an animation is played on the weapon. The weapon is switched to the equipping state for the duration of the
animation.
In first-person mode, the Pawn's mesh is hard-attached to the camera so that the arms always appear relative to the player's view.
The downside of this approach is that it means the legs are not visible in the player's view, since the entire mesh rotates
to match the camera yaw and pitch.
The basic flow of the camera update is:
AShooterCamera::UpdateCamera() is executed each tick.
AShooterCamera::UpdateCamera()
APlayerCamera::UpdateCamera() is called to update the camera rotation based on the player's input.
APlayerCamera::UpdateCamera()
AShooterCharacter::OnCameraUpdate() is called to perform the calculations necessary to rotate the first person mesh to match the camera.
AShooterCharacter::OnCameraUpdate()
When the player dies, it switches to a death camera that has a fixed location and rotation set in the AShooterPlayerController::PawnDied()
handler. This function calls AShooterPlayerController::FindDeathCameraSpot(), which cycles through several different
locations and uses the first one not obstructed by the level's geometry.
AShooterPlayerController::PawnDied()
AShooterPlayerController::FindDeathCameraSpot()
Online multiplayer matches are divided into 3 stages:
Warm up
Match play
Game over
When the first player joins the game, the warm up stage begins. This is a short period, marked by a
countdown timer, that gives other players a chance to join. During this period, the players are in spectator mode allowing them
to fly around the map. When the countdown timer expires, StartMatch() is called to restart all of the players and spawn their
Pawns.
StartMatch()
Matches are timed, with the game time being calculated server side in the AShooterGameMode::DefaultTimer() function,
which is executed using a looping timer with a rate equal to the current time dilation that equates to once every game second.
This is stored in the RemainingTime property of the game replication info class (AShooterGRI), which is then replicated
to clients. When the time remaining reaches zero, FinishMatch() is called to end the game session. This notifies all players
the match has ended and disables movement and health.
AShooterGameMode::DefaultTimer()
RemainingTime
AShooterGRI
FinishMatch()
The menu system is created using the Slate UI framework. It consists of menus, menu widgets, and menu items.
Each menu has a single menu widget (SSHooterMenuWidget) that is responsible for layout, internal event handling, and animations
for all of the menu items. Menu items (SSHooterMenuItem) are compound objects that can perform actions and contain any number
of other menu items. These can be as simple as a label or button or "tabs" that contain complete submenus made up of other menu items.
This menu can be operated using a keyboard or controller, but there is only limited mouse support at this time.
SSHooterMenuWidget
SSHooterMenuItem SShooterMenuWidget.h file.
Construct()
AddMenuItem()
AddMenuItemSP()
MenuHelper
SShooterMenuWidget.h.
MenuHistory
Animations are performed using interpolation curves defined in SShooterMenuWidget::SetupAnimations(). Each curve has start time,
duration, and interpolation method. Animations can be played forward and in reverse and their attributes can be animated at a specific time
using GetLerp(), which returns a value from 0.0f to 1.0f. There are several different interpolation methods available, defined in
ECurveEaseFunction::Type in SlateAnimation.h.
SShooterMenuWidget::SetupAnimations()
GetLerp()
ECurveEaseFunction::Type
SlateAnimation.h
The main menu is opened automatically when the game starts by specifying the ShooterEntry map as the default. It loads a special
GameMode, AShooterGameMode, that uses the AShooterPlayerController_Menu class which opens the main menu by creating a new
instance of the FShooterMainMenu class in its PostInitializeComponents() function.
AShooterGameMode
AShooterPlayerController_Menu
FShooterMainMenu
PostInitializeComponents()
The in-game menu is created in the PostInitializeComponents() function of the AShooterPlayerController class, and opened or closed
via the OnToggleInGameMenu() function.
AShooterPlayerController
OnToggleInGameMenu()
The options menu is available as a submenu of both the main menu and in-game menu. The only difference is how changes are applied:
When accessed from the main menu, changes are applied when the player starts the game.
When accessed from the in-game menu, changes are applied immediately when the menu is closed.
The settings in the options menu are saved to GameUserSettings.ini, and loaded automatically at startup.
GameUserSettings.ini | https://docs.unrealengine.com/en-us/Resources/SampleGames/ShooterGame | 2019-01-16T03:59:27 | CC-MAIN-2019-04 | 1547583656665.34 | [] | docs.unrealengine.com |
django-tables2 - An app for creating HTML tables¶
Its features include:
- Any iterable can be a data-source, but special support for Django QuerySets is included.
- The built in UI does not rely on JavaScript.
- Support for automatic table generation based on a Django model.
- Supports custom column functionality via subclassing.
- Pagination.
- Column based table sorting.
- Template tag to enable trivial rendering to HTML.
- Generic view mixin.
About the app:
- Available on pypi
- Tested with python 2.7, 3.4, 3.5, 3.6 and Django 1.11, Travis CI
- Documentation on readthedocs.org
- Bug tracker
Table of contents¶
Getting started
Customization
- Alternative column data
- Alternative column ordering
- Column and row attributes
- Customizing headers and footers
- Swapping the position of columns
- Pagination
- Table Mixins
- Customizing table style
- Query string fields
- Controlling localization
- Class Based Generic Mixins
- Pinned rows
- Filtering data in your table
- Exporting table data
Reference | https://django-tables2.readthedocs.io/en/latest/ | 2019-01-16T04:22:49 | CC-MAIN-2019-04 | 1547583656665.34 | [] | django-tables2.readthedocs.io |
Accessing The Application
AWS Marketplace:
If you have launched Ideata analytics application using AWS marketplace, you will need to wait for the EC2 instance to get started. Once the instance is up and available, you can connect to the running Ideata analytics application by using the following link:
http://<hostname_OR_IP_Address>
You can also find the details of the running Ideata analytics server on AWS Marketplace library at the below link. For the instance running Ideata analytics application, click on "Access Software" link next to Ideata analytics' subscription
When you are accessing the application for the first time, you will be asked to register. To verify your identify, you will be asked to provide the instance ID where the application is running. You can find the instance id using the marketplace library as explained in the previous step.
Once you have signed up, you will receive a email with a verification link in provided email's inbox. You will need to click on the link to complete the verification.
Once verified, you can will be able to login to the application with the login credential you had provided.
On Premise:
Ideata analytics currently only support Linux based servers for deployments.
Once Ideata team has provided you to If you are installing Ideata Analytics on your local machine, navigate to bin folder in your installation directory and run the following command to start Ideata server:
./ideata-analytics.sh start
Once the application has started, navigate to the following link to access the application:
http://<hostname_OR_IP_Address>:8900
In case the default port 8900 is not available, Ideata analytics will try to run the application on port 8901 and 8902 as next options.
Once you login to the application using the username and password provided by Ideata analytics team, you will be asked to provide the license key to your server. Please fill the license key provided by Ideata team to proceed.
Stop server:
To stop Ideata analytics server, use the following command:
./ideata-analytics.sh stop
Azure Marketplace:
Google cloud Launcher:
Free trial on Orbitera: | https://docs.ideata-analytics.com/installation/initial-setup-and-application-access.html | 2019-01-16T04:44:10 | CC-MAIN-2019-04 | 1547583656665.34 | [] | docs.ideata-analytics.com |
Guide¶
The PyFilesytem interface simplifies most aspects of working with files and directories. This guide covers what you need to know about working with FS objects.
Why use PyFilesystem?¶
If you are comfortable using the Python standard library, you may be wondering; why learn another API for working with files?
The PyFilesystem API is generally simpler than the
os and
io modules – there are fewer edge cases and less ways to shoot yourself in the foot. This may be reason alone to use it, but there are other compelling reasons you should use
import fs for even straightforward filesystem code.
The abstraction offered by FS objects means that you can write code that is agnostic to where your files are physically located. For instance, if you wrote a function that searches a directory for duplicates files, it will work unaltered with a directory on your hard-drive, or in a zip file, on an FTP server, on Amazon S3, etc.
As long as an FS object exists for your chosen filesystem (or any data store that resembles a filesystem), you can use the same API. This means that you can defer the decision regarding where you store data to later. If you decide to store configuration in the cloud, it could be a single line change and not a major refactor.
PyFilesystem can also be beneficial for unit-testing; by swapping the OS filesystem with an in-memory filesystem, you can write tests without having to manage (or mock) file IO. And you can be sure that your code will work on Linux, MacOS, and Windows.
Opening Filesystems¶
There are two ways you can open a filesystem. The first and most natural way is to import the appropriate filesystem class and construct it.
Here’s how you would open a
OSFS (Operating System File System), which maps to the files and directories of your hard-drive:
>>> from fs.osfs import OSFS >>> home_fs = OSFS("~/")
This constructs an FS object which manages the files and directories under a given system path. In this case,
'~/', which is a shortcut for your home directory.
Here’s how you would list the files/directories in your home directory:
>>> home_fs.listdir('/') ['world domination.doc', 'paella-recipe.txt', 'jokes.txt', 'projects']
Notice that the parameter to
listdir is a single forward slash, indicating that we want to list the root of the filesystem. This is because from the point of view of
home_fs, the root is the directory we used to construct the
OSFS.
Also note that it is a forward slash, even on Windows. This is because FS paths are in a consistent format regardless of the platform. Details such as the separator and encoding are abstracted away. See Paths for details.
Other filesystems interfaces may have other requirements for their constructor. For instance, here is how you would open a FTP filesystem:
>>> from ftpfs import FTPFS >>> debian_fs = FTPFS('') >>> debian_fs.listdir('/') ['debian-archive', 'debian-backports', 'debian', 'pub', 'robots.txt']
The second, and more general way of opening filesystems objects, is via an opener which opens a filesystem from a URL-like syntax. Here’s an alternative way of opening your home directory:
>>> from fs import open_fs >>> home_fs = open_fs('osfs://~/') >>> home_fs.listdir('/') ['world domination.doc', 'paella-recipe.txt', 'jokes.txt', 'projects']
The opener system is particularly useful when you want to store the physical location of your application’s files in a configuration file.
If you don’t specify the protocol in the FS URL, then PyFilesystem will assume you want a OSFS relative from the current working directory. So the following would be an equivalent way of opening your home directory:
>>> from fs import open_fs >>> home_fs = open_fs('.') >>> home_fs.listdir('/') ['world domination.doc', 'paella-recipe.txt', 'jokes.txt', 'projects']
Tree Printing¶
Calling
tree() on a FS object will print an ascii tree view of your filesystem. Here’s an example:
>>> from fs import open_fs >>> my_fs = open_fs('.') >>> my_fs.tree() ├── locale │ └── readme.txt ├── logic │ ├── content.xml │ ├── data.xml │ ├── mountpoints.xml │ └── readme.txt ├── lib.ini └── readme.txt
This can be a useful debugging aid!
Closing¶
FS objects have a
close() methd which will perform any required clean-up actions. For many filesystems (notably
OSFS), the
close method does very little. Other filesystems may only finalize files or release resources once
close() is called.
You can call
close explicitly once you are finished using a filesystem. For example:
>>> home_fs = open_fs('osfs://~/') >>> home_fs.writetext('reminder.txt', 'buy coffee') >>> home_fs.close()
If you use FS objects as a context manager,
close will be called automatically. The following is equivalent to the previous example:
>>> with open_fs('osfs://~/') as home_fs: ... home_fs.writetext('reminder.txt', 'buy coffee')
Using FS objects as a context manager is recommended as it will ensure every FS is closed.
Directory Information¶
Filesystem objects have a
listdir() method which is similar to
os.listdir; it takes a path to a directory and returns a list of file names. Here’s an example:
>>> home_fs.listdir('/projects') ['fs', 'moya', 'README.md']
An alternative method exists for listing directories;
scandir() returns an iterable of Resource Info objects. Here’s an example:
>>> directory = list(home_fs.scandir('/projects')) >>> directory [<dir 'fs'>, <dir 'moya'>, <file 'README.md'>]
Info objects have a number of advantages over just a filename. For instance you can tell if an info object references a file or a directory with the
is_dir attribute, without an additional system call. Info objects may also contain information such as size, modified time, etc. if you request it in the
namespaces parameter.
Note
The reason that
scandir returns an iterable rather than a list, is that it can be more efficient to retrieve directory information in chunks if the directory is very large, or if the information must be retrieved over a network.
Additionally, FS objects have a
filterdir() method which extends
scandir with the ability to filter directory contents by wildcard(s). Here’s how you might find all the Python files in a directory:
>>> code_fs = OSFS('~/projects/src') >>> directory = list(code_fs.filterdir('/', files=['*.py']))
By default, the resource information objects returned by
scandir and
listdir will contain only the file name and the
is_dir flag. You can request additional information with the
namespaces parameter. Here’s how you can request additional details (such as file size and file modified times):
>>> directory = code_fs.filterdir('/', files=['*.py'], namespaces=['details'])
This will add a
size and
modified property (and others) to the resource info objects. Which makes code such as this work:
>>> sum(info.size for info in directory)
See Resource Info for more information.
Sub Directories¶
PyFilesystem has no notion of a current working directory, so you won’t find a
chdir method on FS objects. Fortunately you won’t miss it; working with sub-directories is a breeze with PyFilesystem.
You can always specify a directory with methods which accept a path. For instance,
home_fs.listdir('/projects') would get the directory listing for the
projects directory. Alternatively, you can call
opendir() which returns a new FS object for the sub-directory.
For example, here’s how you could list the directory contents of a
projects folder in your home directory:
>>> home_fs = open_fs('~/') >>> projects_fs = home_fs.opendir('/projects') >>> projects_fs.listdir('/') ['fs', 'moya', 'README.md']
When you call
opendir, the FS object returns an instance of a
SubFS. If you call any of the methods on a
SubFS object, it will be as though you called the same method on the parent filesystem with a path relative to the sub-directory.
The
makedir and
makedirs methods also return
SubFS objects for the newly create directory. Here’s how you might create a new directory in
~/projects and initialize it with a couple of files:
>>> home_fs = open_fs('~/') >>> game_fs = home_fs.makedirs('projects/game') >>> game_fs.touch('__init__.py') >>> game_fs.writetext('README.md', "Tetris clone") >>> game_fs.listdir('/') ['__init__.py', 'README.md']
Working with
SubFS objects means that you can generally avoid writing much path manipulation code, which tends to be error prone.
Working with Files¶
You can open a file from a FS object with
open(), which is very similar to
io.open in the standard library. Here’s how you might open a file called “reminder.txt” in your home directory:
>>> with open_fs('~/') as home_fs: ... with home_fs.open('reminder.txt') as reminder_file: ... print(reminder_file.read()) buy coffee
In the case of a
OSFS, a standard file-like object will be returned. Other filesystems may return a different object supporting the same methods. For instance,
MemoryFS will return a
io.BytesIO object.
PyFilesystem also offers a number of shortcuts for common file related operations. For instance,
readbytes() will return the file contents as a bytes, and
readtext() will read unicode text. These methods is generally preferable to explicitly opening files, as the FS object may have an optimized implementation.
Other shortcut methods are
download(),
upload(),
writebytes(),
writetext().
Walking¶
Often you will need to scan the files in a given directory, and any sub-directories. This is known as walking the filesystem.
Here’s how you would print the paths to all your Python files in your home directory:
>>> from fs import open_fs >>> home_fs = open_fs('~/') >>> for path in home_fs.walk.files(filter=['*.py']): ... print(path)
The
walk attribute on FS objects is instance of a
BoundWalker, which should be able to handle most directory walking requirements.
See Walking for more information on walking directories.
Globbing¶
Closely related to walking a filesystem is globbing, which is a slightly higher level way of scanning filesystems. Paths can be filtered by a glob pattern, which is similar to a wildcard (such as
*.py), but can match multiple levels of a directory structure.
Here’s an example of globbing, which removes all the
.pyc files in your project directory:
>>> from fs import open_fs >>> open_fs('~/project').glob('**/*.pyc').remove() 62
See Globbing for more information.
Moving and Copying¶
You can move and copy file contents with
move() and
copy() methods, and the equivalent
movedir() and
copydir() methods which operate on directories rather than files.
These move and copy methods are optimized where possible, and depending on the implementation, they may be more performant than reading and writing files.
To move and/or copy files between filesystems (as apposed to within the same filesystem), use the
move and
copy modules. The methods in these modules accept both FS objects and FS URLS. For instance, the following will compress the contents of your projects folder:
>>> from fs.copy import copy_fs >>> copy_fs('~/projects', 'zip://projects.zip')
Which is the equivalent to this, more verbose, code:
>>> from fs.copy import copy_fs >>> from fs.osfs import OSFS >>> from fs.zipfs import ZipFS >>> copy_fs(OSFS('~/projects'), ZipFS('projects.zip'))
The
copy_fs() and
copy_dir() functions also accept a
Walker parameter, which can you use to filter the files that will be copied. For instance, if you only wanted back up your python files, you could use something like this:
>>> from fs.copy import copy_fs >>> from fs.walk import Walker >>> copy_fs('~/projects', 'zip://projects.zip', walker=Walker(filter=['*.py']))
An alternative to copying is mirroring, which will copy a filesystem them keep it up to date by copying only changed files / directories. See
mirror(). | https://pyfilesystem2.readthedocs.io/en/latest/guide.html | 2019-01-16T03:43:34 | CC-MAIN-2019-04 | 1547583656665.34 | [] | pyfilesystem2.readthedocs.io |
Tasks¶
Tasks model the work carried out by one or more personas. This work is described in environment-specific narrative scenarios, which illustrate how the system is used to augment the work activity.
Adding, updating, or deleting a task¶
- Click on the UX/Tasks menu to open the Tasks table, and click on the Add button to open the Task form.
- Enter a task name, and the objective of carrying out the task.
- Click on the Add button in the environment table, and select an environment to situate the task in. This will add the new environment to the environment list.
- In the Dependencies folder, enter any dependencies needing to hold before this task can take place.
- In the Narrative folder, enter the task scenario. This narrative should describe how the persona (or personas) carry out the task to achieve the pre-defined objective.
- In the Consequences folder, enter any consequences (positive or negative) associated with this task.
- In the Benefits folder, enter the value that completing this task will bring.
- In the Participants folder, click on the Add button to associate a persona with this task. In the Participating Persona form, select the person, the task duration (seconds, minutes, hours or longer), frequency (hourly or more, daily-weekly, monthly or less),demands (none, low, medium, high), and goal conflict (none, low, medium, high). The values for low, medium, and high should be agreed with participants before hand.
- If any aspect of the task concerns one or more assets, then these can be added to the concern list. Adding an asset concern causes a concern comment to be associated to the asset in the asset model. If the task task.
- Existing tasks can be modified by clicking on the task in the Tasks table, making the necessary changes, and clicking on the Update button.
- To delete a task, select the task to delete in the Tasks dialog box, and click the Delete button. If any artifacts are dependent on this task then a dialog box stating these dependencies are displayed. The user has the option of selecting Yes to remove the task dependencies and the task itself, or No to cancel the deletion.
Task traceability¶
Tasks can be manually traced to certain artifacts via the Tasks table. A task may contribute to a vulnerability, or be supported by a requirement or use case. To add a traceability link, right click on the task name, and select Supported By or Contributes to. This opens the Traceability Editor. From this editor, select the object on the right hand side of the editor to trace to and click the Add button to add this link.
Manual traceability links can be removed by selecting the Options/Traceability menu option, to open the Traceability Relations form. In this form, manual traceability relations be removed from specific environments.
Visualising tasks¶
Task models show the contribution that behavioural concepts in security and usability can have on each other. These models are centred around tasks, show the personas that interact with them, and indicate how threats or vulnerabilities might impact them. These models also show the assets used in the tasks or threatened/exploited by misuse cases. If traceability associations have been added between tasks and use cases, then these links are also shown. Finally, if use case actors are also roles associated with personas in visible tasks, then the relationship between the roles and personas is also shown. This is useful when putting use cases and their actors in context in tasks.
Task models can be viewed by selecting the Models/Task menu, and selecting the environment to view the model for.
By changing the environment name in the environment combo box, the task model for a different environment can be viewed. The model can also be filtered by task or misuse case name.
By clicking on a model element, information about that artifact can be viewed.
For details on how to print task models as SVG files, see Generating Documentation. | https://cairis.readthedocs.io/en/latest/tasks.html | 2019-01-16T04:24:46 | CC-MAIN-2019-04 | 1547583656665.34 | [array(['_images/TaskDialog.jpg', 'Task Dialog'], dtype=object)
array(['_images/AddTaskPersona.jpg', 'Participating Persona form'],
dtype=object)
array(['_images/TraceabilityEditor.jpg', 'Traceability Editor'],
dtype=object)
array(['_images/TaskModelKey.jpg', 'Task Model key'], dtype=object)
array(['_images/TaskModel.jpg', 'Task Model'], dtype=object)] | cairis.readthedocs.io |
Creating new functions
Overview
Creating new routes is super easy!
Simply open your app in Begin, and in the left nav find the section of the type of route (or event) you want to create (i.e.
HTML Routes). Then click its corresponding
Add new route button.
All routes begin with
/, and can include letters, numbers, and slashes, up to 25 characters.
After clicking
Add Route, the following things happen automatically:
- The new route is saved to your Begin app's configuration
- Infrastructure is provisioned to make the route publicly available
- A basic route handler is committed to your project in the
src/html/folder
- A build is kicked off, and, if green, is deployed to
staging(but not
production)
✨ Tip: It's possible to have multiple HTTP methods respond from the same URL path! For example:
GET /contact-usand
POST /contact-usis totally valid.
Using URL parameters to create dynamic paths
It's possible to build dynamic paths using Express-style URL parameters, like:
GET /shop/:item
URL parameters are passed to your route via the
req.params object.
For example, the route used to serve this page is
GET /:lang/:cat/:doc. When a client requests the path
/en/routes-functions/creating-new-routes/, the function handling this route receives a
req object containing:
{ params: { lang: 'en', cat: 'routes-functions', doc: 'creating-new-routes' } }
More about routes in Begin
Want to learn a little more about how routes in Begin are born?
As it happens, the AWS infrastructure needed to marshal your app's requests and responses (API Gateway) is separate and different from the cloud compute that runs your code (Lambda).
This architecture is capable of enormous scale, but can be quite complex to manage. Fortunately, this is one of the many things Begin manages for you behind the scenes.
Routes in Begin apps generally consist of the following parts:
- A publicly available URL route (represented by a path and HTTP method) in two separate, fully isolated AWS API Gateways – one for
staging, one for
production– that call to...
- Your route handler code, which runs in two separate, fully isolated AWS Lambdas – again, one for
staging, one for
production– which support sessions out of the box via...
- Your app's session and data stores.
Edit this page on GitHub | https://docs.begin.com/en/functions/creating-new-functions/ | 2019-01-16T04:48:47 | CC-MAIN-2019-04 | 1547583656665.34 | [] | docs.begin.com |
The default WooCommerce fields cannot be deleted, but only disabled. You can still edit them by clicking on the “Edit” button you will find in each field.
In this form you will find:
- “Label” – Chose the title related to a specific field.
- “Placeholder” – Chose the text customer are going to read within the field.
- “Tooltip” – (This field is only going to appear if you have chosen to “Enable Tooltip” in the “Settings” page). Chose the text that will be displayed in the field tooltip.
- “Position” – Choose the position of the field in respect to the line in which it’s inserted. “First” to set it on the left, “last” to set it on the right, “Wide” if you want it to take up the whole line space.
- “Class” – Allows you to set graphical CSS rules related to the field.
- “Label Class” – Allows you to set graphical CSS rules related to the label.
- “Validation” – Chose whether the field requires any kind of validation since it may require information such as phone number, VAT, country, email or zip code.
At the bottom of this panel you will find two checkboxes:
- “Required” – Set a field as required.
- “Clear Row” – In order for the fields to work correctly, each time a field is entered as First (in Position) and followed by a Last field, this second field needs to have the Clear Row option checked. | https://docs.yithemes.com/yith-woocommerce-checkout-manager/settings/editing-existing-fields/ | 2019-01-16T04:49:29 | CC-MAIN-2019-04 | 1547583656665.34 | [] | docs.yithemes.com |
Customizing a Xamarin.Forms Map
Xamarin.Forms.Maps provides a cross-platform abstraction for displaying maps that use the native map APIs on each platform, to provide a fast and familiar map experience for users.
Customizing a Map Pin
This article explains how to create a custom renderer for the
Map control, which displays a native map with a customized pin and a customized view of the pin data on each platform.
Highlighting a Circular Area on a Map
This article explains how to add a circular overlay to a map, to highlight a circular area of the map.
Highlighting a Region on a Map
This article explains how to add a polygon overlay to a map, to highlight a region on the map. Polygons are a closed shape and have their interiors filled in.
Highlighting a Route on a Map
This article explains how to add a polyline overlay to a map. A polyline overlay is a series of connected line segments that are typically used to show a route on a map, or form any shape that's required. | https://docs.microsoft.com/en-us/xamarin/xamarin-forms/app-fundamentals/custom-renderer/map/ | 2019-01-16T03:24:37 | CC-MAIN-2019-04 | 1547583656665.34 | [] | docs.microsoft.com |
Customizing the Conversion Cats beyond just the theme options (i.e. creating a child theme)
This is meant for the more advanced people out there (or those of you hiring a developer to help create some customizations). Basically, if you want to custom the Conversion Cats theme beyond what we have in the theme settings, you should use a child theme.
Because we love you, we've created a basic child theme that you can use as a starting point and then go as crazy as you want to from there ;)
Download the Conversion Cats child theme here.
Enjoy! | https://docs.conversioncats.com/article/144-customizing-the-conversion-cats-beyond-just-the-theme-options-i-e-creating-a-child-theme | 2019-01-16T03:34:17 | CC-MAIN-2019-04 | 1547583656665.34 | [] | docs.conversioncats.com |
Install GitHub App for elmah.io
Generate Personal Access Token
In order to allow elmah.io to create issues on GitHub, you will need to generate a Personal Access Token. To do so, log into GitHub and go to the New personal access token page.
Input a token description, click the Generate token button and copy the generated token (colored with a green background).
Install the GitHub App on elmah.io
Log into elmah.io and go to the log settings. Click the Apps tab. Locate the GitHub app and click the Install button:
Paste the token copied in the previous step into the Token textbox. In the Owner textbox, input the name of the user or organization owning the repository you want to create issues in. In the Repository textbox input the name of the repository.
Click Save and the app is added to your log. When new errors are logged, issues are automatically created in the configured GitHub | https://docs.elmah.io/elmah-io-apps-github/ | 2019-01-16T04:33:36 | CC-MAIN-2019-04 | 1547583656665.34 | [array(['../images/apps/github/generate_token.png', 'OAuth Tokens Page'],
dtype=object)
array(['../images/apps/github/install_github.png', 'Install GitHub App'],
dtype=object) ] | docs.elmah.io |
Subcommittee on Commerce, Manufacturing, and Trade (Committee on Energy and Commerce)
Subcommittee on Commerce, Manufacturing, and Trade (Committee on Energy and Commerce)
Wednesday, November 13, 2013 (10:00 AM)
2322 RHOB Washington, D.C.
Ms. Donna Benefield President, International Walking Horse Association
Dr. John Bennett Doctor of Veterinary Medicine, Equine Services
Ms. Teresa Bippen President, Friends of Sound Horses
Dr. Ron DeHaven Executive Vice President & CEO, American Veterinary Medicine
Mr. James Hickey, Jr. President, American Horse Council
Mr. Marty Irby International Director & Former President, Tenneessee Walking Horse Breeders' & Exhibitors'
The Honorable Julius Johnson Commissioner, Tennessee Department of Agriculture
First Published:
November 6, 2013 at 03:16 PM
Last Updated:
January 28, 2015 at 12:02 AM | http://docs.house.gov/Committee/Calendar/ByEvent.aspx?EventID=101469 | 2016-10-21T11:13:47 | CC-MAIN-2016-44 | 1476988717963.49 | [] | docs.house.gov |
Use XML schemas
Starting with version 4.1, Splunk provides RelaxNG formatted schemas for the view XML, including the simplified dashboards, simplified form searches, advanced dashboards and views. Also, there are schemas available for the navigation XML, the setup XML and manager pages XML. You can find all of these schemas off the info endpoint:
These schema files are in RelaxNG compact syntax (
*.rnc). But you can convert to other formats with Trang. Trang is an open source tool that lets you convert between different XML schema formats.
Here's an example of using Trang to convert from Relax to RelaxNG
java -jar trang.jar -O rng all.rnc all.rng
Files
Here's a descriptive list of all the files available from the info endpoint:
ValidationSplunk provides a validation script,
validate_all.py, located at:
$SPLUNK_HOME/share/splunk/search_mrsparkle/exposed/schema/
This script inspects the UI XML files present here in Splunk installation:
$SPLUNK_HOME/etc/
To validate your XML files, first navigate to the location listed below and then run the script:
cd $SPLUNK_HOME/share/splunk/search_mrsparkle/exposed/schema/ $SPLUNK_HOME/bin/splunk cmd python validate_all.py
This documentation applies to the following versions of Splunk® Enterprise: 4.3, 4.3.1, 4.3.2, 4.3.3, 4.3.4, 4.3.5, 4.3.6, 4.3.7
Feedback submitted, thanks! | http://docs.splunk.com/Documentation/Splunk/4.3/Developer/AdvancedSchemas | 2016-10-21T11:25:57 | CC-MAIN-2016-44 | 1476988717963.49 | [array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)] | docs.splunk.com |
Table of Contents
mactrack
Download
Latest: mactrack-v2.9-1.tgz
Archive: mactrack-2.6-1.tgz
Please note that this version is stable. However, it does not come pre-installed with a Device Type Database. If upgrading from a prior release, the Device Type database will be upgraded as well. PIA 2.x
Installation
- Simply place in the plugins directory and install like any other PIA 2.x plugin.
Usage
Additional Help?
- If you need additional help, please goto
ChangeLog
— 2.9 —
- bug#0001799: Select all checkbox on Mac Addresses tab not working
- bug#0001723: Can't import devices
- bug#0001751: Multiple ignore ports for Extreme Network switch
- bug#0001777: MacTrack MacAuth Filters not working
- bug#0001779: Site IP Range Report View
- bug#0001841: Mactrack ARP table unreadable
- bug: Saving an edited site redirects to blank page
— 2.8 —
- compat: Allow proper navigation text generation
- bug: Guest user could not access site
— 2.7 —
- bug#0001742: PHP error () in lib/mactrack_extreme.php
- bug#0001743: DEBUG: Authorized MAC ID: empty
- bug: Correcting SNMPv3 and Cisco support
- bug: Importing OUIDB broken with redefine function error
- bug: Exporting Devices from the Console did not work
- bug: Lastchange was not displaying correctly
— 2.6 —
- feature:#0001718: New get_arp_table function for Extreme Networks devices
- bug#0001665: New line missing when printing Cisco device stats
- bug#0001668: Mactrack sometimes shows “No results found” when results are shown
- bug#0001670: spikekill and mactrack JS functions clash stateChanged and getfromserver
- bug#0001677: index initialization errors (courtesy toe_cutter)
- bug#0001677: unintended overwrite of non-synced devices (courtesy toe_cutter)
- bug#0001717: Ip addresses range report a false value
- bug: Undefined index when paging through interfaces
- bug: Graph View still being called for one class of graphs incorrectly
- bug: With Scan Date set to 'All' rows counter was not correct for matches
- bug: When viewing IP's, device filter not operable
— 2.5 —
- bug#0001677: Undefined indexes
- bug: Undefined index reports in mactrak_view_graphs.php
- bug: Small visual issue with Site Details
- bug: Interfaces table not being created during upgrade
- feature: Portname search filter courtesy KAA and gthe
— 2.4 —
- bug: 0001546: mactrack_view_devices does not display the proper page numbers
- bug: 0001545: mactrack_scanner does not complete successfully for some hosts
- bug: 0001547: mactrack_view_sites does not show page numbers
- bug: 0001548: mactrack_view_macs various issues
- bug: IEEE Database import runs out of memory
- bug: Correct uninitialized error in mactrack_hp.php
- bug: Resolved issue where Vendor Mac was lost during resolver process
- bug: fix syntax of cacti_snmp_* calls
- feature: 0001638: Allow Interface Data to Be “Scaned” on Demand from the WebUI Similar to Scanner
- feature: 0001637: Enable Site Level Scanning Through the UI and Using Ajax
- feature: Adding support for ArpWatch
- feature: Adding Juniper Support
- feature: Support Foundry Dual Mode Ports
- feature: Implement MacWatch in code. E-Mailing now.
- feature: Adding significant content to the mac_track_interfaces table
- feature: Implement MacAuth functionality and periodic reports
- feature: Implement Interfaces tab to MacTrack
- feature: Aggregated ports patch from Gthe!
- feature: Linux and DLink Scanners from Gthe!
- feature: add all device SNMP options to mac_track_devices
- feature: add full SNMP V3 support
- feature: deprecate readstrings; maintain “SNMP Option Sets” in favour of them
- feature: some Enterasys scanning functions (N7, C2, C3)
- feature: import cacti devices into mac_track_devices (new action hook for host.php)
- feature: optionally sync SNMP settings of mac_track_devices and cacti device, governed by a config setting (defaulting to “none”) to allow either a) mactrack → host (when scanning) or b) host → mactrack updates (when manually updating the host)
- feature: allow for “connecting” existing mactrack devices to cacti devices (via hostname, new action for mactrack_devices.php)
- feature: copy snmp settings from cacti devices to mactrack devices (via host_id connection, new action for mactrack_devices.php)
- feature: Allow mapping of Cacti graphs to MacTrack
- feature: Add columns for AutoNegotiation - Implementation TBD
— 1.1 —
- First Official Release
— 0.1 —
- Initial Release! Oh, so long ago
ToDo's
- Add AutoNegotiation Information to the Interfaces Table
- Complete the SSH/Telnet Integration using the Remote Plugin
- Track devices on Cisco etherchannel interfaces
- Allow User to Select and Interface and See Additional Details
- Expose Duplex Through the UI
- For device types, define XML templates that define a device monitoring dashboard portlet layout
- Allow Enabling/Disabling of Ports for Authorized Users | http://docs.cacti.net/plugin:mactrack | 2016-10-21T11:09:44 | CC-MAIN-2016-44 | 1476988717963.49 | [] | docs.cacti.net |
NOTICE: Our WHMCS Addons are discontinued and not supported anymore.
Most of the addons are now released for free in github -
You can download them from GitHub and conribute your changes ! :)
WHMCS Resellers Auto Name Servers :: Changelog
1.0.108/12/2015
- Case 623 - WHMCS 6 compatibility issues
1.0.008/12/2015
- Case 622 - Initial Release | https://docs.jetapps.com/whmcs-resellers-auto-name-servers-changelog | 2018-05-20T15:44:38 | CC-MAIN-2018-22 | 1526794863626.14 | [] | docs.jetapps.com |
For virtual machines provisioned by using cloning or Linux kickstart/autoYaST provisioning and cloud machines provisioned in Red Hat OpenStack by using kickstart, it is possible to assign static IP addresses from a predefined range.
By default, vRealize Automation uses Dynamic Host Configuration Protocol (DHCP) to assign IP addresses to provisioned machines.
Fabric administrators can create network profiles to define a range of static IP addresses that can be assigned to machines. Network profiles can be assigned to specific network paths on a reservation. Any cloud machine or virtual machine provisioned by cloning or kickstart/autoYaST that is attached to a network path that has an associated network profile is provisioned using static IP address assignment.
Tenant administrators or business group managers can also assign network profiles to blueprints by using the custom property VirtualMachine.NetworkN.ProfileName. If a network profile is specified in both the blueprint and the reservation, the profile specified in the blueprint takes precedence.
When a machine that has a static IP address is destroyed, its IP address is made available for use by other machines. The process to reclaim static IP addresses runs every 30 minutes, so unused addresses may not be available immediately after the machines using them are destroyed. If there are not available IP addresses in the network profile, machines cannot be provisioned with static IP assignment on the associated network path. | https://docs.vmware.com/en/vRealize-Automation/6.2/com.vmware.vra.iaas.cloud.doc/GUID-CBB6810F-4C3E-4492-91E1-012BE371D623.html | 2018-05-20T15:33:02 | CC-MAIN-2018-22 | 1526794863626.14 | [] | docs.vmware.com |
How to: Uninstall the Stand-Alone Version of Report Builder (Report Builder 3.0)
You can uninstall the stand-alone version of Report Builder from the control panel or the command line. You cannot uninstall the ClickOnce version of Report Builder without uninstalling SQL Server 2008 R2 Reporting Services.
Uninstalling Report Builder from the command line uses syntax that is identical to the syntax you use to install Report Builder, except you use the /x option instead of the /i option. Command lines for uninstalling can also include the /quiet option and other standard options. If the Report Builder Windows Installer Package (ReportBuilder3_x86.msi) has been removed, you cannot use the command line easily to uninstall Report Builder. To learn more about how you might be able to remove Report Builder by using its GUID, see the documentation for the msiexec program in the msdn library.
If folders used by Report Builder include custom files, the folders and the files are preserved when Report Builder is removed. Only the Report Builder files are removed.
To uninstall Report Builder from the control panel
On the Start menu, click Control Panel.
In the Control Panel, click Programs and Features.
Locate Microsoft SQL Server 2008 R2 Report Builder 3.0 in the Name list and click it.
Click Uninstall.
If prompted to confirm the uninstall of Report Builder, click Yes.
To uninstall Report Builder from the command line
On the Start menu, click Run.
In the Open text box, type cmd.
In the command prompt window, navigate to the folder with ReportBuilder3_x86.msi.
Type a basic command line such as the following:
msiexec /x ReportBuilder3_x86.msi /quiet /l*v install.log
If you can to include logging, use a command line such as the following:
msiexec /x ReportBuilder3_x86.msi /quiet /l*v c:\junk\install.log
- Press Enter. | https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2008-r2/ff519544(v=sql.105) | 2018-05-20T16:25:28 | CC-MAIN-2018-22 | 1526794863626.14 | [] | docs.microsoft.com |
Decision Insight 20180319 How to fetch information on demand using an action mashlet Context Sometimes, in Decision Insight, you might have some information that does not need to be updated automatically all the time. For this type of information, you can configure an action mashlet so that some information is retrieved on demand only. In a dashboard pagelet, the action mashlet is a button you can click at will whenever you want the data on the pagelet to be refreshed. Note: This mashlet has been designed to fetch information on a specified instance. For example, you have a Batch entity used to retrieve additional data from a remote system. Configure an action mashlet to retrieve that remote data on demand. Action mashlet configuration To add an action mashlet to a dashboard, follow these steps. Step Action 1 Navigate to a dashboard where you want to add an action mashlet. 2 Put the dashboard in Edit mode. 3 Click the Edit icon for a pagelet. 4 In the values area, click the Add value button. The Edit value pop up is displayed. 5 Select the Action mashlet. 6 Click the Select attribute link and Select the attribute you need. Based on our example, click the Select hyperlink for the instance attribute the Batch entity. 7 To format the appearance of your action mashlet button, on the left menu, click Format. You can define the Label, that is the text under the button, and the Action, that is the text on the button itself. 8 To trigger the correct action when you click your button on the dashboard, the button requires an alias. To configure an alias, on the left menu, click Actions. 9 In the action alias field, enter, for example, fetch_batch. Click the copy icon to the right of the field so you can reuse that alias when specifying a route for the remote data. Route configuration Define a route based on the alias for the action mashlet, that is using <from uri="action:fetch_batch"/> : <routes xmlns="" xmlns: <route> <from uri="tnd-action:fetch_batch"/> <!-- Use action properties to fetch informations --> <setHeader headerName="actionLabel"> <simple>Action '${body[action]}' on '${body[instance]}' from '${body[dashboard]}' by '${body[user]}' at VT '${body[validTime]}' TT '${body[transactionTime]}' </simple> </setHeader> <doTry> ... <!-- Set the body at the end to some string. It will be displayed in the UI --> <setBody><simple>${header.actionLabel} Done!</simple></setBody> <doCatch> <exception>java.lang.Exception</exception> <throwException exceptionType="java.lang.RuntimeException" message="Unable to fetch data from webservice because..." /> </doCatch> </doTry> </route> </routes> When the action is invoked, the route body is set with the following property map: Property Description action The action alias instance The UUID of the instance input selected in the mashlet validTime The instance creation valid time transactionTime The instance creation transaction time dashboard The dashboard name where the action is invoked from user The user name that invoked the action Notes: Only one route per action alias can be started at a time If the route throws an error and there is no try/catch, the action is reported as failed with a generic message, the message of the exception in the route above. If the deployment is stopped, all cached statuses are lost. The dashboard is not refreshed automatically when an action is completed. It is refreshed based on its own rhythm or you can refresh it manually. Related Links | https://docs.axway.com/bundle/DecisionInsight_20180319_allOS_en_HTML5/page/how_to_fetch_information_on_demand_using_an_action_mashlet.html | 2018-05-20T15:48:30 | CC-MAIN-2018-22 | 1526794863626.14 | [] | docs.axway.com |
Add a Web server to the farm
Applies To: Office SharePoint Server 2007
This Office product will reach end of support on October 10, 2017. To stay supported, you will need to upgrade. For more information, see , Resources to help you upgrade your Office 2007 servers and clients.
Topic Last Modified: 2008-08-04.
Important
You must be a member of the Farm Administrators SharePoint group and the Administrators group on the local server computer to complete these procedures.
Add a Web server by using the user interface
From the product disc or file share that contains the SharePoint Products and Technologies installation files, run Setup., on the drop-down list, click the configuration database..
Add a front-end Web server to the farm by using the command line
Open a command prompt window, and then change to the disk or file share that contains the SharePoint Products and Technologies installation files.
Type the following command, and then press ENTER:
setup.exe /config \\<server name>\<folder name>\<configuration file name>
Note
When you run Setup.exe, you include a space after the command, followed by a forward slash (/) and the name of the switch, and sometimes followed by another space and one or more parameters. A setup configuration file is necessary when installing SharePoint Products and Technologies from the command line. You may need to choose from several different configuration files depending on the type of server you are installing. Be sure to choose the configuration file that will result in a front-end Web server. For more information about Setup.exe parameters, see Setup.exe command-line reference (Office SharePoint Server). that you want to connect to.
See Also
Concepts
Manage | https://docs.microsoft.com/en-us/previous-versions/office/sharepoint-2007-products-and-technologies/cc261752(v=office.12) | 2018-05-20T15:48:36 | CC-MAIN-2018-22 | 1526794863626.14 | [] | docs.microsoft.com |
Default date and time fields Certain time fields are provided by default to store particular date and time fields. Global timestamp fieldsAll records inherit certain time stamp fields from the Global [global] table.Task fields for measuring work timeUse default task fields to measure progress and resolution for certain records.Planned task time fieldsRelated TasksConfigure a formRelated ReferenceDate and time fieldsGlobal date and time field format | https://docs.servicenow.com/bundle/geneva-servicenow-platform/page/administer/time/concept/c_DefaultDateAndTimeFields.html | 2018-05-20T15:39:18 | CC-MAIN-2018-22 | 1526794863626.14 | [] | docs.servicenow.com |
Project Workspace Site Relinker tool
This Office product will reach end of support on October 10, 2017. To stay supported, you will need to upgrade. For more information, see , Resources to help you upgrade your Office 2007 servers and clients.
Topic Last Modified: 2016-11-14 (\&displaylang=e) the Project Server 2007 PRK from the Microsoft Download Center.
Requirements
The Project Workspace Site Relinker tool has the following usage requirements:
Microsoft Windows XP, Windows Vista, or Windows Server 2003.
Microsoft .NET Framework 2.x or 3.x.
Relink project workspace sites.
Download this book
This topic is included in the following downloadable book for easier reading and printing:
See the full list of available books at Downloadable content for Project Server 2007. | https://docs.microsoft.com/en-us/previous-versions/office/project-server-2007/cc197498(v=office.12) | 2018-05-20T16:45:38 | CC-MAIN-2018-22 | 1526794863626.14 | [] | docs.microsoft.com |
Use consistent timezones in queries When you are performing time/date-bounded queries, ensure that both the client and users are set to the same timezone in order to avoid time lapses. GlideRecord performs filtering based on the instance timezone and the ODBC client is filtered based on the Windows timezone. The result is an intersection of the two timezones which can cause a loss of queried information if these timezones do not match. | https://docs.servicenow.com/bundle/geneva-servicenow-platform/page/app-store/good_practices/odbc/concept/odbc-use-consistent-timezones.html | 2018-05-20T15:35:44 | CC-MAIN-2018-22 | 1526794863626.14 | [] | docs.servicenow.com |
Pasting Reversed Cycles
Once you've copied an animated sequence, you can paste back its drawings in the reverse order.
NOTE: You can perform the same operation using the Paste Special dialog box. To open the Paste Special dialog box, select Edit > Paste Special or press Ctrl + B (Windows/Linux) or ⌘ + B (Mac OS X)
—see About Copying Motions.
How to paste a reversed cycleHow to paste a reversed cycle
- In the Timeline or Xsheet view, select the cell range to paste in reverse order.
- Reverse.
- Right-click and select Paste Reverse.
- Press Ctrl + . (Windows/Linux) or ⌘ + . (Mac OS X). | https://docs.toonboom.com/help/harmony-14/premium/timing/paste-reversed-cycle.html | 2020-10-20T05:14:22 | CC-MAIN-2020-45 | 1603107869933.16 | [] | docs.toonboom.com |
Using chrony for time synchronization
Some operating systems offer
chronyd
as an alternative to
ntpd for network time
synchronization. The OS package is called
chrony and contains both the NTP
server
chronyd and the
chronyc utilities.
If using
chronyd for time synchronization
at Kudu nodes, the
rtcsync option must be enabled in
chrony.conf. Without
rtcsync, the local machine's
clock will always be reported as unsynchronized and Kudu masters and tablet servers will
not be able to start. The following code explains the observed behavior of
chronyd when setting the synchronization status of the clock on
Linux. | https://docs.cloudera.com/cdp-private-cloud-base/7.1.4/troubleshooting-kudu/topics/kudu-using-chrony-for-time-synchronization.html | 2020-10-20T07:01:35 | CC-MAIN-2020-45 | 1603107869933.16 | [] | docs.cloudera.com |
Introduction
Learn what OTA Connect is and why you should use it.
What is OTA Connect?
HERE OTA Connect is a highly secure, open, over-the-air software management solution, designed specifically for the automotive industry to deliver software and firmware updates to embedded devices in a cost-effective and scalable manner. This technology enables software-based vehicle updates and upgrades that can reduce costly recalls, introduce new on-demand functionality for user satisfaction, and pave the way for new revenue opportunities inside of vehicles.
Why use OTA Connect?
For starters, it’s highly secure. HERE OTA Connect was developed according to IEEE-ISTO 6100, the Uptane standard for secure over-the-air updates for automotive. Uptane is the first comprehensive security framework guiding the implementation of OTA update systems on a design level, and we’ve been an active participant every step of the way, developing the first commercial implementation and contributing heavily to the standardization work.
OTA Connect is also completely open source, so you don’t have to worry about being tied to a proprietary solution. Absolutely everything on the client side is open source and highly customizable. We offer a SaaS solution for all server-side functionality, but if you’d prefer to set up your own server, most of our server-side functionality is open source too—including all of the security-sensitive elements.
It also has a great set of features. You can use it to create targeted update campaigns, monitor existing campaigns, and securely update software and firmware inside a vehicle. You can use our developer tools to enable OTA updates on your devices and our user interface to distribute updates to those devices.
Who is this guide for?
This guide is for developers who want to set up OTA update functionality on devices. You can start with our standalone client and then integrate our client library into your own projects.
Before you start these steps, make sure that you meet the following prerequisites:
Access to a Linux-based operating system.
The more technical steps should work on Mac OS. If you’re a Windows user, you can download a Linux-based software image and install it in on a virtual machine by using a free tool such as Oracle VM VirtualBox.
Experience with the Linux-command line.
You might run into trouble if you don’t understand what some of the commands in these steps are doing.
Knowledge of C++.
This is necessary if you intend to do any integration work. You’ll need be familiar with C++ to follow our examples.
A login to the OTA Connect Portal
If you don’t have one yet, you can find out how to get one in the OTA Portal Connect guide.
If those prerequisites look OK to you, you can start by getting to know our developer tools. | https://docs.ota.here.com/ota-client/latest/index.html | 2020-10-20T05:40:30 | CC-MAIN-2020-45 | 1603107869933.16 | [] | docs.ota.here.com |
Setting Up
This tutorial is intended for new users who want detailed instructions on how to build a basic Runnerty project. 🚀 Let's start!
RequirementsRequirements
GitGit
Git is a version control system for tracking changes in source code during software development and it can help you synchronize and version files between your local system and your online repository. Git for Windows includes Git Bash, a terminal application. If not already installed, see Installing Git.
Node.jsNode.
Alternatively, you can download an installer from the Node.js homepage.
Check your Node.js installationCheck your Node.js installation
Check that you have the minimum required version installed by running the following command:
You should see a version larger than Node 12.
Runnerty v2 minimum supported Node.js version is Node 12, but more recent versions will work as well.
Runnerty (optional)Runnerty (optional)
For this setup it is not necessary to install runnerty (it has been included as a dependency to simplify the process) but it is recommended to install it when you want to advance a little more.
- Open Terminal and simply runs this command:
You should see a version.
Here you have more information about the use of runnerty in the terminal | https://docs.runnerty.io/ | 2020-10-20T06:57:12 | CC-MAIN-2020-45 | 1603107869933.16 | [] | docs.runnerty.io |
There are a number of device-specific properties that you can access. See the script reference pages for SystemInfo.deviceUniqueIdentifier, SystemInfo.deviceName, SystemInfo.deviceModel and SystemInfo.operatingSystem.
Pirates will often hack an application (by removing AppStore DRM protection) and then redistribute it for free. Unity comes with an anti-piracy check which allows you to determine if your application was altered after it was submitted to the AppStore.
You can check if your application is genuine (not-hacked) with the Application.genuine property. If this property returns false then you might notify user that they are using a hacked application or maybe disable access to some functions of your application.
Note: Application.genuineCheckAvailable should be used along with Application.genuine to verify that application integrity can actually be confirmed. Accessing the Application.genuine property is a fairly expensive operation and so you shouldn’t do it during frame updates or other time-critical code.
You can trigger a vibration by calling Handheld.Vibrate. However, devices lacking vibration hardware will just ignore this call.
Mobile OSs have built-in activity indicators, that you can use during slow operations. Please check Handheld.StartActivityIndicator docs for usage sample.
Unity iOS/Android/Tizen allows you to control current screen orientation. Detecting a change in orientation or forcing some specific orientation can be useful if you want to create game behaviors depending on how the user is holding the device.
You can retrieve device orientation by accessing the Screen.orientation property. Orientation can be one of the following:
You can control screen orientation by setting Screen.orientation to one of those, or to ScreenOrientation.AutoRotation. When you want auto-rotation, you can disable some orientation on a case by case basis. See the script reference pages for Screen.autorotateToPortrait, Screen.autorotateToPortraitUpsideDown, Screen.autorotateToLandscapeLeft andScreen.autorotateToLandscapeRight
Different device generations support different functionality and have widely varying performance. You should query the device’s generation and decide which functionality should be disabled to compensate for slower devices. You can find the device generation from the iOS.DeviceGeneration property.
More information about different device generations, performance and supported functionality can be found in our iPhone Hardware Guide. | https://docs.unity3d.com/560/Documentation/Manual/iOSMobileAdvanced.html | 2020-10-20T06:08:45 | CC-MAIN-2020-45 | 1603107869933.16 | [] | docs.unity3d.com |
Common Settings¶
The Hardware Recommendations section provides some hardware guidelines for
configuring a Ceph Storage Cluster. It is possible for a single Ceph
Node to run multiple daemons. For example, a single node with multiple drives
may run one
ceph-osd for each drive. Ideally, you will have a node for a
particular type of process. For example, some nodes may run
ceph-osd
daemons, other nodes may run
ceph-mds daemons, and still other nodes may
run
ceph-mon daemons.
Each node has a name identified by the
host setting. Monitors also specify
a network address and port (i.e., domain name or IP address) identified by the
addr setting. A basic configuration file will typically specify only
minimal settings for each instance of monitor daemons. For example:
[global] mon_initial_members = ceph1 mon_host = 10.0.0.1
Important
The
host setting is the short name of the node (i.e., not
an fqdn). It is NOT an IP address either. Enter
hostname -s on
the command line to retrieve the name of the node. Do not use
host
settings for anything other than initial monitors unless you are deploying
Ceph manually. You MUST NOT specify
host under individual daemons
when using deployment tools like
chef or
cephadm, as those tools
will enter the appropriate values for you in the cluster map.
Networks¶
See the Network Configuration Reference for a detailed discussion about configuring a network for use with Ceph.
Monitors¶
Ceph production clusters typically deploy with a minimum 3 Ceph Monitor daemons to ensure high availability should a monitor instance crash. At least three (3) monitors ensures that the Paxos algorithm can determine which version of the Ceph Cluster Map is the most recent from a majority of Ceph Monitors in the quorum.
Note
You may deploy Ceph with a single monitor, but if the instance fails, the lack of other monitors may interrupt data service availability.
Ceph Monitors normally listen on port
3300 for the new v2 protocol, and
6789 for the old v1 protocol.
By default, Ceph expects that you will store a monitor’s data under the following path:
/var/lib/ceph/mon/$cluster-$id
You or a deployment tool (e.g.,
cephadm) must create the corresponding
directory. With metavariables fully expressed and a cluster named “ceph”, the
foregoing directory would evaluate to:
/var/lib/ceph/mon/ceph-a
For additional details, see the Monitor Config Reference.
Authentication¶
New in version Bobtail: 0.56
For Bobtail (v 0.56) and beyond, you should expressly enable or disable
authentication in the
[global] section of your Ceph configuration file.
auth cluster required = cephx auth service required = cephx auth client required = cephx
Additionally, you should enable message signing. See Cephx Config Reference for details.
OSDs¶
Ceph production clusters typically deploy Ceph OSD Daemons where one node has one OSD daemon running a filestore on one storage drive. A typical deployment specifies a journal size. For example:
[osd] osd journal size = 10000 [osd.0] host = {hostname} #manual deployments only.
By default, Ceph expects that you will store a Ceph OSD Daemon’s data with the following path:
/var/lib/ceph/osd/$cluster-$id
You or a deployment tool (e.g.,
cephadm) must create the corresponding
directory. With metavariables fully expressed and a cluster named “ceph”, the
foregoing directory would evaluate to:
/var/lib/ceph/osd/ceph-0
You may override this path using the
osd data setting. We don’t recommend
changing the default location. Create the default directory on your OSD host.
ssh {osd-host} sudo mkdir /var/lib/ceph/osd/ceph-{osd-number}
The
osd data path ideally leads to a mount point with a hard disk that is
separate from the hard disk storing and running the operating system and
daemons. If the OSD is for a disk other than the OS disk, prepare it for
use with Ceph, and mount it to the directory you just created:
.. prompt:: bash $
ssh {new-osd-host} sudo mkfs -t {fstype} /dev/{disk} sudo mount -o user_xattr /dev/{hdd} /var/lib/ceph/osd/ceph-{osd-number}
We recommend using the
xfs file system when running
mkfs. (
btrfs and
ext4 are not recommended and no
longer tested.)
See the OSD Config Reference for additional configuration details.
Heartbeats¶
During runtime operations, Ceph OSD Daemons check up on other Ceph OSD Daemons and report their findings to the Ceph Monitor. You do not have to provide any settings. However, if you have network latency issues, you may wish to modify the settings.
See Configuring Monitor/OSD Interaction for additional details.
Logs / Debugging¶
Sometimes you may encounter issues with Ceph that require modifying logging output and using Ceph’s debugging. See Debugging and Logging for details on log rotation.
Example ceph.conf¶
[global] fsid = {cluster-id} mon initial members = {hostname}[, {hostname}] mon host = {ip-address}[, {ip-address}] #All clusters have a front-side public network. #If you have two NICs, you can configure a back side cluster #network for OSD object replication, heart beats, backfilling, #recovery, etc. public network = {network}[, {network}] #cluster network = {network}[, {network}] #Clusters require authentication by default. auth cluster required = cephx auth service required = cephx auth client required = cephx #Choose reasonable numbers for your journals, number of replicas #and placement groups. osd journal size = {n} osd pool default size = {n} # Write an object n times. osd pool default min size = {n} # Allow writing n copy in a degraded state. osd pool default pg num = {n} osd pool default pgp num = {n} = {n}
Running Multiple Clusters (DEPRECATED)¶
Some Ceph CLI commands take a
-c (cluster name) option. This option is
present purely for backward compatibility. You should not attempt to deploy
or run multiple clusters on the same hardware, and it is recommended to always
leave the cluster name as the default (“ceph”).
If you need to allow multiple clusters to exist on the same host, please use Cephadm, which uses containers to fully isolate each cluster. | https://docs.ceph.com/en/latest/rados/configuration/common/ | 2020-10-20T06:06:28 | CC-MAIN-2020-45 | 1603107869933.16 | [] | docs.ceph.com |
TOPICS×
Workflow Step Reference
Workflow models consist of a series of steps of various types. According to the type, these steps can be configured and extended with parameters and scripts to provide the functionality and control you require.
This section covers the standard Workflow steps.
For module specific steps see also:
Step Properties
Each step component has a Step Properties dialog that lets you define and edit the required properties.
Step Properties - Common tab
A combination of the following properties are available for most workflow step components, on the Common tab of the properties dialog:
- TitleThe title for the step.
- DescriptionA description of the step.
- Workflow Stage
- TimeoutThe period after which the step will be "timed out".You can select between: Off , Immediate , 1h , 6h , 12h , 24h .
- Timeout HandlerThe handler which will control the workflow when the step times out; for example:Auto Advancer
- Handler AdvanceSelect this option to automatically advance the workflow to the next step after execution. If not selected, the implementation script must handle workflow advancement.
Step Properties - User/Group tab.
-.
AND Split<<
AND Split - Configuration
- Edit the AND Split properties:
- Split Name : assign a name for explanatory purposes.
- Select the number of branches required; 2, 3, 4 or 5.
- Add workflow steps to the branches as required.
Container Step
A Container step starts another workflow model that runs as a child workflow.
This Container lets you reuse workflow models to implement common sequences of steps. For example a translation workflow model could be used in multiple editing workflows.
Goto Step.
Goto Step - Configuration
To configure the step, edit and use the following tabs:
- Process
- The step to go to : Select the step to execute.
- Script Path : The path to the ECMAScript that determines whether to execute the Goto Step .
- Script : The ECMAScript that determines whether to execute the Goto Step .
Specify either the Script Path or Script . Both options cannot be used at the same time. If you specify values for both properties, the step uses the Script Path .
Simulating a for Loop; } }
OR Split
The OR Split creates a split in the workflow, after which only one branch will be active. This step enables you to introduce conditional processing paths into your workflow. You add workflow steps to each branch as required.
For additional information on creating an OR Split see:
OR Split - Configuration
- Edit the OR Split properties:
There is a separate tab for each branch:
-.
Specify either the Script Path or Script . Both options cannot be used at the same time. If you specify values for both properties, the step uses the Script Path .
- The script of each branch is evaluated one at a time.
- The branches are evaluated left to right.
- The first script that evaluates to true is executed.
- If no branch evaluates to true, then the workflow does not advance.
- Add workflow steps to the branches as required.
Participant Steps and Choosers
Participant Step.
Participant Step - Configuration
To configure the step, edit and use the following tabs:
The workflow initiator is always notified when:
- The workflow is completed (finished).
- The workflow is aborted (terminated).
Some properties need to be configured to enable email notifications. You can also customize the email template or add an email template for a new language. See Configuring Email Notification to configure email notifications in AEM.
Dialog Participant Step.
Dialog Participant Step - Configuration
To configure the step, edit and use the following tabs:
- Dialog
- **Dialog Path**: The path to the dialog node of the dialog you create .
Dialog Participant Step – Creating a dialog
To create a dialog:
- Decide where the resulting data will be stored in the payload .
Dialog Participant Step - Storing Data in the Payload Participant Step - Dialog Definition
- Dialog StructureDialogs for Dialog Participant Steps are similar to dialogs that you create for authoring components. They are stored under:/apps/myapp/workflow/dialogsDialogs for the standard, touch-enabled UI have the following node structure:
newComponent (cq:Component) |- cq:dialog (nt:unstructured) |- content |- layout |- items |- column |- items |- component0 |- component1 |- ...For further information see Creating and Configuring a Dialog .
- Dialog Path PropertyThe node:/apps/myapp/workflows/dialogsFor the touch-enabled UI the following value is used for the Dialog Path property:/apps/myapp/workflow/dialogs/EmailWatch/cq:dialog
- Example Dialog DefinitionThe:
Dynamic Participant Step
The Dynamic Participant Step component is similar to Participant Step with the difference that the participant is selected automatically at run time.
To configure the step, you select a Participant Chooser that identifies the participant to assign the work item to, together with a dialog.
Dynamic Participant Step - Configuration
To configure the step, edit and use the following tabs:
- Participant Chooser
- Participant Chooser : The name of the participant chooser that you create .
- Arguments : Any required arguments.
- Email : Whether an email notification should be sent to the user.
- Dialog
- Dialog Path : The path to the dialog node of the dialog you create (as with the .
Dynamic Participant Step - Developing the participant chooserScriptsmaY serviceServices must implement the com.day.cq.workflow.exec.ParticipantStepChooser interface. The interface defines the following members:.
- SERVICE_PROPERTY_LABEL field: Use this field to specify the name of the participant chooser. The name appears in a list of available participant choosers in the Dynamic Participant Step properties.
- getParticipant method: Returns the dynamically resolved Principal id as a String value.
Dynamic Participant Step - Example Participant Chooser Service
Form Participant Step .
Form Participant Step - Configuration
To configure the step, edit and use the following tabs:
- Form
- Form Path : The path to the form you create .
Form Participant Step - Creating the.
Random Participant Chooser
The Random Participant Chooser step is a participant chooser that assigns the generated work item to a user that is randomly selected from a list.
Random Participant Chooser - Configuration
To configure the step, edit and use the following tabs:
- Arguments
- Participants : Specifies the list of users available for selection. To add a user to the list, click Add Item and type the home path of the user node or the user ID. The order of the users does not affect the likelihood of being assigned a work item.
Workflow Initiator Participant Chooser
The Workflow Initiator Participant Chooser step is a participant chooser that assigns the generated work item to the user who started the workflow. There are no properties to configure other than the Common properties.
Process Step
A Process Step runs an ECMAScript or calls an OSGi service to perform automatic processing.
Process Step - Configuration
To configure the step, edit and use the following tabs:
- a Java Class .
- Handler Advance : Select this option to automatically advance the workflow to the next step after execution. If not selected, the implementation script must handle workflow advancement.
- Arguments : Arguments to be passed to the process. | https://docs.adobe.com/content/help/en/experience-manager-64/developing/extending-aem/extending-workflows/workflows-step-ref.html | 2020-10-20T06:43:42 | CC-MAIN-2020-45 | 1603107869933.16 | [array(['/content/dam/help/experience-manager-64.en/help/sites-developing/assets/wf-26.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/sites-developing/assets/wf-28.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/sites-developing/assets/wf-29.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/sites-developing/assets/wf-31.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/sites-developing/assets/wf-32.png',
None], dtype=object) ] | docs.adobe.com |
Setting Up Social Accounts
In UM Theme you can display the link of up to 6 Social Media Accounts. You can select from a wide range of Social Media Icons (61 Icons) and select them using the WordPress Customizer. After setting up links, it will appear on Header Top bar.
How to Display Social Media Icons
Here are the steps to display social media icons on your website:
- Go to Appearance > Customize > General Styles > Social Accounts
- Select the Icon from the Drop down and insert the link of your social media profile below the drop down.
- After including all your Social Media Profiles, click on Publish.
How To Display Social Media Icons Color
- Go to Appearance > Customize > General Styles > Social Accounts
- Click on Social Icon Color & Social Icon Hover color picker to change the color.
- Click on Publish to save your changes.
| https://docs.ultimatemember.com/article/1366-setting-up-social-accounts | 2020-10-20T06:31:20 | CC-MAIN-2020-45 | 1603107869933.16 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/561c96629033600a7a36d662/images/5bba03e32c7d3a04dd5b5eeb/file-3aCyAoF86l.png',
None], dtype=object) ] | docs.ultimatemember.com |
If Service Action
Description:
This Action marks the beginning of a conditional block of Actions depending on whether a service is running, paused or stopped. It can also be used to determine whether a service is present on the computer or not.
Properties:
If Service:
Choose the state of the service to be checked. You can check whether the specified service is running, paused or stopped as well as whether it is installed or not.
Service Name:
Choose a specific service installed on your computer to be checked. | https://docs.winautomation.com/en/if-service-action.html | 2020-10-20T06:25:56 | CC-MAIN-2020-45 | 1603107869933.16 | [array(['image/15f18201450337.png', 'ifservice.png'], dtype=object)] | docs.winautomation.com |
Editors
When RadPropertyGrid is not in read-only mode users can edit the contents of the selected item. Usually this process starts by pressing Enter or F2 key. All of the following conditions should be met to put an item in edit mode:
The underlying data source supports editing.
RadPropertyGrid control is enabled.
BeginEditMode property value is not BeginEditProgrammatically.
ReadOnly property of the control is set to false.
The item the user wants to edit is enabled.
The ReadOnly property of item the user wants to edit is set to false.
When in edit mode, the user can change the item value and press Enter to commit the change or Esc to revert to the original value. Clicking outside the edited item also commits the change.
You can configure RadPropertyGrid so that items enter edit mode directly when the item is selected or when users press F2 or Enter or click for a second time on the item without triggering a double click event. The BeginEditMode property controls this behavior.
There are a number of build-in ediotrs which are used for editing different data types. These editors can be customized or can be replaced by custom editors all together. Here is a list of the build-in ediotrs and the data types they are used for:
Since R3 2017 PropertyGridSpinEditor supports null values. Detailed information is available here. | https://docs.telerik.com/devtools/winforms/propertygrid/editors/overview | 2018-09-18T17:19:31 | CC-MAIN-2018-39 | 1537267155634.45 | [] | docs.telerik.com |
Workflow Implementation
In order to implement a workflow, you write a class that implements the desired
@Workflow interface. For instance, the example workflow interface
(
MyWorkflow) can be implemented like so:
public class MyWFImpl implements MyWorkflow { MyActivitiesClient client = new MyActivitiesClientImpl(); @Override public void startMyWF(int a, String b){ Promise<Integer> result = client.activity1(); client.activity2(result); } @Override public void signal1(int a, int b, String c){ //Process signal client.activity2(a + b); } }
The
@Execute method in this class is the entry point of the workflow logic. Since the framework
uses replay to reconstruct
the object state when a decision task is to be processed, a new object is created
for each decision task.
The use of
Promise< as a parameter is
disallowed in the
T>
@Execute method within a
@Workflow interface. This
is done because making an asynchronous call is purely a decision of the caller. The
workflow
implementation itself doesn't depend on whether the invocation was synchronous or
asynchronous. Therefore, the generated client interface has overloads that take
Promise< parameters so that these methods can
be called asynchronously.
T>
The return type of an
@Execute method can only be
void or
Promise<. Note that a return type of the
corresponding external client is
T>
void and not
Promise<>. Since
the external client isn't intended to be used from the asynchronous code, the external
client
doesn't return
Promise objects. For getting results of workflow executions
stated externally, you can design the workflow to update state in an external data
store
through an activity. Amazon SWF's visibility APIs can also be used to retrieve the
result of a
workflow for diagnostic purposes. It isn't recommended that you use the visibility
APIs to
retrieve results of workflow executions as a general practice since these API calls
may get
throttled by Amazon SWF. The visibility APIs require you to identify the workflow
execution using a
WorkflowExecution structure. You can get this structure from the generated
workflow client by calling the
getWorkflowExecution method. This method will
return the
WorkflowExecution structure corresponding to the workflow execution
that the client is bound to. See the Amazon Simple Workflow Service API Reference for more details about the visibility APIs.
When calling activities from your workflow implementation, you should use the generated activities client. Similarly, to send signals, use the generated workflow clients.
Decision Context
The framework provides an ambient context anytime workflow code is executed by the framework. This context provides context-specific functionality that you may access in your workflow implementation, such as creating a timer. See the section on Execution Context for more information.
Exposing Execution State
Amazon SWF allows you to add custom state in the workflow history. The latest state
reported by
the workflow execution is returned to you through visibility calls to the Amazon SWF
service and
in the Amazon SWF console. For example, in an order processing workflow, you may report
the order
status at different stages like 'order received', 'order shipped', and so on. In the
AWS Flow Framework for Java, this is accomplished through a method on your workflow
interface that is annotated
with the
@GetState annotation. When the decider is done processing a decision task, it
calls this method to get the latest state from the workflow implementation. Besides
visibility calls, the state can also be retrieved using the generated external client
(which uses the visibility API calls internally).
The following example demonstrates how to set the execution context.
@Workflow @WorkflowRegistrationOptions(defaultExecutionStartToCloseTimeoutSeconds = 60, defaultTaskStartToCloseTimeoutSeconds = 10) public interface PeriodicWorkflow { @Execute(version = "1.0") void periodicWorkflow(); @GetState String getState(); } @Activities(version = "1.0") @ActivityRegistrationOptions(defaultTaskScheduleToStartTimeoutSeconds = 300, defaultTaskStartToCloseTimeoutSeconds = 3600) public interface PeriodicActivity { void activity1(); } public class PeriodicWorkflowImpl implements PeriodicWorkflow { private DecisionContextProvider contextProvider = new DecisionContextProviderImpl(); private WorkflowClock clock = contextProvider.getDecisionContext().getWorkflowClock(); private PeriodicActivityClient activityClient = new PeriodicActivityClientImpl(); private String state; @Override public void periodicWorkflow() { state = "Just Started"; callPeriodicActivity(0); } @Asynchronous private void callPeriodicActivity(int count, Promise<?>... waitFor) { if(count == 100) { state = "Finished Processing"; return; } // call activity activityClient.activity1(); // Repeat the activity after 1 hour. Promise<Void> timer = clock.createTimer(3600); state = "Waiting for timer to fire. Count = "+count; callPeriodicActivity(count+1, timer); } @Override public String getState() { return state; } } public class PeriodicActivityImpl implements PeriodicActivity { @Override public static void activity1() { ... } }
The generated external client can be used to retrieve the latest state of the workflow execution at any time.
PeriodicWorkflowClientExternal client = new PeriodicWorkflowClientExternalFactoryImpl().getClient(); System.out.println(client.getState());
In the above example, the execution state is reported at various stages. When the
workflow instance starts,
periodicWorkflow reports the initial state as 'Just Started'.
Each call to
callPeriodicActivity then updates the workflow state. Once
activity1 has been
called 100 times, the method returns and the workflow instance completes.
Workflow Locals
Sometimes, you may have a need for the use of static variables in your workflow
implementation. For example, you may want to store a counter that is to be accessed
from
various places (possibly different classes) in the implementation of the workflow.
However,
you can't rely on static variables in your workflows because static variables are
shared
across threads, which is problematic because a worker may process different decision
tasks
on different threads at the same time. Alternatively, you may store such state in
a field on
the workflow implementation, but then you will need to pass the implementation object
around. To address this need, the framework provides a
WorkflowExecutionLocal<?> class. Any state that needs to have static
variable like semantics should be kept as an instance local using
WorkflowExecutionLocal<?>. You can declare and use a static variable of
this type. For example, in the following snippet, a
WorkflowExecutionLocal<String> is used to store a user name.
public class MyWFImpl implements MyWF { public static WorkflowExecutionLocal<String> username = new WorkflowExecutionLocal<String>(); @Override public void start(String username){ this.username.set(username); Processor p = new Processor(); p.updateLastLogin(); p.greetUser(); } public static WorkflowExecutionLocal<String> getUsername() { return username; } public static void setUsername(WorkflowExecutionLocal<String> username) { MyWFImpl.username = username; } } public class Processor { void updateLastLogin(){ UserActivitiesClient c = new UserActivitiesClientImpl(); c.refreshLastLogin(MyWFImpl.getUsername().get()); } void greetUser(){ GreetingActivitiesClient c = new GreetingActivitiesClientImpl(); c.greetUser(MyWFImpl.getUsername().get()); } } | https://docs.aws.amazon.com/amazonswf/latest/awsflowguide/workflowimpl.html | 2018-09-18T17:46:24 | CC-MAIN-2018-39 | 1537267155634.45 | [] | docs.aws.amazon.com |
Importing Splat Maps
Splat maps are 8-bit monochrome bitmap
.bmp files that contain weight information for each
vertex in a terrain map. Splat maps are generated using a DCC tool such as World Machine's
Splat
Converter.
All splat map operations in Lumberyard are done using the Terrain Texture Layers editor.
To import splat maps
In Lumberyard Editor, choose Tools, Other, Terrain Texture Layers.
In the Terrain Texture Layers editor, under Layer Tasks, assign each splat map to a texture layer by clicking a layer and then clicking Assign Splat Map.
When prompted, select a
.bmpfile. You don't need to assign a splat map path to a layer, but you can't assign more than one path either.
Under Layer Tasks, click Import Splat Maps. This clears the current weight map for the terrain and then rebuilds it using the selected splat maps.
In Lumberyard Editor, choose Game, Terrain, Generate Terrain Texture.
Note
You cannot apply masking to an imported splat map. | https://docs.aws.amazon.com/lumberyard/latest/userguide/terrain-splat-maps.html | 2018-09-18T17:48:08 | CC-MAIN-2018-39 | 1537267155634.45 | [array(['images/terrain-splat-map-2.png', None], dtype=object)
array(['images/terrain-splat-map-1.png', None], dtype=object)] | docs.aws.amazon.com |
API Reference
Rotation
rot
Rotates the image by degrees according to the value specified.
Valid values are in the range
0 –
359, and rotation is counter-clockwise with
0 (the default) at the top of the image. The image will be zoomed after rotation so that it covers the entire specified dimensions (unlike
or, which keeps the image at the same zoom setting and rotates the frame along with the image). | https://docs.imgix.com/apis/url/rotation/rot | 2018-09-18T18:26:50 | CC-MAIN-2018-39 | 1537267155634.45 | [] | docs.imgix.com |
1. Click the Settings
button on the toolbar to open the Package Settings dialog.
2. These settings are defined by the XCLTBIF template which was used to define this package. Notice that because this package is for a client application it is deployed:
You don't need to review this now but to learn more about these settings you should refer to Options & Settings.
3. Notice that the template includes a setting to Deploy LANSA Communications. This allows the LANSA Communications Administrator to be run for the installed application. This is not essential for most users. An install to a file server would enable most users to be given a shortcut to the main application form only.
4. No changes are required to these settings. | https://docs.lansa.com/14/en/lansa022/content/lansa/vldtoolt_0195.htm | 2018-09-18T18:26:13 | CC-MAIN-2018-39 | 1537267155634.45 | [] | docs.lansa.com |
Tooltips are added since Telerik Reporting R1 2017. The feature allows you add custom text and title of a box displayed on hover of an elemnt in a Telerik Report. All report viewers provide support for displaying tooltips in a report viewer’s native rendering technology. Tooltips are supported also in PDF files.
Settings in Reports
Each Report section and item has a Tooltip property where you can configure the Text you want to appear on hover. You can also specify a Title for the tooltip's box.
The Series collection of the Graph item also has support for tooltips.
Settings in Viewers
The HTML5 Viewer (JavaScript, MVC and WebForms), WinForms ReportViewer and WPF ReportViewer controls have ToolTipOpening events. In this events you can modify tooltips content:
HTML5 Viewer: viewerToolTipOpening(e, args)
WPF ReportViewer control: ViewerToolTipOpening event.
WinForms ReportViewer control: ViewerToolTipOpening event. | https://docs.telerik.com/reporting/designing-reports-interactivity-tooltips | 2018-09-18T18:11:39 | CC-MAIN-2018-39 | 1537267155634.45 | [] | docs.telerik.com |
1. Select Version 2 and use the Go to folder
toolbar button to open Explorer at the application package folder (..\x_apps\IIPERSON) in your Visual LANSA install path:
2. Double click on the Version 2 MSI file to run the installation.
Note: Version 2 installs new versions of all forms and reusable parts. Once again it does not involve any database changes, and Version 1 was installed as a per-user installation. This means that for both Windows 7 and Windows 8 you can install Version 2 by double clicking the MSI file.
3. You will observe that installing a new version is similar to the initial install. For example, the End User License Agreement is displayed and must be accepted.
4. Complete all the install steps for Version 2. Version 2 will install as a per-user installation, the same as the option used when Version 1 was installed.
5. Run the application and notice that the main form and the connect form now use the 2007 Blue theme:
6. Log in to the server and re-test the application. You should find that the only change is to use the 2007 Blue theme. | https://docs.lansa.com/14/en/lansa022/content/lansa/vldtoolt_0295.htm | 2018-09-18T18:20:49 | CC-MAIN-2018-39 | 1537267155634.45 | [] | docs.lansa.com |
You are here:
Summary
Important Observations
Using the Visual LANSA editor, you can interactively debug functions, forms, reusable parts, WAMs and web event functions locally.
You can also remotely debug RDMLX functions when they are running on the server, including web events and WAMs using the Visual LANSA Editor
Programs must be compiled 'debug enabled' in order to use debug.
Tips & Techniques
By default, debug stops at the first executable line of code, controlled by
Debug
settings in the
Editor Options
dialog.
Breakpoints are saved with the source code.
Breakpoint features include variables, breakpoint and call stack tabs.
Variables and component values can be viewed and edited.
Breakpoint properties support break on condition and pass count.
What You Should Know
How to use debug for local debugging. | https://docs.lansa.com/14/en/lansa095/content/lansa/frmeng01_0255.htm | 2018-09-18T17:32:44 | CC-MAIN-2018-39 | 1537267155634.45 | [] | docs.lansa.com |
Configure a computer to develop Office solutions
To create VSTO Add-ins and customizations for Microsoft Office, install a supported version of Visual Studio, the .NET Framework, and Microsoft Office.
For detailed installation steps, see How to: Configure a computer to develop Office solutions.
If project templates don't appear or they don't work in Visual installed automatically along with Visual Studio. If you customize the Visual Studio installation by specifying which features to install, make sure that you choose Microsoft Office Developer Tools during setup to install the tools.
To make sure that these tools are installed, start the Visual Studio setup program, and choose the Modify button. Select the Microsoft Office Developer Tools check box, and then choose the Update button.
Make sure that you're not running a version of Office that was delivered by Click-to-Run. See How to: Verify whether Outlook is a Click-to-Run application on a computer.
Ensure that you're running only one version of Microsoft Office.
If you continue to experience problems, see Additional support for errors in Office solutions.
See also
How to: Configure a computer to develop Office solutions
How to: Install the Visual Studio Tools for Office runtime redistributable
How to: Install Office primary interop assemblies
Features available by Office application and project type | https://docs.microsoft.com/en-us/visualstudio/vsto/configuring-a-computer-to-develop-office-solutions?view=vs-2017 | 2018-09-18T17:58:31 | CC-MAIN-2018-39 | 1537267155634.45 | [] | docs.microsoft.com |
Homematic Binding
This is the binding for the eQ-3 Homematic Solution. This binding allows you to integrate, view, control and configure all Homematic devices in the openHAB environment.
Supported
All devices connected to a Homematic gateway. All required openHAB metadata are generated during device discovery. With Homegear or a CCU, variables and scripts are supported too.
Discovery
Gateway discovery is only available for Homegear, you need at least 0.6.x for gateway discovery..
Bridge)
socketMaxAlive
The maximum lifetime of a pooled socket connection to)
The syntax for a bridge is:
homematic:bridge:NAME
- homematic the binding id, fixed
- bridge the type, fixed
- name the name of the bridge
Example
- openHab. a underscore.
Itemss
The binding supports one virtual device and some virtual datapoints. Virtual datapoints are generated by the binding and provides special functionality.
GATEWAY
A virtual datapoint (Switch) to reload all values for all devices, available in channel 0 in GATEWAY-EXTRAS
RELOAD_RSSI
A virtual datapoint (Switch) to reload all rssi values for all devices, available in channel 0 in GATEWAY-EXTRAS
RSSI
A virtual datapoint (Number) with the unified RSSI value from RSSI_DEVICE and RSSI_PEER, available in channel 0 for all wireless devices
INSTALL_MODE
A virtual datapoint (Switch) to start the install mode on the gateway, available in channel 0 in GATEWAY-EXTRAS
INSTALL_MODE_DURATION
A virtual datapoint (Integer) to hold the duration for the install mode, available in channel 0 in GATEWAY-EXTRAS (max 300 seconds, default = 60)
DELETE_MODE
A virtual datapoint (Switch) to remove the device from the gateway, available in channel 0 for each device. Deleting a device is only possible if DELETE_DEVICE_MODE is not LOCKED
DELETE
A virtual datapoint (Number) to automatically set the ON_TIME datapoint before the STATE or LEVEL datapoint is sent to the gateway, available for all devices which supports the ON_TIME datapoint.
This is usefull to automatically turn off the datapoint after the specified time.
DISPLAY
Adds multiple virtual datapoints to the HM-Dis-WM55 device to easily send colored text and icons to the display
Example:
-1 Failure
A device may return this failure while fetching the datapoint values.
I, RefreshType.REFRESH)
Note: adding new and removing deleted variables from the GATEWAY-EXTRAS Thing is currently not supported. You have to delete the Thing, start a scan and add it again.
Debugging, i need a full startup TRACE log
stop org.openhab.binding.homematic log:set TRACE org.openhab.binding.homematic start org.openhab.binding.homematic | https://docs.openhab.org/v2.2/addons/bindings/homematic/readme.html | 2018-09-18T17:46:38 | CC-MAIN-2018-39 | 1537267155634.45 | [] | docs.openhab.org |
Adding a New Drawing
If you need to add a new drawing to a scene, you can do so by duplicating the existing drawing on the current frame. If you're using symbols, you can also add a new drawing inside the body part’s symbol.
Duplicating a Drawing
If your character doesn't use symbols, you must duplicate the drawing on the cell on which you need to a new drawing. You could also create a new blank drawing, but duplicating the existing drawing allows you to keep the pivot you previously set (if you're using drawing pivots) and you can also reuse a portion of the existing artwork. When you create a new drawing, you get a blank cell with a pivot set at the centre of the Camera view.
- In the Timeline or Camera view, select the drawing to duplicate.
- From the top menu, select Drawing > Duplicate Drawing or press Alt + Shift + D.
The new drawing appears in the currently selected cell.
- In the Camera view, draw the new piece. | https://docs.toonboom.com/help/harmony-12/advanced-network/Content/_CORE/_Workflow/022_Cut-out_Animation/062_H1_Adding_a_New_Drawing.html | 2018-09-18T17:22:29 | CC-MAIN-2018-39 | 1537267155634.45 | [array(['../../../Resources/Images/_ICONS/Home_Icon.png', None],
dtype=object)
array(['../../../Resources/Images/HAR/_Skins/stagePremium.png', None],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/stageAdvanced.png',
'Toon Boom Harmony 12 Stage Advanced Online Documentation'],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/stageEssentials.png', None],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/controlcenter.png',
'Installation and Control Center Online Documentation Installation and Control Center Online Documentation'],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/scan.png', None],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/stagePaint.png', None],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/stagePlay.png', None],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/Activation.png', None],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/_ICONS/download.png', None],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/Stage/Cut-out/HAR12/HAR12_dupe0.png',
None], dtype=object)
array(['../../../Resources/Images/HAR/Stage/Cut-out/HAR12/HAR12_dupe1.png',
None], dtype=object) ] | docs.toonboom.com |
Drawing with the Polyline Tool
You can use the Polyline tool to draw long and curvy lines.
NOTE: To learn more about the Polyline tool options, see Polyline Tool Properties.
Select a drawing from the Drawing Thumbnails panel.
- In the Tools toolbar, select the Polyline
tool or press Alt + _.
- In the Drawing view, click and drag to create a point and a Bezier handle to shape your line.
- Click a new area and drag to create a second point and Bezier handle.
- Press Alt to pull only one handle, instead of two.
- Press Shift to snap the handles to 45, 90, or 180 degrees.
- with the Contour Editor Tool. | https://docs.toonboom.com/help/harmony-14/paint/drawing/draw-polyline-tool.html | 2018-09-18T17:15:27 | CC-MAIN-2018-39 | 1537267155634.45 | [array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../Resources/Images/HAR/Stage/Drawing/an_polyline_01.png', None],
dtype=object)
array(['../Resources/Images/HAR/Stage/Drawing/an_polyline_02.png', None],
dtype=object) ] | docs.toonboom.com |
Advertising on Read the Docs Community Sites¶.
Sustainability¶.!! | http://blog.readthedocs.com/ads-on-read-the-docs/ | 2018-09-18T18:21:19 | CC-MAIN-2018-39 | 1537267155634.45 | [] | blog.readthedocs.com |
...
The plugin enables analysis of ActionScript projects within SonarQube.
It is compatible with the Issues Report plugin to run pre-commit local analysis.
...
Release Notes
Version 2.0
It is no longer possible to:
- Let SonarQube drive the execution of unit tests.
- Import unit test execution reports (feel free to vote for to reintroduce this feature). | http://docs.codehaus.org/pages/diffpages.action?pageId=136675686&originalId=233052337 | 2014-04-16T10:46:53 | CC-MAIN-2014-15 | 1397609523265.25 | [] | docs.codehaus.org |
:
<!DOCTYPE Configure PUBLIC "-//Mort Bay Consulting//DTD Configure//EN" "">
All tags and elements declared inside jetty.xml will be pointing to this resource, configure.dtd. This only means tags and/or elements will be readable based from this data type file.
<Configure id="Server" class="org.mortbay.jetty.Server">
This is the first instance called when you run the server..
<Set name="ThreadPool">
<New class="org.mortbay.thread.BoundedThreadPool">
<Set name="minThreads">10</Set>
<Set name="maxThreads">100</Set>
</New>
</Set>.
<Set name="connectors">
<Array type="org.mortbay.jetty.Connector">
</Set>. | http://docs.codehaus.org/pages/viewpage.action?pageId=48589 | 2014-04-16T10:59:49 | CC-MAIN-2014-15 | 1397609523265.25 | [] | docs.codehaus.org |
COM/MP1/ALJ/MEG/hkr DRAFT Agenda ID #5280
Ratesetting
Decision DRAFT DECISION OF COMMISSIONER PEEVEY
(Mailed 1/13/2006)
BEFORE THE PUBLIC UTILITIES COMMISSION OF THE STATE OF CALIFORNIA
OPINION ON PROCUREMENT INCENTIVES FRAMEWORK
TABLE OF CONTENTS
Title Page
OPINION ON PROCUREMENT INCENTIVES FRAMEWORK 22
1. Summary 22
2. Background 55
3. Threshold Policy Issues 1010
3.1. Role of a Greenhouse Gas (GHG) Cap in Procurement Policies 1111
3.1.1. Positions of Parties 1111
3.2. Type of Cap on GHG Emissions 1616
3.2.1. Positions of Parties 1616
3.3. Applicability of Cap 1818
3.3.1. Positions of Parties 1818
3.4. Role of Financial Incentives 2121
3.4.1. Positions of Parties 2121
3.5. Interaction of GHG Cap and Financial Incentives 2626
3.5.1. Positions of Parties 2727
4. Implementation Issues 2828
4.1. GHG Emissions Baselines and Adjustments
to Reduction Requirements Over Time 2929
4.1.1. Positions of Parties 2929
4.2. Allocation of GHG Allowances 3434
4.2.1. Positions of Parties 3434
4.3. Flexible Compliance 3636
4.3.1. Positions of Parties 3636
4.3.1.3. Banking and Borrowing 3838
4.4.1. Positions of Parties 3939
4.5. Emissions Registration 4040
4.5.1. Positions of Parties 4040
4.6. Continuation of GHG/Carbon Adder 4242
4.6.1. Positions of Parties 4242
4.7. Treatment of GHG Emissions From Natural Gas
for Purposes Other Than Electricity Generation 4343
5. Other Issues 4444
6. Next Steps 4545
7. Comments on Draft Decision 4545
8. Assignment of Proceeding 4646
Findings of Fact 4646
Conclusions of Law 5252
ORDER 5555
ATTACHMENT 1
OPINION ON PROCUREMENT INCENTIVES FRAMEWORK
Today we state our intent to develop a load-based cap on greenhouse gas (GHG) emissions for Pacific Gas and Electric Company (PG&E), San Diego Gas & Electric Company (SDG&E), Southern California Edison Company (SCE), and non-utility load serving entities (LSEs) that provide electric power to customers within these respondents' service territories. Over the longer term, we also intend to develop a GHG limitation program that includes emissions from the natural gas sector, as the requisite emission reporting and certification protocols become available.
As discussed in this decision, we will establish a baseline for the GHG emissions cap on a historical year basis, with 1990 as our preferred reference year. Our final determination on this matter will await further consideration of implementation issues associated with using this particular year as the reference, including the availability of adequate historical emissions data for the investor-owned utilities (IOUs) and other LSEs. We also leave to the implementation phase our consideration of the appropriate level of emissions reductions (and associated caps) over time, relative to the base year.
We intend to create a load-based GHG emissions cap that is compatible with any other GHG cap-and-trade regime that may be developed in the future, either in the Western Region, nationally, or internationally. Therefore, the GHG emissions allowances associated with our load-based cap will be in the form of "tons of carbon-dioxide equivalent." Based on the record in this proceeding, our preference is to administratively allocate these allowances, rather than auction them. At least initially, we intend to limit the use of offsets to actions directly related to utility activities (e.g., diesel pump electrification) and to activities occurring within California. We may consider a broader use of offsets in the future after measurement and verification protocols for national and/or international mechanisms are better developed.
For the reasons discussed in this decision, we will limit the trading of offsets. Because we wish to encourage early action on GHG emissions reductions, we are not inclined to allow borrowing of allowances (from future year allocations) during the initial stages of implementation. We intend, however, to allow some banking of emissions allowances for a period of three years. We conclude that some form of penalty structure for non-compliance is necessary, or else the GHG reduction requirements will only be voluntary. At this juncture, we prefer structuring penalties as alternative compliance payments, but will further explore the nature of an appropriate penalty mechanism during the implementation phase.
In conjunction with a load-based emissions cap on electric procurement, we will pursue the development of shareholder incentives in resource-specific proceedings, with our immediate focus on energy efficiency. As discussed in this decision, we will also explore the concept of allowance sale incentives during the implementation phase. Under this mechanism, the Commission would certify GHG emission allowances based on superior performance, as defined by the Commission, for sale by the utilities outside of California to the benefit of their shareholders.
We delegate to the Assigned Commissioner and Administrative Law Judge (ALJ) the scoping of the implementation steps necessary to implement our policy decision today for adoption in a future decision in this proceeding or a successor proceeding. Those implementation steps include, but are not limited to: (1) quantifying the GHG emissions baseline for each LSE, (2) adjusting GHG emission reduction requirements over time, relative to the baseline, (3) adopting and administering a process for allocating emissions allowances, and (4) developing flexible compliance mechanisms with appropriate performance penalties.
In the meantime, we require LSEs, when they file their 2006 procurement plans, to include information about existing GHG emissions profiles and the future GHG emissions implications of their procurement plans.
We also direct PG&E, SDG&E, and SCE to include a provision in any power purchase agreement for non-renewable energy that requires the supplier to register and report their GHG emissions with the California Climate Action Registry (CCAR). CCAR is a non-profit public/private partnership that serves as a voluntary GHG registry of participating companies' emission profiles. Participating power generators and electric utilities account for and report GHG emission inventories according to the CCAR's reporting protocols. PG&E, SCE, and SDG&E are already voluntary members of CCAR.
As discussed in this decision, we fully intend to continue to collaborate with Governor Schwarzenegger's Climate Action Team and to coordinate today's adopted policies with the administration's GHG reduction policies and goals. In particular, we will continue to work with the Governor's Climate Action Team to ensure that municipal utilities are also subject to a GHG emissions reduction regime that will assist California in meeting the aggressive GHG reduction goals articulated in Executive Order S-3-05.
We also note that, with this decision, we are joining in the pioneering efforts on greenhouse gas regulation started in the Northeast and Mid-Atlantic states with the voluntary Regional Greenhouse Gas Initiative there. We hope that these parallel but distinct efforts on both coasts will help move the ball forward on initiatives to reduce greenhouse gas emissions and mitigate global climate change in the United States and around the world.
1 Attachment 1 describes the abbreviations and acronyms used in this decision. | http://docs.cpuc.ca.gov/Published/COMMENT_DECISION/52819.htm | 2014-04-16T10:41:33 | CC-MAIN-2014-15 | 1397609523265.25 | [] | docs.cpuc.ca.gov |
.
This decorator is used to ensure.
Decorator to require that a view only accept the GET method.
Decorator to require that a view only accept the POST method.
Conditional view processing¶
The following decorators in django.views.decorators.http can be used to control caching behavior on particular views.
These decorators can be used to generate ETag and Last-Modified headers; see conditional view processing.
GZip compression¶.
Vary headers¶
The decorators in django.views.decorators.vary can be used to control caching based on specific request headers.
The Vary header defines which request headers a cache mechanism should take into account when building its cache key.
See using vary headers.. | https://docs.djangoproject.com/en/1.3/topics/http/decorators/ | 2014-04-16T10:11:54 | CC-MAIN-2014-15 | 1397609523265.25 | [] | docs.djangoproject.com |
Show me the code!¶
All right, all right, already. Here you go. Below is a functional setup of MassTransit.
So what is all of this doing?¶
If we are going to create a messaging system, we need to create a message. YourMessage is a .Net class that will represent our message. Notice that it’s just a plain C# class (or POCO).
Next up, we need a program to run our code. Here we have a standard issue command line Main method. To setup the bus we start with the static class Bus and its Initialize method. This method takes a lambda whose first and only argument is a class that will let you configure every aspect of the bus.
One of your first decisions is going to be “What transport do I want to run on?” Here we have choosen MSMQ (sbc.UseMsmq()) because its easy to install on a Windows machines (sbc.VerifyMsmqConfiguration()), will do just that and its most likely what you will use.
After that we have the sbc.UseMulticastSubscriptionClient() this tells the bus to pass subscription information around using PGM over MSMQ giving us a way to talk to all of the other bus instances on the network. This eliminates the need for a central control point.
Now we have the sbc.ReceiveFrom("msmq://localhost/test_queue) line which tells us to listen for new messages at the local, private, msmq queue ‘test_queue’. So anytime a message is placed into that queue the framework will process the message and deliver it to any consumers subscribed on the bus.
Lastly, in the configuration, we have the Subscribe lambda, where we have configured a single Handler for our message which will be activated which each message of type YourMessage and will print to the console.
And now we have a bus that is configured and can do things. So now we can grab the singleton instance of the service bus and call the Publish method on it. | http://docs.masstransit-project.com/en/latest/configuration/quickstart.html | 2014-04-16T10:11:59 | CC-MAIN-2014-15 | 1397609523265.25 | [] | docs.masstransit-project.com |
Author: jboner jboner
Introduction Hello World application for this tutorial.
It is important to notice that the HelloWorld application is changed some to make this tutorial more relevant.
The HelloWorld2.greet() method now returns a greeting message while in the previous tutorial, the HelloWorld.greet() was simply printing out the message directly.
It is a standard Java application and can be compiled with
javac -d target HelloWorld2.java.
However this application is indeed a bit boring and we are all pretty tired of it. So let's try to make it a bit more interesting, let's hijack the
HelloWorld22 we are trying to illustrate here is that:
- the
HelloWorld2application is completely oblivious of the fact that it is actually yelling out the greeting
- the
yellifieradvice implementation is generic and can be used to yell out output captured from any method (that does not return void).
However, you might have noticed that the definition (in the aspect annotation) is not generic but coupled to the
HelloWorld2 application. We can easily loosen up this strong coupling and make the aspect completely reusable. For more information about that, read the next tutorial.
5 Comments
Hanson Char
To compile, method signature of yellifyier needs to declare the thowing of Throwable.
Anonymous
For AnnotationC compiler to work properly with aspectwerkz-2.0.RC1 I made
following two changes.
1> Added ant-1.5.2.jar in classpath.
2> used -src and -classes from command line
So, on my windows system AnnotationC command looks something like this
java -cp %ASPECTWERKZ_HOME%\lib\aspectwerkz-2.0.RC1.jar;%ASPECTWERKZ_HOME%\lib\ant-1.5.2.jar org.codehaus.aspectwerkz.annotation.AnnotationC -src . -classes target
Anonymous
If classes are not compiled with -g option weaver throws following errors
WARNING: unable to register advice: Could not access source information for meth
od testAOP.HelloWorldHijacker.yellifyier(Lorg/codehaus/aspectwerkz/joinpoint/Joi
nPoint;)Ljava/lang/Object;. Compile aspects with javac -g.
Hello World!
Do I need to compile classes with -g option ?
Anonymous
if instead of annotations can we do it with xml?
i tried doing it with xml..but the weaving was not done..
i think there must be some problem with the syntax ....can u look into the matter and reply with the correct xml file.
This is the file i made..but the results was not correct.
<aspectwerkz>
<system id="AspectWerkzExample">
<package name="testAOP">
<aspect class="HelloWorldHijacker"/>
<pointcut name="greetMethod" expression="execution(* testAOP.HelloWorld2.greet(..))"/>
<advice name="aroundgreeting" type="around" bind-
</package>
</system>
</aspectwerkz>
can anyone help me.............piyush
beto siless
Hi, the correct XML would be
<aspectwerkz>
<system id="AspectWerkzExample">
<package name="testAOP">
<aspect class="HelloWorldHijacker">
<pointcut name="greetMethod" expression="execution(* testAOP.HelloWorld2.greet(..))"/>
<advice name="yellyfier" type="around" bind-
</aspect>
</package>
</system>
</aspectwerkz>
The advice name must match the method in the aspect clas | http://docs.codehaus.org/display/AW/Hijacking+Hello+World?focusedCommentId=14873 | 2014-04-16T10:50:59 | CC-MAIN-2014-15 | 1397609523265.25 | [] | docs.codehaus.org |
system ... Starting to download channel.xml (811 bytes) ...done: 811 bytes Auto-discovered channel "pear.symfony.com", alias "symfony2", adding to registr Did not download optional dependencies: phpunit/PHP_Invoker, use --alldeps to d wnload automatically ... ....
if ($second >= 15) $this->fail("timeout");. | http://docs.joomla.org/index.php?title=Running_Automated_Tests_for_the_Joomla_CMS&diff=77301&oldid=77297 | 2014-04-16T10:39:23 | CC-MAIN-2014-15 | 1397609523265.25 | [] | docs.joomla.org |
If you prefer to download a copy of a Quick Start Guide, you can do so here Joomla! v 1.5 Quick Start Guide Scribd, written by Kevin Hayne. As you read the guide, walk through the video accompaniment created by Michael Casha. This is a well developed and efficient training guide and video set.
If you prefer to follow information online via this wiki, then please continue.
This beginners guide is divided into manageable sections to walk you through pretty much everything you will need to know to get your first installation of Joomla! up and running, and more importantly, understand the basic functioning of the system to allow you to do some basic troubleshooting..:! | http://docs.joomla.org/index.php?title=User:RCheesley/BeginnersIntroduction&oldid=99899 | 2014-04-16T11:53:37 | CC-MAIN-2014-15 | 1397609523265.25 | [] | docs.joomla.org |
Subsets and Splits
No saved queries yet
Save your SQL queries to embed, download, and access them later. Queries will appear here once saved.