Provider Development
Information about writing providers to provide implementation for types.
About
The core of Puppet's cross-platform support is via Resource Providers, which are essentially back-ends that implement support for a specific implementation of a given resource type. For example, a provider declaration looks like this:
Puppet::Type.type(:package).provide :apt, :parent => :dpkg, :source => :dpkg do ... end
Note that we're also specifying that this provider uses the dpkg provider as both its parent and its source.
Suitability
The first question to ask about a new provider is where it will be functional, which Puppet describes as suitable. Unsuitable providers cannot be used to do any work, although this behavior changed somewhat as of Puppet 2.7.8.
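For instance, a provider can declare the commands it depends on, and Puppet will mark it unsuitable on systems where those binaries are missing. The sketch below is illustrative only (the description, paths and defaultfor values are assumptions, not taken from the original article):

Puppet::Type.type(:package).provide :apt, :parent => :dpkg, :source => :dpkg do
  desc "Package management via apt-get."          # illustrative description
  commands :aptget => "/usr/bin/apt-get"          # suitability: only usable where apt-get exists
  defaultfor :operatingsystem => [:debian, :ubuntu]

  def install
    aptget("install", "-y", @resource[:name])     # runs the declared command
  end
end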
However, because things are rarely so simple, Puppet attempts to help in a few ways.
Multi-modules projects
Imagine you are working on a project based upon a traditional 3-layer architecture. Maven handles this kind of project through what it calls modules. Your project should have the following structure at the end of this lesson:
Let's create your multi-modules project. First, create a new directory named after your project, something like multimoduleproject. Inside it you will create a parent pom.xml together with three children pom.xml files:
[Figures: multi-projects overview, multi-modules project structure, the multi-module pom.xml files, and the resulting directory layout.]
Genesys Text and Speech Analytics for Customer Service (EE24) for Genesys Engage cloud.
Contents
- 1 What's the challenge?
- 2 What's the solution?
- 3 Use Case Overview
- 4 Use Case Definition
- 5 User Interface & Reporting
- 6 Assumptions
- 7 Related Documentation
Use Case Overview.
Business and Distribution Logic
Business Logic
See the User Manual for search and discovery functionality.
Distribution Logic
N/A
User Interface & Reporting
Customer Interface Requirements
The prerequisite for this use case on PureConnect is Genesys Speech Analytics (EE22).
UConnector for PureConnect is required to utilize Genesys Intelligence Analytics on PureConnect
Languages
Languages currently available include: English, Spanish, German, French, Brazilian Portuguese, Italian, Korean, Japanese, Mandarin, Arabic, Turkish.
Languages in development or on the roadmap include: Cantonese, Dutch, Canadian French.
Check with product team for specific dialects and planned dates for new languages.
Customer Assumptions
Interdependencies
All required, alternate, and optional use cases are listed here, as well as any exceptions.
On-premises Assumptions
Available for Genesys Engage on-premises customers for use with Genesys Interaction Recording and 3rd-party recording solutions.
Available for PureConnect Premise customers, via the "UConnector for PureConnect", which is a custom Professional Services asset.
Cloud Assumptions
Text analytics is not available in Cloud, except for PureConnect Cloud customers.
Related Documentation
Document Version
- v 1.1.4
Sets the hinted handoff throttle in KB per second, per delivery thread; each thread sleeps for an interval after delivering each hint. The interval shrinks proportionally to the number of nodes in the cluster. For example, if there are two nodes in the cluster, each delivery thread uses the maximum interval; if there are three nodes, each node throttles to half of the maximum interval, because the two nodes are expected to deliver hints simultaneously.
Synopsis
nodetool [connection_options] sethintedhandoffthrottlekb [--] value_in_KB/sec
- value_in_KB/sec - The maximum throttle per delivery thread, in kilobytes per second.
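For example, to set the throttle to 2048 KB per second (the host address and value here are purely illustrative):

nodetool -h 192.168.1.50 sethintedhandoffthrottlekb 2048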
3. Vector Data¶
3.1. Overview¶
Vector data provide a way to represent real world features within the GIS environment. A feature is anything you can see on the landscape: houses, roads, trees, rivers and so on (see figure_landscape). Each one of these things would be a feature when we represent them in a GIS Application. Vector features have attributes, which consist of text or numerical information that describe the features.
A vector feature has its shape represented using geometry. The geometry is
made up of one or more interconnected vertices. A vertex describes a position
in space using an X, Y and optionally Z axis. Geometries which have
vertices with a
Z axis are often referred to as 2.5D since they describe
height or depth at each vertex, but not both.
When a feature’s geometry consists of only a single vertex, it is referred to as a point feature (see illustration).
Fig. 3.2 A point feature is described by its X, Y and optionally Z coordinate. The point attributes describe the point e.g. if it is a tree or a lamp post.¶
Fig. 3.3 A polyline is a sequence of joined vertices. Each vertex has an X, Y (and optionally Z) coordinate. Attributes describe the polyline.¶
Fig. 3.4 A polygon feature, like a dam or an island, is described by a closed ring of vertices. Attributes describe the polygon (see also figure_geometry_landscape).¶
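As a concrete illustration, these three geometry types are often written down in well-known text (WKT), a common plain-text notation; the coordinates below are made up:

POINT (30 10)
LINESTRING (30 10, 10 30, 40 40)
POLYGON ((30 10, 40 40, 20 40, 10 20, 30 10))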
3.2. Point features in detail¶
The first thing we need to realise when talking about point features is that what we describe as a point in GIS is a matter of opinion, and often dependent on scale. let’s look at cities for example. If you have a small scale map (which covers a large area), it may make sense to represent a city using a point feature. However as you zoom in to the map, moving towards a larger scale, it makes more sense to show the city limits as a polygon.
When you choose to use points to represent a feature is mostly a matter of scale (how far away are you from the feature), convenience (it takes less time and effort to create point features than polygon features), and the type of feature (some things like telephone poles just don’t make sense to be stored as polygons).
As we show in illustration figure_geometry_point, a point feature has an X, Y and optionally, Z value. The X and Y values will depend on the Coordinate Reference System (CRS) being used. We are going to go into more detail about Coordinate Reference Systems in a later tutorial. For now let’s simply say that a CRS is a way to accurately describe where a particular place is on the earth’s surface. One of the most common reference systems is Longitude and Latitude. Lines of Longitude run from the North Pole to the South Pole. Lines of Latitude run from the East to West. You can describe precisely where you are at any place on the earth by giving someone your Longitude (X) and Latitude (Y). If you make a similar measurement for a tree or a telephone pole and marked it on a map, you will have created a point feature.
Since we know the earth is not flat, it is often useful to add a Z value to a point feature. This describes how high above sea level you are.
3.3. Polyline features in detail¶
Where a point feature is a single vertex, a polyline has two or more vertices. The polyline is a continuous path drawn through each vertex, as shown in figure_geometry_polyline. When two vertices are joined, a line is created. When more than two are joined, they form a ’line of lines’, or polyline.
A polyline is used to show the geometry of linear features such as roads, rivers, contours, footpaths, flight paths and so on. Sometimes we have special rules for polylines in addition to their basic geometry. For example contour lines may touch (e.g. at a cliff face) but should never cross over each other. Similarly, polylines used to store a road network should be connected at intersections. In some GIS applications you can set these special rules for a feature type (e.g. roads) and the GIS will ensure that these polylines always comply to these rules.
If a curved polyline has very large distances between vertices, it may appear angular or jagged, depending on the scale at which it is viewed (see figure_polyline_jagged). Because of this it is important that polylines are digitised (captured into the computer) with distances between vertices that are small enough for the scale at which you want to use the data.
Fig. 3.6 Polylines viewed at a smaller scale (1:20 000 to the left) may appear smooth and curved. When zoomed in to a larger scale (1:500 to the right) polylines may look very angular.¶
The attributes of a polyline describe its properties or characteristics. For example a road polyline may have attributes that describe whether it is surfaced with gravel or tar, how many lanes it has, whether it is a one way street, and so on. The GIS can use these attributes to symbolise the polyline feature with a suitable colour or line style.
3.4. Polygon features in detail¶
Polygon features are enclosed areas like dams, islands, country boundaries and so on. Like polyline features, polygons are created from a series of vertices that are connected with a continuous line. However because a polygon always describes an enclosed area, the first and last vertices should always be at the same place! Polygons often have shared geometry –– boundaries that are in common with a neighbouring polygon. Many GIS applications have the capability to ensure that the boundaries of neighbouring polygons exactly coincide. We will explore this in the Topology topic later in this tutorial.
As with points and polylines, polygons have attributes. The attributes describe each polygon. For example a dam may have attributes for depth and water quality.
3.5. Vector data in layers¶
Now that we have described what vector data is, let’s look at how vector data is managed and used in a GIS environment. Most GIS applications group vector features into layers. Features in a layer have the same geometry type (e.g. they will all be points) and the same kinds of attributes (e.g. information about what species a tree is for a trees layer). For example if you have recorded the positions of all the footpaths in your school, they will usually be stored together on the computer hard disk and shown in the GIS as a single layer. This is convenient because it allows you to hide or show all of the features for that layer in your GIS application with a single mouse click.
3.6. Editing vector data¶
The GIS application will allow you to create and modify the geometry data in a layer –– a process called digitising –– which we will look at more closely in a later tutorial. If a layer contains polygons (e.g. farm dams), the GIS application will only allow you to create new polygons in that layer. Similarly if you want to change the shape of a feature, the application will only allow you to do it if the changed shape is correct. For example it won’t allow you to edit a line in such a way that it has only one vertex –– remember in our discussion of lines above that all lines must have at least two vertices.
Creating and editing vector data is an important function of a GIS since it is one of the main ways in which you can create personal data for things you are interested in. Say for example you are monitoring pollution in a river. You could use the GIS to digitise all outfalls for storm water drains (as point features). You could also digitise the river itself (as a polyline feature). Finally you could take readings of pH levels along the course of the river and digitise the places where you made these readings (as a point layer).
As well as creating your own data, there is a lot of free vector data that you can obtain and use. For example, you can obtain vector data that appears on the 1:50 000 map sheets from the Chief Directorate: Surveys and Mapping.
3.7. Scale and vector data¶
Map scale is an important issue to consider when working with vector data in a GIS. When data is captured, it is usually digitised from existing maps, or by taking information from surveyor records and global positioning system devices. Maps have different scales, so if you import vector data from a map into a GIS environment (for example by digitising paper maps), the digital vector data will have the same scale issues as the original map. This effect can be seen in illustrations figure_vector_small_scale and figure_vector_large_scale. Many issues can arise from making a poor choice of map scale. For example using the vector data in illustration figure_vector_small_scale to plan a wetland conservation area could result in important parts of the wetland being left out of the reserve! On the other hand if you are trying to create a regional map, using data captured at 1:1000 000 might be just fine and will save you a lot of time and effort capturing the data.
3.8. Symbology¶
When you add vector layers to the map view in a GIS application, they will be drawn with random colours and basic symbols. One of the great advantages of using a GIS is that you can create personalised maps very easily. The GIS program will let you choose colours to suite the feature type (e.g. you can tell it to draw a water bodies vector layer in blue). The GIS will also let you adjust the symbol used. So if you have a trees point layer, you can show each tree position with a small picture of a tree, rather than the basic circle marker that the GIS uses when you first load the layer (see illustrations figure_vector_symbology, figure_generic_symbology and figure_custom_symbology).
Fig. 3.9 In the GIS, you can use a panel (like the one above) to adjust how features in your layer should be drawn.¶
3.9. What can we do with vector data in a GIS?¶
At the simplest level we can use vector data in a GIS Application in much the same way you would use a normal topographic map. The real power of GIS starts to show itself when you start to ask questions like ’which houses are within the 100 year flood level of a river?’; ’where is the best place to put a hospital so that it is easily accessible to as many people as possible?’; ’which learners live in a particular suburb?’. A GIS is a great tool for answering these types of questions with the help of vector data. Generally we refer to the process of answering these types of questions as spatial analysis. In later topics of this tutorial we will look at spatial analysis in more detail.
3.10. Common problems with vector data¶
Working with vector data does have some problems. We already mentioned the issues that can arise with vectors captured at different scales. Vector data also needs a lot of work and maintenance to ensure that it is accurate and reliable. Inaccurate vector data can occur when the instruments used to capture the data are not properly set up, when the people capturing the data aren’t being careful, when time or money don’t allow for enough detail in the collection process, and so on.
If you have poor quality vector data, you can often detect this when viewing the data in a GIS. For example slivers can occur when the edges of two polygon areas don’t meet properly (see figure_vector_slivers).
Fig. 3.12 Slivers occur when the vertices of two polygons do not match up on their borders. At a small scale (e.g. 1 on left) you may not be able to see these errors. At a large scale they are visible as thin strips between two polygons (2 on right).¶.
Fig. 3.13 Undershoots occur when digitised lines that should connect do not quite touch; overshoots occur when a line ends beyond the line it should connect to.¶
3.11. What have we learned?¶
Vector data represents real world features as point, polyline and polygon geometries with attributes (see figure_vector_summary).
3.12. Now it's your turn to try!¶
Here are some ideas for you to try with your learners:
Using a copy of a toposheet map for your local area (like the one shown in figure_sample_map), see if your learners can identify examples of the different types of vector data by finding them on the map.
Contents
- Further information
Niagara
System architecture
- Total of 60,000 Intel x86-64 cores.
- 1,500 Lenovo SD530 nodes
2x Intel Skylake 6148 CPUs (40 cores @2.4GHz per node).
(with hyperthreading to 80 threads & AVX512)
3.02 PFlops delivered / 4.6 PFlops theoretical.
(would've been #42 on the TOP500 in Nov'18)
188 GiB / 202 GB RAM per node (at least 4 GiB/core for user jobs).
- Operating system: Linux (CentOS 7).
- Interconnect: EDR InfiniBand, Dragonfly+ topology with Adaptive Routing
No GPUs, no local disk.
Replaces the General Purpose Cluster (GPC) and Tightly Coupled System (TCS).
1:1 up to 432 nodes, effectively 2:1 beyond that...
Using Niagara: Logging in
As with all SciNet and CC (Compute Canada) systems, access to Niagara is via ssh only.
Storage Systems and Locations
Home and scratch
You have a home and scratch directory on the system, whose locations will be given by
$HOME=/home/g/groupname/myccusername
$SCRATCH=/scratch/g/groupname/myccusername
nia-login07:~$ pwd
/home/s/scinet/rzon
nia-login07:~$ cd $SCRATCH
nia-login07:rzon$ pwd
/scratch/s/scinet/rzon
Project location
Users from groups with a RAC allocation will also have a project directory.
$PROJECT=/project/g/groupname/myccusername
IMPORTANT: Future-proof your scripts
Use the environment variables instead of the actual paths!
Storage Limits on Niagara
- Compute nodes do not have local storage.
- Archive space is on HPSS.
- Backup means a recent snapshot, not an archive of all data that ever was.
- $BBUFFER stands for the Burst Buffer, a functionality that is still being set up; it will provide fast temporary storage to speed up I/O-heavy jobs and checkpoints.
- From a Niagara login node, ssh to a nia-datamover node to move large amounts of data to or from the system.
- Project space is allocated and controlled through the annual RAC allocation.
Software and Libraries
Modules
Once you are on one of the login nodes, what software is already installed?
- Other than essentials, all software installed using module commands.
- sets environment variables (PATH, etc.)
- Allows multiple, conflicting versions of a package to be available.
- module spider shows available software.
- Loading a module without a version, e.g. intel → intel/2018.2, gives the default version.
It is usually better to be explicit about the versions, for future reproducibility.
Handy abbreviations:
ml → module list
ml NAME → module load NAME
Can I Run Commercial Software?
- Possibly, but you have to bring your own license for it.
- SciNet and Compute Canada have an extremely large and broad user base of thousands of users, so we cannot provide licenses for everyone's favorite software.
- Thus, the only commercial software installed on Niagara is that which can benefit everyone: compilers, math libraries and debuggers.
- Example run commands (a minimal job script sketch follows this list):
srun ./openmp_example   # Just "./openmp_example" works too.
srun ./mpi_example
- Burst buffer (to come) is better for i/o heavy jobs and to speed up checkpoints.
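A minimal Slurm job script consistent with the srun lines above might look like this sketch (the module version, job name and runtime are placeholders, and an MPI module would also be needed for the MPI example):

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks=40          # Niagara nodes have 40 cores
#SBATCH --time=1:00:00
#SBATCH --job-name=mpi_example

module load intel/2018.2     # plus your MPI module of choice
srun ./mpi_example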
Further information
Useful sites
- SciNet:
- Niagara:
- System Status:
- Training:
Support
- [email protected]
- [email protected]
Prague is probably the most popular destination in Central Europe. It’s a great city filled with spectacular vistas and bridges and churches. It’s the city of a hundred spires, packed to the brim with Medieval churches, a castle, cobblestone paths. It’s very walkable, and the beer is cheaper than water (it basically is water?) . It’s a european city, so there’s also a thriving red light district and sex museums–fun for everyone, question mark. What do you expect, it was the capital of Bohemia at one point.
Praha is a classic destination, with art, culture and architecture abound. I spent a much too short 24 hours here, and while there are other cities I prefer, this is certainly a must-see for anyone traipsing around Europe looking for a good time.
The devil wears praha
Start your Bohemian adventure at the Old Town Square. This central plaza features the famous astronomical clock, which dates back to 1410. The clock of course has zodiac elements, astronomical elements of the sun and the moon and moving figures that put on a show every hour. It’s currently being renovated, but hopefully it will be back in action soon.
You’ll want to visit Prague Castle, apparently the largest ancient castle in the world. Of the Baroque style, this formidable structure looks out over the kingdom from a hill. This is where you’re going to capture the recognizable view of hundreds of orange roofs for as far as you can see.
The castle serves as the President of the Czech Republic’s home still, and also houses the crown jewels, hidden somewhere in the depths of the castle. Make sure you plan ahead and arrive in time for the changing of the guard, at noon daily.
St. Vitus Cathedral is a Gothic style cathedral that houses remains of many Czech Kings and Roman Emperors.
On your way back towards the Vltava River, you'll pass the Franz Kafka Museum dedicated to the prolific writer. We read "the Metamorphosis" in 7th grade, and this guy was emo before emo existed. Themes common to his writing are alienation/being an outsider, anxiety and existentialism — with fantastical events woven into the story.
If you have the time, you should take a one hour river cruise, and float below the stone bridges while seeing the sights.
Then of course, stroll on the said Charles stone bridge and see the sculptures and neo-gothic architectural details (like the base of the Charles IV statue, this scene represents medicine)
And if you’re looking for extra luck, there’s a legend behind the St. John of Nepomuk statue. If you touch the falling priest (who was sentenced to death by King Wencelas– good king question mark? –for not disclosing the queen’s confessions), it’s supposed to make a wish come true. There’s also shiny spots on the dog and the queen, and since I touched the dog I guess that’s why I’m not very lucky. Shrugs.
Oh well, next time –if I'm lucky enough to return I'll already count it as a win. Ahoj!
Module jdk.accessibility
Package com.sun.java.accessibility.util
Interface GUIInitializedListener
- All Superinterfaces:
EventListener
public interface GUIInitializedListener extends EventListener
The GUIInitializedListener interface is used by the EventQueueMonitor class to notify an interested party when the GUI subsystem has been initialized. This is necessary because assistive technologies can be loaded before the GUI subsystem is initialized. As a result, assistive technologies should check the isGUIInitialized method of EventQueueMonitor before creating any GUI components. If the return value is true, assistive technologies can create GUI components following the same thread restrictions as any other application. If the return value is false, the assistive technology should register a GUIInitializedListener with the EventQueueMonitor to be notified when the GUI subsystem is initialized.
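A small Java sketch of the pattern described above (the class name and method bodies are illustrative):

import com.sun.java.accessibility.util.EventQueueMonitor;
import com.sun.java.accessibility.util.GUIInitializedListener;

public class MyAssistiveTechnology implements GUIInitializedListener {

    public void start() {
        if (EventQueueMonitor.isGUIInitialized()) {
            createGUI();                                        // safe to build components now
        } else {
            EventQueueMonitor.addGUIInitializedListener(this);  // wait for the GUI subsystem
        }
    }

    @Override
    public void guiInitialized() {
        createGUI();  // called once the GUI subsystem is up
    }

    private void createGUI() {
        // create GUI components here, following the usual threading rules
    }
}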
Project Overview: Application Design
The remainder of Part III of the tutorial consists of a series of instructions and exercises that guide you through the completion of a simple contact management system. The system allows users to view and update contact information (contacts and their phone numbers) stored in Caché. The application consists of a .NET Windows Form connected to a Caché database using ADO.NET and the relational interface to the CMP. The database schema and the Windows form have been provided for you. You will only be adding the code that connects the form to Caché.
The application uses ADO.NET's “Disconnected” approach. Here is how it works:
The application creates a DataSet object that mimics the layout of the Caché tables.
CacheDataAdapter objects connect to Caché and fill the DataSet with data. There are two data sources: the Contact table and the PhoneNumber table.
Responding to user requests, the application makes changes to the DataSet data.
The DataAdapter objects reconnect to Caché and both update the database and refresh the DataSet data.
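A hedged C# sketch of this disconnected pattern (the connection string, schema and table names below are assumptions for illustration, not the tutorial's exact values):

using System.Data;
using InterSystems.Data.CacheClient;

var conn = new CacheConnection("Server=localhost; Port=1972; Namespace=USER; User ID=_SYSTEM; Password=SYS");

var contactAdapter = new CacheDataAdapter("SELECT * FROM Provider.Contact", conn);
var phoneAdapter   = new CacheDataAdapter("SELECT * FROM Provider.PhoneNumber", conn);

var ds = new DataSet();
contactAdapter.Fill(ds, "Contact");        // step 2: fill the DataSet from Caché
phoneAdapter.Fill(ds, "PhoneNumber");

// step 3: the application edits the DataSet while disconnected ...

var builder = new CacheCommandBuilder(contactAdapter);  // generates INSERT/UPDATE/DELETE commands
contactAdapter.Update(ds, "Contact");      // step 4: reconnect, write changes back, refresh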
The tutorial provides detailed instructions for each of these steps.
Upgrade the PAN-OS Software Version Using Panorama
Follow these steps to upgrade your Panorama-managed firewalls to PAN-OS 9.1. Upgrade Panorama and its Log Collectors to 9.1 before upgrading the managed firewalls to this version. In addition, when upgrading Log Collectors to 9.1, you must upgrade all Log Collectors at the same time due to changes in the logging infrastructure.
- Plan for an extended maintenance window of up to six hours when upgrading Panorama to 9.1. This release includes significant infrastructure changes, which means that the Panorama upgrade will take longer than in previous releases.
- Ensure that firewalls are connected to a reliable power source. A loss of power during an upgrade can make a firewall unusable.
- After upgrading Panorama, commit and push the configuration to the firewalls you are planning to upgrade.
- Ensure that the firewalls are running the content release version required for PAN-OS 9.1. Make sure to follow the Best Practices for Application and Threat Content Updates.
- Download the target PAN-OS.
- Install the PAN-OS software update on the firewalls.
- Click Install in the Action column that corresponds to the firewall models you want to upgrade.
- Select Device > High Availability > Operational Commands > Suspend local device.
In this section:
Introduction Data File (.bdf)
Build information, such as the working directory, command line options for the compilation, and link processes of the original build, are stored in a file called the build data file. The following example is a fragment from a build data file:
------- cpptestscan v. 9.4.x.x ------- working_dir=/home/place/project/hypnos/pscom project_name=pscom arg=g++ arg=-c arg=src/io/Path.cc arg=-Iinclude arg=-I. arg=-o arg=/home/place/project/hypnos/product/pscom/shared/io/Path.o.
Note
When the build runs in multiple directories:
- If you do not specify output file, then each source build directory will have its own .bdf file. This is good for creating one project per source directory.
- If you want a single project per source tree, then a single .bdf file needs to be specified, as shown in the above example.
Note
If the compiler and/or linker executable names do not match default
cpptesttrace command patterns, they you will need to use
--cpptesttraceTraceCommand option described below to customize them. Default cpptestscan command trace patterns can be seen by running 'cpptesttrace --cpptesttraceHelp' command..
Note
The
cpptestscan and
cpptesttrace utilities can be used in the parallel build systems where multiple compiler executions can be done concurrently. When preparing Build Data File on the multicore machine, for example, you can pass the
-j <number_of_parallel_jobs> parameter to the GNU make command to build your project and quickly prepare the Build Data File.
When should I use cpptestscan?
It is highly recommended that the procedures to prepare a build data file are integrated with the build system. In this way, generating the build data file can be done when the normal build is performed without additional actions.
To achieve this, prefix your compiler and linker executables with the cpptestscan utility in your Makefiles / build scripts.
When should I use cpptesttrace?
Use cpptesttrace as the prefix for the whole build command when modifying your Makefiles / build scripts isn't possible or when prefixing your compiler / linker executables from the build command line is too complex.
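For example (an illustrative sketch; adjust the compiler names and job count to your own build):

# Option 1: prefix the compiler/linker inside a GNU make based build
CXX := cpptestscan g++
CC  := cpptestscan gcc

# Option 2: wrap the whole (parallel) build command without editing the Makefiles
cpptesttrace make -j 8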
Integrating with Integrately
In this article, we will explain how to send your leads to Integrately.
Step 1: Log in to Integrately and Select Swipe Pages.
Step 2: Now select another app in which you want to send the leads from Swipe Pages. In the article, we will be creating a new contact in Convert Kit when a form is submitted in Swipe Pages. Please note the process will similar to other apps as well.
Step 3: You can select the 1 click automation if available or create your own if possible.
In this article, we will use the 1 click automation.
Step 5: Choose 1 click automation and select Check It Now once the automation is ready.
Step 6: Select Add Connection under Swipe Pages.
Step 7: Copy the Webhook URL and tick the checkbox for I have sent a test record AFTER Setting URL in Swipe Pages.
Step 8: Log in to Swipe Pages and select the landing page from which you want to send leads.
Step 9: Proceed to the Integration Tab and select Webhooks.
Step 10: Add a name for your Webhook and Paste the Incoming Webhook Url which was copied in Step 7 and select Continue.
Step 11: Select Post as the method and select Continue. Proceed to select Continue in next step Fields as well.
Step 12: Add Additional Fields if required and select Continue.
Step 13: Proceed to your Integrately Account and Select Test Connection.
Step 14: Proceed to your published page and submit the form. These details will be captured in Integrately as shown below.
Step 15: Select Add Connection for Convert Kit (Please note you can add any apps which are available in Integrately).
Step 16: Add your API Secret from Convert Kit and Select the Convert Kit Account from Select Connection.
Step 17: Map the Webhook response received in Swipe Pages with Convert Kit.
Step 18: Select Test & Go Live.
That's it! You have now connected Swipe Pages with Integrately. The lead details will be sent to Convert Kit via Integrately.
Please note that you can follow a similar process to the one described in this doc to integrate Swipe Pages with the other apps available in Integrately. For more information regarding Integrately, please check their documentation.
Create a Bot
When a customer chats in through TileDesk, you may not always be around to respond. Luckily, you can configure a Bot to help you and then activate it in a Department.
To create a Bot go to the dashboard sidebar and click Settings > Bots.
Then use the ADD BOT button to create the bot.
Choose the bot type
Enter the name to identify your bot and optionally the description.
Native Bot
If you have selected Native Bot you can decide to add the FAQs or return to the bots list and add the FAQs later.
You can enter the FAQs manually or upload them via csv and then run test queries.
Use the ADD FAQ button to manually enter a FAQ.
Fill in the “Question” and “Answer” fields and create the new FAQ.
You can quickly enter a list of “FAQs” via CSV.
Use the “UPLOAD FAQs” button. Enter the symbol used to separate the columns and load the .CSV file.
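For example, assuming a simple question-and-answer layout with a semicolon separator, the file might look like this (the questions and answers are made up):

How do I reset my password?;Click "Forgot password" on the login page and follow the emailed link.
What are your opening hours?;Our agents answer chats Monday to Friday, 9am to 6pm.
Do you offer refunds?;Yes, within 30 days of purchase.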
Once you have uploaded your “FAQs” you can proceed to test them. Click the “BOT QUERY” button.
Enter your question and press “RUN TEST” to display the answer FAQ.
Chat bot routing rules
Now you can easily setting up the routing rules of your bot. Follow this instructions. | https://docs.tiledesk.com/knowledge-base/create-a-bot/ | 2020-11-24T03:31:56 | CC-MAIN-2020-50 | 1606141171077.4 | [array(['https://i1.wp.com/docs.tiledesk.com/wp-content/uploads/2020/09/create-a-bot.png?resize=1024%2C396&ssl=1',
None], dtype=object)
array(['https://i2.wp.com/docs.tiledesk.com/wp-content/uploads/2020/09/add-a-bot-button.png?resize=766%2C289&ssl=1',
None], dtype=object)
array(['https://i2.wp.com/docs.tiledesk.com/wp-content/uploads/2020/09/image-2.png?resize=1024%2C325&ssl=1',
None], dtype=object)
array(['https://i0.wp.com/docs.tiledesk.com/wp-content/uploads/2020/09/bot-name.png?resize=776%2C320&ssl=1',
None], dtype=object)
array(['https://i1.wp.com/docs.tiledesk.com/wp-content/uploads/2020/09/add-Faqs-option.png?resize=787%2C449&ssl=1',
None], dtype=object)
array(['https://i1.wp.com/docs.tiledesk.com/wp-content/uploads/2020/09/bot-dashboard-view.png?resize=1024%2C487&ssl=1',
None], dtype=object)
array(['https://i1.wp.com/docs.tiledesk.com/wp-content/uploads/2020/09/add-faq.png?resize=560%2C190&ssl=1',
None], dtype=object)
array(['https://i1.wp.com/docs.tiledesk.com/wp-content/uploads/2020/09/question-aswer-fields.png?resize=1024%2C483&ssl=1',
None], dtype=object)
array(['https://i0.wp.com/docs.tiledesk.com/wp-content/uploads/2020/09/upload-faq-450x135-1.png?resize=450%2C135&ssl=1',
None], dtype=object)
array(['https://i0.wp.com/docs.tiledesk.com/wp-content/uploads/2020/09/import-csv-file-1.png?resize=770%2C429&ssl=1',
None], dtype=object)
array(['https://i0.wp.com/docs.tiledesk.com/wp-content/uploads/2020/09/test-bot-query.png?resize=556%2C182&ssl=1',
None], dtype=object)
array(['https://i1.wp.com/docs.tiledesk.com/wp-content/uploads/2020/09/bot-test-interface.png?resize=912%2C537&ssl=1',
None], dtype=object) ] | docs.tiledesk.com |
network_connections
Prototype:
network_connections()
Description: Return the list of current network connections.
The returned data container has four keys:
tcp has all the TCP connections over IPv4
tcp6 has all the TCP connections over IPv6
udp has all the UDP connections over IPv4
udp6 has all the UDP connections over IPv6
Under each key, there's an array of connection objects that all look like this:
{ "local": { "address": "...source address...", "port": "...source port..." }, "remote": { "address": "...remote address...", "port": "...remote port..." }, "state": "...connection state..." }
The address will be either IPv4 or IPv6 as appropriate. The port will
be an integer stored as a string. The state will be a string like
UNKNOWN.
Note: This function is supported on Linux.
On Linux, usually a state of
UNKNOWN and a remote address
0.0.0.0
or
0:0:0:0:0:0:0:0 with port
0 mean this is a listening IPv4 and
IPv6 server. In addition, usually a local address of
0.0.0.0 or
0:0:0:0:0:0:0:0 means the server is listening on every IPv4 or IPv6
interface, while
127.0.0.1 (the IPv4 localhost address) or
0:100:0:0:0:0:0:0 means the server is only listening to connections
coming from the same machine.
A state of
ESTABLISHED usually means you're looking at a live
connection.
On Linux, all the data is collected from the files
/proc/net/tcp,
/proc/net/tcp6,
/proc/net/udp, and
/proc/net/udp6.
Example:
vars: "connections" data => network_connections();
Output:
The SSH daemon:
{ "tcp": [ { "local": { "address": "0.0.0.0", "port": "22" }, "remote": { "address": "0.0.0.0", "port": "0" }, "state": "UNKNOWN" } ] }
The printer daemon listening only to local IPv6 connections on port
631:
"tcp6": [ { "local": { "address": "0:100:0:0:0:0:0:0", "port": "631" }, "remote": { "address": "0:0:0:0:0:0:0:0", "port": "0" }, "state": "UNKNOWN" } ]
An established connection on port 2200:
"tcp": [ { "local": { "address": "192.168.1.33", "port": "2200" }, "remote": { "address": "1.2.3.4", "port": "8533" }, "state": "ESTABLISHED" } ]
History: Introduced in CFEngine 3.9
See also:
sys.inet,
sys.inet6.
A Guide On How to Maximize the Effect of CBD. To maximize and determine your CBD dosage, you need to look at these key points that we are going to discuss in this article.
Starting low is another good method of determining and maximizing the CBD dosage. After a few weeks, you need to gradually increase the small dosage of CBD that you started with. The last factor to consider when maximizing the usage of CBD is its concentration, click here for more info. CBD products with higher concentration are stronger, therefore beginners will need to take less than those who normally use them. To summarize, those are the factors that you need to consider when maximizing the usage of CBD products.
Supporting reference: get more
Things to Note When Choosing an Electric Company
Choosing an electric company may not be easy, therefore this website here! Check it out! to learn more about this site. Learning more is an ideal thing. That is why you should check this website.
The best thing that you have to consider is doing research at any time of the day that you may be choosing the best company. This is an ideal thing because different companies may be doing well in different areas, therefore, you will have it in a better state at any time. You should not always go for th4e first company that you come across being that there vare several of them in the market. The best thing that you have to ensure is that you are choosing the best company by doing the research. Research can be done by considering the popularity of the company that you may like to choose at any time. You have to choose the company that is giving the best services at any time of the day. You will, therefore, get it easy is that you will do anything at any time. The best thing is that you have to do better research at any time of the day.
You should make sure that you have a quote from he selected company at any time of the day. This will, therefore, make the budgeting easy at any time of the day. It is a better since you will know the rate of electricity that you may be using at any time of the day. This will then make you in a place that you will have to consider several things being that you will have a lot of things that you may like to consider at any time.. This is an important thing with a reason being that different companies give different services at any time of the day. Therefore you have to make sure that you are selecting that company with a better reputation at any time. The reason is that some may be enjoying the services while others may not be enjoying the services. The best thing that you have to do is to make sure that you choose a better company at any time of the day. | http://docs-prints.com/2020/09/30/on-my-experience-explained-2/ | 2020-11-24T03:07:04 | CC-MAIN-2020-50 | 1606141171077.4 | [] | docs-prints.com |
Configuring OSPF
The NetScaler supports Open Shortest Path First (OSPF) Version 2 (RFC 2328). The features of OSPF on the NetScaler are:
- If a vserver is active, the host routes to the vserver can be injected into the routing protocols.
- OSPF can run on any subnet.
- Route learning advertised by neighboring OSPF routers can be disabled on the NetScaler.
- The NetScaler can advertise Type-1 or Type-2 external metrics for all routes.
- The NetScaler can advertise user-specified metric settings for VIP routes. For example, you can configure a metric per VIP without special route maps.
- You can specify the OSPF area ID for the NetScaler.
- The NetScaler supports not-so-stubby-areas (NSSAs). An NSSA is similar to an OSPF stub area but allows injection of external routes in a limited fashion into the stub area. To support NSSAs, a new option bit (the N bit) and a new type (Type 7) of Link State Advertisement (LSA) area have been defined. Type 7 LSAs support external route information within an NSSA. An NSSA area border router (ABR) translates a type 7 LSA into a type 5 LSA that is propagated into the OSPF domain. The OSPF specification defines only the following general classes of area configuration:
- Type 5 LSA: Originated by routers internal to the area and flooded into the domain by AS border routers (ASBRs).
- Stub: Allows no type 5 LSAs to be propagated into/throughout the area and instead depends on default routing to external destinations.
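For example, a hedged VTYSH sketch for flagging an area as an NSSA (the area ID is illustrative):

>VTYSH
NS# configure terminal
NS(config)# router OSPF
NS(config-router)# area 1 nssa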
After enabling OSPF, you need to configure advertisement of OSPF routes. For troubleshooting, you can limit OSPF propagation. You can display OSPF settings to verify the configuration.
Enabling and Disabling OSPF
To enable or disable OSPF, you must use either the NetScaler command line or the NetScaler GUI. When OSPF is enabled, the NetScaler starts the OSPF process. When OSPF is disabled, the NetScaler stops the OSPF routing process.
To enable or disable OSPF routing by using the NetScaler command line:
At the command prompt, type one of the following commands:
enable ns feature OSPF
disable ns feature OSPF
To enable or disable OSPF routing by using the NetScaler GUI:
- Navigate to System > Settings, in Modes and Features group, click Change advanced features.
- Select or clear the OSPF Routing option.
Advertising OSPF Routes
OSPF enables an upstream router to load balance traffic between two identical virtual servers hosted on two standalone NetScaler appliances. Route advertising enables an upstream router to track network entities located behind the NetScaler.
To configure OSPF to advertise routes by using the VTYSH command line:
At the command prompt, type the following commands, in the order shown:
Example:
>VTYSH
NS# configure terminal
NS(config)# router OSPF
NS(config-router)# network 10.102.29.0/24 area 0
NS(config-router)# redistribute static
NS(config-router)# redistribute kernel
Limiting OSPF Propagations
If you need to troubleshoot your configuration, you can configure listen-only mode on any given VLAN.
To limit OSPF propagation by using the VTYSH command line:
At the command prompt, type the following commands, in the order shown:
Example:
>VTYSH
NS# configure terminal
NS(config)# router OSPF
NS(config-router)# passive-interface VLAN0
Verifying the OSPF Configuration
You can display current OSPF neighbors, and OSPF routes.
To view the OSPF settings by using the VTYSH command line:
At the command prompt, type the following commands, in the order shown:
Example:
>VTYSH
NS# sh ip OSPF neighbor
NS# sh ip OSPF route
Configuring Graceful Restart for OSPF
In a non-INC high availability (HA) setup in which a routing protocol is configured, after a failover, routing protocols are converged and routes between the new primary node and the adjacent neighbor routers are learned. Route learning takes some time to complete. During this time, forwarding of packets is delayed, network performance might get disrupted, and packets might get dropped.
Graceful restart enables an HA setup during a failover to direct its adjacent routers to not remove the old primary node’s learned routes from their routing databases. Using the old primary node’s routing information, the new primary node and the adjacent routers immediately start forwarding packets, without disrupting network performance.
To configure graceful restart for OSPF by using the VTYSH command line, at the command prompt, type the following commands, in the order shown:
A stock item keeps the available inventory of an SKU in a given stock location. When you create a line item, the associated SKU must be available in one of the market's stock locations. When you place an order, the stock item quantities get decremented. When an order is canceled, or a return is approved, the stock item quantities get incremented.
A stock item object is returned as part of the response body of each successful create, list, retrieve, or update API call.
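For illustration only, a stock item in an API response might look roughly like the sketch below (the field names and values are assumptions, not the authoritative schema; check the resource reference for the exact shape):

{
  "data": {
    "id": "xYZkjABcde",
    "type": "stock_items",
    "attributes": {
      "sku_code": "TSHIRTMM000000FFFFFFXL",
      "quantity": 100
    },
    "relationships": {
      "stock_location": { "data": { "type": "stock_locations", "id": "..." } },
      "sku": { "data": { "type": "skus", "id": "..." } }
    }
  }
}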
Documents
Forms and documents from Rapid7 Services on the Insight Platform are both pivotal to your onboarding process, though they are slightly different. Documents are downloadable and open in a different application (such as Word, Notepad, etc).
During the onboarding process, you may need to share service documents with the Rapid7 Team. To do this, select the "View Service Documents" step.
A peek panel will appear on the right, where you can see what documents are already present. Select View More Documents to go to the Document repository.
You can Manage Documents from the Rapid7 Team or ones that you upload.
You can upload documents up to 5GB in size.
Creating NSX-T Objects.
Installing VMware Enterprise PKS on vSphere with NSX-T requires the creation of NSX IP blocks for Kubernetes node and pod networks, as well as a Floating IP Pool from which you can assign routable IP addresses to cluster resources.
Create separate NSX-T IP Blocks for the node networks and the pod networks. The subnets for both nodes and pods should have a size of 256 (/16). For more information, see Plan IP Blocks and Reserved IP Blocks. For more information about NSX-T IP Blocks, see Advanced IP Address Management in the VMware NSX-T Data Center documentation.
- NODE-IP-BLOCK is used by Enterprise PKS to assign address space to Kubernetes master and worker nodes when new clusters are deployed or a cluster increases its scale.
- POD-IP-BLOCK is used by the NSX-T Container Plug-in (NCP) to assign address space to Kubernetes pods through the Container Networking Interface (CNI).
In addition, create a Floating IP Pool from which to assign routable IP addresses to components. This network provides your load balancing address space for each Kubernetes cluster created by Enterprise PKS. The network also provides IP addresses for Kubernetes API access and Kubernetes exposed services. For example,
10.172.2.0/24 provides 256 usable IPs. This network is used when creating the virtual IP pools, or when the services are deployed. You enter this network in the Floating IP Pool ID field in the Networking pane of the Enterprise PKS tile.
Complete the following instructions to create the required NSX-T network objects.
Create the Nodes IP Block
In NSX Manager, go to Advanced Networking & Security > Networking > IPAM.
Add a new IP Block for Kubernetes Nodes. For example:
- Name: NODES-IP-BLOCK
- CIDR: 192.168.0.0/16
Verify creation of the Nodes IP Block.
Record the UUID of the Nodes IP Block object. You use this UUID when you install Enterprise PKS with NSX-T.
Create the Pods IP Block
In NSX Manager, go to Advanced Networking & Security > Networking > IPAM.
Add a new IP Block for Pods. For example:
- Name: PKS-PODS-IP-BLOCK
- CIDR: 172.16.0.0/16
Verify creation of the Pods IP Block.
Record the UUID of the Pods IP Block object. You use this UUID when you install Enterprise PKS with NSX-T.
Create Floating IP Pool
In NSX Manager, go to Advanced Networking & Security > Inventory > Groups > IP Pool.
Add a new Floating IP Pool. For example:
- Name: PKS-FLOATING-IP-POOL
- IP Ranges: 10.40.14.10 - 10.40.14.253
- Gateway: 10.40.14.254
- CIDR: 10.40.14.0/24
Verify creation of the Floating IP Pool.
Get the UUID of the Floating IP Pool object. You use this UUID when you install Enterprise PKS with NSX-T.
Next Step
After you complete this procedure, follow the instructions in Installing Enterprise PKS on vSphere with NSX-T.
Please send any feedback you have to [email protected].
Genesys Schedule-based Routing (EE04) for Genesys Engage on premises
Contents
What's the challenge?Ensuring that employees adhere to their schedules is a headache for contact center leaders. When staff are late taking breaks or starting different scheduled work, it impacts your service levels, your sales revenues and your costs.
What's the solution?Routing interactions based on your workforce management schedules and staff skills can help ensure a better balanced workload for employees and improved schedule adherence.
Story and Business Context
Enrich any of the existing use cases handling inbound interactions with the ability to route calls based on WFM schedules. Doing so can help ensure a more-balanced multi-skill workload for agents and improvement in schedule adherence. Routing strategies can route based on the anticipated availability of an agent. For example, interactions are not routed to agents immediately before they are scheduled for a break or a meeting. This improves agent adherence and leads to better customer service and worker efficiency.
Use Case Benefits
Summary
Schedule-based routing is a powerful tool enabling contact centers to optimize employee satisfaction while reducing attrition and unnecessary overtime, all while providing better coverage.
Use Case Definition
Business Flow
Business Flow Description
- The customer contacts the company by one of the following channels:
- Voice
- SMS
- Social
- *Alternatively, a new task may be created by a 3rd party source system for distribution by the Genesys system
- One of the use cases for the corresponding channel, processes the call and determines the skill profile required to handle the interaction.
- The skill profile is matched with the corresponding activity in WFM
- Genesys will identify the agents, which are currently scheduled to work on this activity. Cut off times will be taken into account, i.e. an agent shortly before his break will not receive an interaction which usually has a long average handling time.
- Genesys will check if one of these agents is available. If yes, it will distribute the interaction to this agent
- If no, Genesys will queue the call until one of these agents becomes available or a time out is reached
- If the time out is reached, the distribution logic will continue with skill-based routing and subsequent target expansions as defined in the underlying use case.
For more details
For additional details, contact your Genesys Sales Representative by filling out the form or for immediate assistance call us: 1-888-Genesys.
Rapid7 Universal VPN
If Rapid7 does not support the logging format of your VPN solution, you can still send data into InsightIDR’s User Behavior Analytics engine so long as you transform your logs to meet this universal event format (UEF) contract.
The Universal VPN event source supports multiple event types:
- VPN_SESSION_IP_ASSIGNED
- VPN_SESSION_TERMINATION
Need help transforming your logs?
Read instructions on transforming your logs in this Rapid7 blog post or on the Transform Logs to UEF help page.
Required Fields for VPN_SESSION_IP_ASSIGNED
When you initiate a VPN session, you must send InsightIDR a VPN_SESSION_IP_ASSIGNED event with the particular IP address assigned to the account.
Do not send VPN session events from devices where multiple user accounts use a single IP.
InsightIDR only supports monitoring VPN sessions when the assigned IP is fixed to a single user throughout their VPN session, and then returned to a pool when the VPN session terminates. If this is not followed, it will lead to unexpected behaviors and detections in the product.
Required Fields for VPN_SESSION_IP_ASSIGNED with Ingress
You can optionally send the following additional fields with the VPN_SESSION_IP_ASSIGNED event, which enables InsightIDR to detect and visualize ingress activity from this event:
- source_ip
- authentication_target
- authentication_result
If all three fields are present and valid (in addition to the regular VPN fields), the Universal VPN event source will also interpret a Universal VPN Event as both a VPN action and ingress activity as defined in the Rapid7 Universal Ingress Authentication document.
If one or more of these three fields are present but missing data, InsightIDR will drop the VPN and Ingress event entirely.
All optional fields must be present and valid in order for the VPN and Ingress event in order to be accepted.
Required Fields for VPN_SESSION_TERMINATION
When you terminate a VPN session, you must send a VPN_SESSION_TERMINATION event with the account whose session is being terminated. Once the session has been terminated, InsightIDR will no longer attribute activity to the user based on IP address that was assigned when the session was initiated.
If your log does not include the necessary info to produce VPN_SESSION_TERMINATION events, the assigned IP will be transferred over to the new session when the IP is observed by another VPN_SESSION_IP_ASSIGNED event.
Example Format
You must send events to the InsightIDR collector in UTF-8 format, with each log line representing a single event, and a newline delimiting each event.
For example:
{"event_type":"VPN_SESSION_IP_ASSIGNED","version":"v1","time":"2018-06-07T18:18:31.123Z","account":"jdoe","assigned_ip":"10.6.100.40","source_ip":"33.5.45.40","authentication_result":"SUCCESS","authentication_target":"Boston Office VPN"}
Each event sent to InsightIDR must not contain newline characters; InsightIDR only permits newlines that delimit separate Universal Events.
Here are some examples of Universal VPN Events with readable formatting:
1{2"event_type": "VPN_SESSION_IP_ASSIGNED",3"version": "v1",4"time": "2018-06-07T18:18:31.123Z",5"account": "jdoe",6"assigned_ip": "10.6.100.40",7"source_ip": "33.5.100.40",8"authentication_result": "SUCCESS",9"authentication_target": "Boston Office VPN"10}
Or:
1{2"event_type": "VPN_SESSION_IP_ASSIGNED",3"version": "v1",4"time": "2018-06-07T18:18:31.123Z",5"account": "jdoe",6"account_domain": "CORP",7"assigned_ip": "10.6.100.40"8}
Or:
1{2"event_type": "VPN_SESSION_IP_ASSIGNED",3"version": "v1",4"time": "2018-06-07T18:18:31.123Z",5"account": "jdoe",6"account_domain": "CORP",7"assigned_ip": "10.6.100.40",8"custom_data": {9"key":"value"10}11}12
Or:
12{3"event_type": "VPN_SESSION_TERMINATION",4"version": "v1",5"time": "2018-06-07T18:18:31.123Z",6"account": "jdoe",7"account_domain": "CORP"8} | https://docs.rapid7.com/insightidr/rapid7-universal-vpn/ | 2020-11-24T03:58:53 | CC-MAIN-2020-50 | 1606141171077.4 | [] | docs.rapid7.com |
Importing Complete Workflow Packs
When your export file contains a complete workflow pack and this file was marked as complete during the creation, you will be prompted to overwrite your existing business logic configuration with the new one from the file.
INFO: For details, see Complete Workflow Packs and Complete Output Files.
If you want to replace your existing configuration, choose the overwrite option. The import will first delete all business logic configuration for the object class and then add the new configuration from the file. If your database already contains security roles, the system will attempt to assign imported actions
If you do not want to replace your existing configuration, the import will simply add new settings to you existing configuration and will not affect your security role assignments, just as it does when you are using a regular export file.
INFO: For details, see Importing Configuration Settings .
To import a workflow pack from a complete export file:
- Click Import Settings on the Standard toolbar to bring up the Import Configuration From dialog box.
- Browse for the export file you want to import and click Open. The Import Configuration Settings dialog box opens.
- Review the full file name of the export file, the version of the Alloy Navigator application it was generated with, and the description of the file content.
NOTE: We recommend that you do not attempt to import settings from an earlier version of Alloy Navigator than the one you are running.
- If you want to preview the file content, click the Show File Content button to bring up the File Content dialog box. The Content section displays the Settings nodes to which the settings will be imported. The number of items to be imported is shown in parentheses next to the node name.
- If you want to overwrite the existing business logic configuration with the new one from the file, select the Overwrite the existing workflow for [Object Class] check box. Otherwise, leave this check box clear and proceed to Import step.
- Under Backup, leave the Enable backup check box selected and specify where to back up your existing configuration before doing the import. Your backup .xml file will be also marked as complete, just as the source export file. This will allow you to restore your workflow by importing it from this file, if needed.
- Click Import.
- The system confirms when import has been successful. Click OK. | https://docs.alloysoftware.com/alloynavigator/8/docs/adminguide/adminguide/exporting-and-importing-configuration/importing-complete-workflow-packs.htm | 2020-11-24T04:18:02 | CC-MAIN-2020-50 | 1606141171077.4 | [] | docs.alloysoftware.com |
Create backup proxy pool
Overview
You can achieve backup proxy load balancing with backup proxy pools.
The backup proxy pool is a collection of backup proxies. Backup proxy pools eliminate the need to map the virtual machines to an individual backup proxy manually. All the backup requests from the virtual machines mapped to a pool are assigned to backup proxies within the pool based on the load balancing mechanism.
Phoenix assigns a backup request to a backup proxy within a pool based on the following parameters in the order of priority:
- Backup Now: The Backup Now job is triggered by the VMware details page.
- Aging: The number of failed backup jobs since the last successful backup.
- Hot add or NBD: The Transport Modes to read the vmdk files.
You must ensure that there is an optimum number of backup proxies with sufficient resources deployed in each pool. If a backup proxy within a pool is disconnected, the backup or restore job is assigned to the next available backup proxy within the pool. For more information, see Resource Sizing for a backup proxy.
A default backup proxy pool is created for every registered vCenter/ESXi. Any new backup proxy deployed in the registered vCenter/ESXi is added to the default proxy pool. You can create new pools within the registered vCenter/ESXi and move or add new backup proxies.
Note: You cannot rename or delete the default backup proxy pool.
Best practices for creating backup proxy pools
The following diagram illustrates the best practices for creating a backup proxy pool
As illustrated in the diagram:
- You must create the backup proxy pools with respect to a data center. Typically, data centers are associated with a geographical region.
Example: If you have a data center 1 that is created for Australia, and data center 2 that is created for North America, you must create exclusive backup proxy pools that are associated with the respective data center i.e. geographical region.
- Within a data center, there can be geographically distributed clusters, in such cases, you can create a separate backup proxy pool for each cluster.
Example: A data center is dedicated to APAC. Within the APAC data center, there can be multiple clusters spread across geographies, for example, one in Australia, other in India. In such a case, create a separate backup proxy pool for Australia and India.
- Ensure that the virtual machines that belong to a certain region are backed up by backup proxies deployed for the same region.
Example: If you are backing up virtual machines in the Australia region, ensure that they are backed up by backup proxies that are deployed in the backup proxy pool for Australia.
- When you configure a virtual machine for backup, ensure that the virtual machine belonging to a specific data center is mapped to the backup proxy pool created for that data center. This helps to avoid the data from traveling over WAN across geographical regions.
Procedure
- Log in to the Phoenix Management Console.
- Under Product & Services > Phoenix, click VMware.
- Select your organization.
- The All VvCenter/ESXi Hosts page appears that lists all the registered vCenter/hypervisors.
- You can either select the registered vCenter/hypervisors from the list or select it from the vCenter/ESXi host list in the left navigation pane.
- In the left pane, click Backup Proxies.
- Click Create Proxy Pool.
- Only for vCenter server, select the datacenter and click Create New Pool.
Note: You can also create a new pool while deploying additional proxies.
Related topics | https://docs.druva.com/Phoenix/030_Configure_Phoenix_for_Backup/010_Backup_and_Restore_VMware/020_Deploy_Phoenix_Backup_Proxy/060_Backup_Proxy_Load_Balancing | 2020-11-24T03:57:00 | CC-MAIN-2020-50 | 1606141171077.4 | [array(['https://docs.druva.com/@api/deki/files/3644/tick.png?revision=2',
'File:/tick.png'], dtype=object)
array(['https://docs.druva.com/@api/deki/files/3644/tick.png?revision=2',
'File:/cross.png'], dtype=object)
array(['https://docs.druva.com/@api/deki/files/3644/tick.png?revision=2',
'File:/tick.png'], dtype=object)
array(['https://docs.druva.com/@api/deki/files/54745/Proxy_pool.gif?revision=1&size=bestfit&width=677&height=223',
'Proxy_pool.gif'], dtype=object) ] | docs.druva.com |
In this tutorial, we'll see how to use the data block API on a variety of tasks and how to debug data blocks. The data block API takes its name from the way it's designed: every bit needed to build the
DataLoaders object (type of inputs, targets, how to label, split...) is encapsulated in a block, and you can mix and match those blocks
The rest of this tutorial will give many examples, but let's first build a
DataBlock from scratch on the dogs versus cats problem we saw in the vision tutorial. First we import everything needed in vision.
from fastai.data.all import * from fastai.vision.all import *
The first step is to download and decompress our data (if it's not already done) and get its location:
path = untar_data(URLs.PETS)
And as we saw, all the filenames are in the "images" folder. The
get_image_files function helps get all the images in subfolders:
fnames = get_image_files(path/"images")
dblock = DataBlock()
By itself, a
DataBlock is just a blue print on how to assemble your data. It does not do anything until you pass it a source. You can choose to then convert that source into a
Datasets or a
DataLoaders by using the
DataBlock.datasets or
DataBlock.dataloaders method. Since we haven't done anything to get our data ready for batches, the
dataloaders method will fail here, but we can have a look at how it gets converted in
Datasets. This is where we pass the source of our data, here all our filenames:
dsets = dblock.datasets(fnames) dsets.train[0]
(Path('/home/jhoward/.fastai/data/oxford-iiit-pet/images/Maine_Coon_91.jpg'), Path('/home/jhoward/.fastai/data/oxford-iiit-pet/images/Maine_Coon_91.jpg'))
By default, the data block API assumes we have an input and a target, which is why we see our filename repeated twice.
The first thing we can do is use a
get_items function to actually assemble our items inside the data block:
dblock = DataBlock(get_items = get_image_files)
The difference is that you then pass as a source the folder with the images and not all the filenames:
dsets = dblock.datasets(path/"images") dsets.train[0]
(Path('/home/jhoward/.fastai/data/oxford-iiit-pet/images/Persian_76.jpg'), Path('/home/jhoward/.fastai/data/oxford-iiit-pet/images/Persian_76.jpg'))
Our inputs are ready to be processed as images (since images can be built from filenames), but our target is not. Since we are in a cat versus dog problem, we need to convert that filename to "cat" vs "dog" (or
True vs
False). Let's build a function for this:
def label_func(fname): return "cat" if fname.name[0].isupper() else "dog"
We can then tell our data block to use it to label our target by passing it as
get_y:
dblock = DataBlock(get_items = get_image_files, get_y = label_func) dsets = dblock.datasets(path/"images") dsets.train[0]
(Path('/home/jhoward/.fastai/data/oxford-iiit-pet/images/pug_160.jpg'), 'dog')
Now that our inputs and targets are ready, we can specify types to tell the data block API that our inputs are images and our targets are categories. Types are represented by blocks in the data block API, here we use
ImageBlock and
CategoryBlock:
dblock = DataBlock(blocks = (ImageBlock, CategoryBlock), get_items = get_image_files, get_y = label_func) dsets = dblock.datasets(path/"images") dsets.train[0]
(PILImage mode=RGB size=500x375, TensorCategory(1))
dsets.vocab
(#2) ['cat','dog']
Note that you can mix and match any block for input and targets, which is why the API is named data block API. You can also have more than two blocks (if you have multiple inputs and/or targets), you would just need to pass
n_inp to the
DataBlock to tell the library how many inputs there are (the rest would be targets) and pass a list of functions to
get_x and/or
get_y (to explain how to process each item to be ready for his type). See the object detection below for such an example.
The next step is to control how our validation set is created. We do this by passing a
splitter to
DataBlock. For instance, here is how to do a random split.
dblock = DataBlock(blocks = (ImageBlock, CategoryBlock), get_items = get_image_files, get_y = label_func, splitter = RandomSplitter()) dsets = dblock.datasets(path/"images") dsets.train[0]
(PILImage mode=RGB size=500x335, TensorCategory(0))
The last step is to specify item transforms and batch transforms (the same way we do it in
ImageDataLoaders factory methods):
dblock = DataBlock(blocks = (ImageBlock, CategoryBlock), get_items = get_image_files, get_y = label_func, splitter = RandomSplitter(), item_tfms = Resize(224))
With that resize, we are now able to batch items together and can finally call
dataloaders to convert our
DataBlock to a
DataLoaders object:
dls = dblock.dataloaders(path/"images") dls.show_batch()
The way we usually build the data block in one go is by answering a list of questions:
- what is the types of your inputs/targets? Here images and categories
- where is your data? Here in filenames in subfolders
- does something need to be applied to inputs? Here no
- does something need to be applied to the target? Here the
label_funcfunction
- how to split the data? Here randomly
- do we need to apply something on formed items? Here a resize
- do we need to apply something on formed batches? Here no
This gives us this design:
dblock = DataBlock(blocks = (ImageBlock, CategoryBlock), get_items = get_image_files, get_y = label_func, splitter = RandomSplitter(), item_tfms = Resize(224))
For two questions that got a no, the corresponding arguments we would pass if the anwser was different would be
get_x and
batch_tfms.
Let's begin with examples of image classification problems. There are two kinds of image classification problems: problems with single-label (each image has one given label) or multi-label (each image can have multiple or no labels at all). We will cover those two kinds here.
from fastai.vision.all import *
MNIST is a dataset of hand-written digits from 0 to 9. We can very easily load it in the data block API by answering the following questions:
- what are the types of our inputs and targets? Black and white images and labels.
- where is the data? In subfolders.
- how do we know if a sample is in the training or the validation set? By looking at the grandparent folder.
- how do we know the label of an image? By looking at the parent folder.
In terms of the API, those answers translate like this:
mnist = DataBlock(blocks=(ImageBlock(cls=PILImageBW), CategoryBlock), get_items=get_image_files, splitter=GrandparentSplitter(), get_y=parent_label)
Our types become blocks: one for images (using the black and white
PILImageBW class) and one for categories. Searching subfolder for all image filenames is done by the
get_image_files function. The split training/validation is done by using a
GrandparentSplitter. And the function to get our targets (often called
y) is
parent_label.
To get an idea of the objects the fastai library provides for reading, labelling or splitting, check the
data.transforms module.
In itself, a data block is just a blueprint. It does not do anything and does not check for errors. You have to feed it the source of the data to actually gather something. This is done with the
.dataloaders method:
dls = mnist.dataloaders(untar_data(URLs.MNIST_TINY)) dls.show_batch(max_n=9, figsize=(4,4))
If something went wrong in the previous step, or if you're just curious about what happened under the hood, use the
summary method. It will go verbosely step by step, and you will see at which point the process failed.
mnist.summary(untar_data(URLs.MNIST_TINY))
Setting-up type transforms pipelines Collecting items from /home/jhoward/.fastai/data/mnist_tiny Found 1428 items 2 datasets of sizes 709,699 Setting up Pipeline: PILBase.create Setting up Pipeline: parent_label -> Categorize Building one sample Pipeline: PILBase.create starting from /home/jhoward/.fastai/data/mnist_tiny/train/7/723.png applying PILBase.create gives PILImageBW mode=L size=28x28 Pipeline: parent_label -> Categorize starting from /home/jhoward/.fastai/data/mnist_tiny/train/7/723.png applying parent_label gives 7 applying Categorize gives TensorCategory(1) Final sample: (PILImageBW mode=L size=28x28, TensorCategory(1)) Setting up after_item: Pipeline: ToTensor Setting up before_batch: Pipeline: Setting up after_batch: Pipeline: IntToFloatTensor Building one batch Applying item_tfms to the first sample: Pipeline: ToTensor starting from (PILImageBW mode=L size=28x28, TensorCategory(1)) applying ToTensor gives (TensorImageBW of size 1x28x28, TensorCategory(1)) Adding the next 3 samples No before_batch transform to apply Collating items in a batch Applying batch_tfms to the batch built Pipeline: IntToFloatTensor starting from (TensorImageBW of size 4x1x28x28, TensorCategory([1, 1, 1, 1], device='cuda:5')) applying IntToFloatTensor gives (TensorImageBW of size 4x1x28x28, TensorCategory([1, 1, 1, 1], device='cuda:5'))
Let's go over another example!
The Oxford IIIT Pets dataset is a dataset of pictures of dogs and cats, with 37 different breeds. A slight (but very) important difference with MNIST is that images are now not all of the same size. In MNIST they were all 28 by 28 pixels, but here they have different aspect ratios or dimensions. Therefore, we will need to add something to make them all the same size to be able to assemble them together in a batch. We will also see how to add data augmentation.
So let's go over the same questions as before and add two more:
- what are the types of our inputs and targets? Images and labels.
- where is the data? In subfolders.
- how do we know if a sample is in the training or the validation set? We'll take a random split.
- how do we know the label of an image? By looking at the parent folder.
- do we want to apply a function to a given sample? Yes, we need to resize everything to a given size.
- do we want to apply a function to a batch after it's created? Yes, we want data augmentation.
pets = DataBlock(blocks=(ImageBlock, CategoryBlock), get_items=get_image_files, splitter=RandomSplitter(), get_y=Pipeline([attrgetter("name"), RegexLabeller(pat = r'^(.*)_\d+.jpg$')]), item_tfms=Resize(128), batch_tfms=aug_transforms())
And like for MNIST, we can see how the answers to those questions directly translate in the API. Our types become blocks: one for images and one for categories. Searching subfolder for all image filenames is done by the
get_image_files function. The split training/validation is done by using a
RandomSplitter. The function to get our targets (often called
y) is a composition of two transforms: we get the name attribute of our
Path filenames, then apply a regular expression to get the class. To compose those two transforms into one, we use a
Pipeline.
Finally, We apply a resize at the item level and
aug_transforms() at the batch level.
dls = pets.dataloaders(untar_data(URLs.PETS)/"images") dls.show_batch(max_n=9)
Now let's see how we can use the same API for a multi-label problem.
The Pascal dataset is originally an object detection dataset (we have to predict where some objects are in pictures). But it contains lots of pictures with various objects in them, so it gives a great example for a multi-label problem. Let's download it and have a look at the data:
pascal_source = untar_data(URLs.PASCAL_2007) df = pd.read_csv(pascal_source/"train.csv")
df.head()
So it looks like we have one column with filenames, one column with the labels (separated by space) and one column that tells us if the filename should go in the validation set or not.
There are multiple ways to put this in a
DataBlock, let's go over them, but first, let's answer our usual questionnaire:
- what are the types of our inputs and targets? Images and multiple labels.
- where is the data? In a dataframe.
- how do we know if a sample is in the training or the validation set? A column of our dataframe.
- how do we get an image? By looking at the column fname.
- how do we know the label of an image? By looking at the column labels.
- do we want to apply a function to a given sample? Yes, we need to resize everything to a given size.
- do we want to apply a function to a batch after it's created? Yes, we want data augmentation.
Notice how there is one more question compared to before: we wont have to use a
get_items function here because we already have all our data in one place. But we will need to do something to the raw dataframe to get our inputs, read the first column and add the proper folder before the filename. This is what we pass as
get_x.
pascal = DataBlock(blocks=(ImageBlock, MultiCategoryBlock), splitter=ColSplitter(), get_x=ColReader(0, pref=pascal_source/"train"), get_y=ColReader(1, label_delim=' '), item_tfms=Resize(224), batch_tfms=aug_transforms())
Again, we can see how the answers to the questions directly translate in the API. Our types become blocks: one for images and one for multi-categories. The split is done by a
ColSplitter (it defaults to the column named
is_valid). The function to get our inputs (often called
x) is a
ColReader on the first column with a prefix, the function to get our targets (often called
y) is
ColReader on the second column, with a space delimiter. We apply a resize at the item level and
aug_transforms() at the batch level.
dls = pascal.dataloaders(df) dls.show_batch() | https://docs.fast.ai/tutorial.datablock.html | 2020-11-24T03:50:34 | CC-MAIN-2020-50 | 1606141171077.4 | [] | docs.fast.ai |
rfc3339-old: RFC 3339 Timestamps
1 Introduction
2 Record Type
3 Parsing
4 Formatting
5 Validation
6 SRFI-19 Interoperability
7 History
- Version 2:0 —
2016-02-29
Moving from PLaneT to new package system.
Removed dependency on SRFI-9, while keeping same interface.
- Version 1:1 —
2009-03-03
License is now LGPL 3.
Converted to author’s new Scheme administration system.
Changes for PLT 4.x.
- Version 1:0 —
2005-12-05
Release for PLT 299/3xx.
Changed portability note in light of Pregexp post-1e9 bug fix.
Minor documentation changes.
- Version 0.1 —
2005-01-30
Initial release.
8 Legal
Copyright 2005, 2009,. | https://docs.racket-lang.org/rfc3339-old/index.html | 2020-11-24T03:32:39 | CC-MAIN-2020-50 | 1606141171077.4 | [] | docs.racket-lang.org |
Restore data using a CSV file
Overview
If you wish to restore specific records in a Salesforce org, then a CSV-based restore is the option for you. You can use a CSV containing record IDs to restore specific records to any org registered with inSync. This option can prove effective by performing a cross-org restore to populate data in an org created for testing or training purpose.
inSync app for Salesforce analyzes the record IDs from the CSV and restores only the valid records from the list. You can also download the list of invalid record IDs in a CSV from a link on the app.
Cross-org restore helps you to:
- Restore data to the same or another org registered on the inSync Management Console.
- Restore data to a read-only (sandbox) org, which may be used for development, testing, and/or training purposes.
- Restore child records up to 5 levels.
CSV guidelines
The CSV must comply with the following guidelines for a successful restore. You can also download a sample CSV from the Restore tab of the inSync app for Salesforce.
- The record count must not exceed 15,000.
- The object name and its record ID must be in adjacent columns as specified in the sample CSV.
- inSync validates and restores the records listed in the CSV from top to bottom. Hence, if the CSV has both parent and child records, the parent record ID must appear above the child record ID but in no particular order.
- If you plan to restore all the child records, you need not provide the child record IDs in the CSV. By selecting the Include deleted child records option, you can restore the child records up to 5 levels.
- Each record ID must be of 18 characters.
- Records from File object (ContentDocument and ContentDocumentLink) cannot be restored using CSV. Instead, use the Restore by Snapshot or Compare options for the restore. Records from both the objects are restored automatically if their parent records are restored using the CSV.
Download the sample CSV
- Launch the inSync app for Salesforce and go to RESTORE > BULK tab.
- Select Restore using CSV.
- Click the Download Sample CSV link.
A Sample.csv file gets downloaded. Update this CSV file with the IDs of the records that you wish to restore and keep the CSV ready when you perform the CSV-based restore.
Restore data using a CSV file
Prerequisite: A CSV containing object names and their respective record IDs that you wish to restore.
To restore data using a CSV file:
- Launch the inSync app for Salesforce.
- Open the RESTORE tab and go to the BULK tab.
- Select Restore using CSV and from the Select Snapshot field, specify the snapshot from which you want to restore data.
- Click Upload CSV File and upload the CSV containing the record IDs.
inSync app for Salesforce analyzes the CSV and displays the list of records that are marked for restore along with their respective object names.
In addition, the following details are also displayed on the app:
- Records Uploaded - Number of records (IDs) listed in the uploaded CSV.
- Records Marked for Restore - Number of records validated for restore.
- Records not Found in Snapshot - Number of records that failed the validation and cannot be restored. inSync validates the records by matching the record IDs from the CSV with the records of the selected snapshot. You can download the list of invalid record IDs from Download Missing Record List.
- Download Missing Record List - Link to download a CSV containing list of record IDs that cannot be restored. Use Chrome or Firefox browser to download the Missing Records list. When using Firefox, ensure you update the browser’s configuration settings recommended by Salesforce.
- Click Restore. The Restore Location dialog box is displayed.
The Select an Organization field displays a list of connected orgs on which you can restore the data. By default, the source org of the selected snapshot is displayed first in the list.
- From the Select an Organization, select one of the registered orgs as the restore destination and click Next.
The Confirm Restore dialog is displayed.
- Review the information on the Confirm Restore dialog box and select the appropriate options based on the field description below.
- Overwrite any existing data in destination organization:
- inSync matches the record ID before overwriting a record on the destination org.
- Without a matching record ID, no record is overwritten or created.
- If not selected, inSync creates new entries for those records that are not present in the org, whereas the existing records are retained as is.
For example: From records A, B, and C marked for restore, if A and B are already present in the org, only C is restored as a new record.
A, B, and C are example values used for simplicity; inSync identifies records by their 18-digit record IDs.
- Include deleted child records: inSync identifies the child records from the snapshot and restores the child records up to 5 levels. Deleted child records are also restored.
- Exclude deleted child records: inSync restores only the parent records.
The points displayed under Please ensure the following are already covered in the Before you initiate a cross-org restore section.
- After you select the appropriate options and review the information on the Confirm Restore dialog box, click Confirm.
The status and progress of the restore is displayed under Recent Activity on the Dashboard tab and on the Activity Stream tab.
A Restore Successful status is displayed at the end of the restore. | https://docs.druva.com/001_inSync_Cloud/Cloud/050_Cloud_Apps/How_to_integrate_with_Salesforce/010Druva_App_for_Salesforce/Restore_data_using_a_CSV_file | 2020-11-24T03:13:51 | CC-MAIN-2020-50 | 1606141171077.4 | [] | docs.druva.com |
- builds
Testing against multiple versions of PHP is super easy. Just add another job with a different Docker image version and the runner does builds
There are times whereor
The shell executor runs your job in a terminal session on your server. To test your projects, you must first ensure that all dependencies are installed.
For example, in a VM running Debian 8, first update the cache, and then builds
The phpenv project allows you to easily manage different versions of PHP each with its own configuration. This is especially useful when testing PHP projects with the Shell executor. works
with the basic phpenv commands. Guiding you to choose the right phpenv is out
of the scope of this tutorial.
Install. | https://docs.gitlab.com/ee/ci/examples/php.html | 2020-11-24T04:36:02 | CC-MAIN-2020-50 | 1606141171077.4 | [] | docs.gitlab.com |
JSON-delta by example¶
Consider the example JSON-LD entry for John Lennon from:
{ "@context": "", "@id": "", "name": "John Lennon", "born": "1940-10-09", "spouse": "" }
Suppose we have a piece of software that updates this record to show his date of death, like so:
{ "@context": "", "@id": "", "name": "John Lennon", "born": "1940-10-09", "died": "1980-12-07", "spouse": "" }
Further suppose that we wish to communicate this update to another piece of software whose only job is to store information about John Lennon in JSON-LD format. (Yes, I know this is getting unlikely, but stay with me.) If this Lennon-record-keeper accepts updates in json-delta format, all you have to do is send the following over the wire:
[[["died"],"1980-12-07"]]
This is a complete diff in json-delta format. It is itself a
JSON-serializable data structure: specifically, it is a sequence of
what I refer to as diff stanzas for some reason. The
format for a diff stanza is
[<key path>, (<update>)] (The
parentheses mean that the
<update> part is optional. I’ll get
to that in a minute). A key path is a sequence of keys specifying
where in the data structure the node you want to alter is found, much
like those emitted by JSON.sh. The stanza may be thought of as an
instruction to update the node found at that path so that its content
is equal to
<update>.
Now, let’s do some more supposing. Suppose the software we’re communicating with is dedicated to storing information about the Beatles in general. Also, suppose we’ve remembered that it was actually on the 8th of December 1980 that John Lennon died, not the 7th. Finally, suppose we live in an Orwellian dystopia, and Cynthia Lennon has been declared a non-person who must be expunged from all records. Unfortunately, json-delta is incapable of overthrowing corrupt and despotic governments, so let’s make one last supposition, that what we’re interested in is updating the record kept by the software on the other end of the wire, which looks like this:
[ { "@context": "", "@id": "", "name": "John Lennon", "born": "1940-10-09", "died": "1980-12-07", "spouse": "" }, {"name": "Paul McCartney"}, {"name": "George Harrison"}, {"name": "Ringo Starr"} ]
(Allegations of bias in favor of specific Beatles on the part of the maintainer of this record are punished by the aforementioned despotic government. All glory to Arstotzka!)
To make the changes we’ve decided on (correcting John’s date of death, and expunging Cynthia Lennon from the record), we need to send the following sequence:
[ [[0, "died"], "1980-12-08"], [[0, "spouse"]] ]
Now, of course, you see what I meant when I said I’d tell you why
<update> is optional later. If a stanza includes no update material,
it is interpreted as an instruction to delete the node the key-path
points to.
Note also that there is no difference between a stanza that adds a node, and one that changes one.
The intention is to save as much communications bandwidth as possible
without sacrificing the ability to communicate arbitrary modifications
to the data structure (this format can be used to describe a change
from any JSON-serialized object into any other). The worst-case
scenario, where there is no commonality between the two structures, is
that the protocol adds seven octets of overhead, because a diff can
always be expressed as
[[[],<target>]], meaning “substitute
<target> for the data structure that is to be modified”. | https://json-delta.readthedocs.io/en/latest/philosophy.html | 2020-11-24T04:14:42 | CC-MAIN-2020-50 | 1606141171077.4 | [] | json-delta.readthedocs.io |
You can add VMware HCX as a data source in vRealize Network Insight.
For L2 extension, you must always add the VMware HCX for the source data center. For example, if you have a single stretch network, from data center (DC) 1 to DC 2, then you must add VMware HCX of the DC 1 in vRealize Network Insight to get the flow details. For a L shaped stretch, where your extension is between DC 1 to DC 2 and from DC 2 to DC 3, then you must add VMware HCX of the DC 1 and 2 in vRealize Network Insight.
Procedure
- On the Settings page, click Accounts and Data Sources.
- Click Add Source.
- Under VMware Managers, click VMware HCX.
- In the Add a New VMware HCX Account or Source page, provide the required information.
- Click Validate.
- In the Nickname text box, enter a nickname.
- (Optional) In the Notes text box, you can add a note if necessary.
- Click Submit. | https://docs.vmware.com/en/VMware-vRealize-Network-Insight-Cloud/services/com.vmware.vrni.using.doc/GUID-AC1ADD3B-C236-45AE-94D2-6EDA6722CEE8.html | 2020-11-24T07:47:16 | CC-MAIN-2020-50 | 1606141171126.6 | [] | docs.vmware.com |
Using C++ in Mozilla code¶
C++ language features¶ or throw any exceptions. Libraries that throw
exceptions may be used if you are willing to have the throw instead be
treated as an abort.
On the side of extending C++, we compile with
-fno-strict-aliasing.
This means that when reinterpreting a pointer as a differently-typed
pointer, you don’t need to adhere to the “effective type” (of the
pointee) rule from the standard (aka. “the strict aliasing rule”) when
dereferencing the reinterpreted pointer. You still need make sure that
you don’t violate alignment requirements and need to make sure that the
data at the memory location pointed to forms a valid value when
interpreted according to the type of the pointer when dereferencing the
pointer for reading. Likewise, if you write by dereferencing the
reinterpreted pointer and the originally-typed pointer might still be
dereferenced for reading, you need to make sure that the values you
write are valid according to the original type. This value validity
issue is moot for e.g. primitive integers for which all bit patterns of
their size are valid values.
As of Mozilla 59, C++14 mode is required to build Mozilla.
As of Mozilla 67, MSVC can no longer be used to build Mozilla.
As of Mozilla 73, C++17 mode is required to build Mozilla.
This means that C++17 can be used where supported on all platforms. The list of acceptable features is given below:
Sources¶
Notes¶
rvalue references: Implicit move method generation cannot be used.
Attributes: Several common attributes are defined in mozilla/Attributes.h or nscore.h.
Alignment: Some alignment utilities are defined in mozilla/Alignment.h. /!\ MOZ_ALIGNOF and alignof don’t have the same semantics. Be careful of what you expect from them.
[[deprecated]]: If we have deprecated code, we should be removing it
rather than marking it as such. Marking things as
[[deprecated]]
also means the compiler will warn if you use the deprecated API, which
turns into a fatal error in our automation builds, which is not helpful.
Sized deallocation: Our compilers all support this (custom flags are
required for GCC and Clang), but turning it on breaks some classes’
operator new methods, and some
work would
need to be done to make it an efficiency win with our custom memory
allocator.
Aligned allocation/deallocation: Our custom memory allocator doesn’t have support for these functions.
Thread locals:
thread_local is not supported on Android.
C++ and Mozilla standard libraries¶
The Mozilla codebase contains within it several subprojects which follow different rules for which libraries can and can’t be used it. The rules listed here apply to normal platform code, and assume unrestricted usability of MFBT or XPCOM APIs.
Warning
The rest of this section is a draft for expository and exploratory purposes. Do not trust the information listed here.
What follows is a list of standard library components provided by Mozilla or the C++ standard. If an API is not listed here, then it is not permissible to use it in Mozilla code. Deprecated APIs are not listed here. In general, prefer Mozilla variants of data structures to standard C++ ones, even when permitted to use the latter, since Mozilla variants tend to have features not found in the standard library (e.g., memory size tracking) or have more controllable performance characteristics.
A list of approved standard library headers is maintained in config/stl-headers.mozbuild.
Strings¶
See the Mozilla internal string
guide for
usage of
nsAString (our copy-on-write replacement for
std::u16string) and
nsACString (our copy-on-write replacement
for
std::string).
Be sure not to introduce further uses of
std::wstring, which is not
portable! (Some uses exist in the IPC code.)
Mozilla data structures and standard C++ ranges and iterators¶
Some Mozilla-defined data structures provide STL-style iterators and are usable in range-based for loops as well as STL algorithms.
Currently, these include:
Note that if the iterator category is stated as “missing”, the type is probably only usable in range-based for. This is most likely just an omission, which could be easily fixed.
Useful in this context are also the class template
IteratorRange
(which can be used to construct a range from any pair of iterators) and
function template
Reversed (which can be used to reverse any range),
both defined in
mfbt/ReverseIterator.h
Further C++ rules¶
Don’t use static constructors¶
¶
See the introduction to the “C++ language features” section at the start of this document.
Don’t use Run-time Type Information¶
See the introduction to the “C++ language features” section at the start of this document.
If you need runtime typing, you can achieve a similar result by adding a
classOf() virtual member function to the base class of your
hierarchy and overriding that member function in each subclass. If
classOf() returns a unique value for each class in the hierarchy,
you’ll be able to do type comparisons at runtime.
Don’t use the C++ standard library (including iostream and locale)¶
See the section “C++ and Mozilla standard libraries”.
Use C++ lambdas, but with care¶
C++ lambdas are supported across all our compilers now. Rejoice! We recommend explicitly listing out the variables that you capture in the lambda, both for documentation purposes, and to double-check that you’re only capturing what you expect to capture.
Use namespaces¶
Namespaces may be used according to the style guidelines in C++ Coding style.
Make header files compatible with C and C++¶ #include "oldCheader.h" ...
There are number of reasons for doing this, other than just good style. For one thing, you are making life easier for everyone else, doing the work in one common place (the header file) instead of all the C++ files that include it. Also, by making the C header safe for C++, you document that “hey, this file is now being included in C++”. That’s a good thing. You also avoid a big portability nightmare that is nasty to fix…
Use override on subclass virtual member functions¶
The
override keyword is supported in C++11 and in all our supported
compilers, and it catches bugs.
Always declare a copy constructor and assignment operator¶¶¶
Non-portable code:
class FooClass { // having such similar signatures // is a bad idea in the first place. void doit(long); void doit(short); }; void B::foo(FooClass* xyz) { xyz->doit(45); }
Be sure to type your scalar constants, e.g.,
uint32_t(10) or
10L. Otherwise, you can produce ambiguous function calls which
potentially could resolve to multiple methods, particularly if you
haven’t followed (2) above. Not all of the compilers will flag ambiguous
method calls.
Portable code:
class FooClass { // having such similar signatures // is a bad idea in the first place. void doit(long); void doit(short); }; void B::foo(FooClass* xyz) { xyz->doit(45L); }
Use nsCOMPtr in XPCOM code¶
See the
nsCOMPtr User
Manual for
usage details.
Don’t use identifiers that start with an underscore¶
This rule occasionally surprises people who’ve been hacking C++ for decades. But it comes directly from the C++ standard!
According to the C++ Standard, 17.4.3.1.2 Global Names [lib.global.names], paragraph 1:.
Stuff that is good to do for C or C++¶
Avoid conditional #includes when possible¶¶
Every object file linked into libxul needs to have a unique name. Avoid generic names like nsModule.cpp and instead use nsPlacesModule.cpp.
Turn on warnings for your compiler, and then write warning free code¶¶
Some compilers do not pack the bits when different bitfields are given different types. For example, the following struct might have a size of 8 bytes, even though it would fit in 1:
struct { char ch: 1; int i: 1; };
Don’t use an enum type for a bitfield¶. | https://firefox-source-docs.mozilla.org/code-quality/coding-style/using_cxx_in_firefox_code.html | 2020-11-24T07:08:10 | CC-MAIN-2020-50 | 1606141171126.6 | [] | firefox-source-docs.mozilla.org |
- )
Release Version 1.39 (Feb-2020)
ON THIS PAGE
Data Sources
Data Destinations
Enhanced Google BigQuery destination set up to provide users the ability to create dataset and GCS bucket as part of the setup
Enhanced handling of long text columns for MySQL and Microsoft SQL Server destinations
UI
- Enhanced pipeline jobs management to provide users the ability to perform collective actions across multiple jobs with a single click
- Activity Log enhancements
- Enhanced Model Activity Logs to show model run information (status information and failure details, if any)
Last updated on 26 Feb 2020
Was this page helpful?
Thank you for helping improve Hevo's documentation. If you need help or have any questions, please consider contacting support. | https://docs.hevodata.com/release-notes/v1.39/ | 2020-11-24T07:12:15 | CC-MAIN-2020-50 | 1606141171126.6 | [array(['/assets/tasks-bulk-action-5f878049fa5ead5018a0aa244cf7c99176ab99a287e6ab60b6f206af222c3ecf.png',
'Jobs Bulk Actions'], dtype=object) ] | docs.hevodata.com |
How to troubleshoot AD FS endpoint connection issues when users sign in to Office 365, Intune, or Azure
Note
Office 365 ProPlus is being renamed to Microsoft 365 Apps for enterprise. For more information about this change, read this blog post.
Problem situation also causes SSO testing that the Remote Connectivity Analyzer conducts to fail.
For more information about how to run the Remote Connectivity Analyzer to test SSO authentication in Office 365, see the following articles in the Microsoft Knowledge Base:
- 2650717 How to use Remote Connectivity Analyzer to troubleshoot single sign-on issues for Office 365, Azure, or Intune
- 2466333 Federated users can't connect to an Exchange Online mailbox
Cause
These failures can occur if the AD FS service isn't exposed correctly to the Internet. Typically, the AD FS proxy server is used for this purpose, and problems with the AD FS proxy server will cause these symptoms. Common problems include the following:
Expired SSL certificate that's assigned to the AD FS proxy server
Frequently, the same SSL certificate is used to help secure communication (HTTPS) for both the AD FS Federation Service and the AD FS proxy server. When this certificate becomes expired and the certificate is renewed or updated on the AD FS Federation Service farm, the SSL certificate must also be updated on all AD FS proxy servers. If the AD FS proxy server SSL certificate isn't updated in this case, Internet connections to the AD FS service may fail, even though the AD FS Federation Service is healthy.
Incorrect configuration of IIS authentication endpoints
The role of the AD FS proxy server is to receive Internet communication that's directed at AD FS and to relay that communication to the AD FS Federation Service. Therefore, it's important for the IIS authentication setting of the AD FS Federation Service and proxy server to be complementary. When the AD FS proxy server IIS authentication settings aren't set to complement the AD FS Federation Service IIS authentication settings, sign-in may fail or may generate multiple, unexpected prompts.
Broken trust between the AD FS proxy server and the AD FS Federation Service
The AD FS proxy service is designed to be installed on a non-domain joined computer. Therefore, the communication between the AD FS proxy server and the AD FS Federation Service can't be based on an Active Directory trust or credentials. Instead, the communication between these two server roles is established by using a token that is issued to the AD FS proxy server by the AD FS Federation Service and signed by the AD FS token-signing certificate. When this trust is expired or invalid, the AD FS Proxy Service can't relay AD FS requests, and the trust must be rebuilt to restore functionality.
Solution
To resolve this issue, use one of the following methods, as appropriate for your situation, on all malfunctioning AD FS proxy servers.
Method 1: Fix AD FS SSL certificate issues on the AD FS server
To do this, follow these steps:
Troubleshoot SSL certificate problems on the AD FS Federation Service (not the Proxy Service) by using the following Microsoft Knowledge Base article:
2523494 You receive a certificate warning from AD FS when you try to sign in to Office 365, Azure, or Intune
If the AD FS Federation Service SSL certificate is functioning correctly, update the SSL certificate on the AD FS proxy server by using the certificate export and import functions. For more info, see the following Microsoft Knowledge Base article:
179380 How to remove, import, and export digital certificates
Method 2: Reset the AD FS proxy server IIS authentication settings to default
To do this, follow the steps that are described in Resolution 1 of the following Microsoft Knowledge Base article for the AD FS proxy server:
2461628 A federated user is repeatedly prompted for credentials during sign-in to Office 365, Azure, or Intune
Method 3: Rerun the AD FS Proxy Configuration wizard
To do this, rerun the AD FS Federation Server Proxy Configuration Wizard from the Administrative Tools interface of all affected AD FS proxy servers.
Note
It's usual to receive a warning from the "Deploy browser sign-in Web site" step when you rerun the configuration wizard. This isn't an indication that the wizard did not rebuild the trust between the AD FS proxy server and the AD FS Federation Service.
More information
For more info about how to expose the AD FS service to the Internet by using an AD FS proxy server, go to the following Microsoft website:
Plan for and deploy AD FS 2.0 for use with single sign-on
Still need help? Go to Microsoft Community. | https://docs.microsoft.com/en-us/office365/troubleshoot/active-directory/ad-fs-endpoint-connection-issue | 2020-11-24T07:58:17 | CC-MAIN-2020-50 | 1606141171126.6 | [] | docs.microsoft.com |
Message-ID: <475740792.15780.1606200222923.JavaMail.j2ee-conf@bmc1-rhel-confprod1> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_15779_1730973270.1606200222923" ------=_Part_15779_1730973270.1606200222923 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location:
When you install the graphing server with the CDP, the installation pro=
gram configures the communication settings between the CDP and the graphing=
server. It also configures the proper settings if you specify the graphing=
server properties when you install the peer without the graphing server. <=
br class=3D"atl-forced-newline">
However,= if one of the following conditions occurs, the CDP cannot communicate with= the graphing server, and the graphing server fails to start:
com.bm= c.ao.metrics.graphing.externalServerAddress=3D com.bmc.ao.metrics.graphing.internalCommPortAddress=3D=20
Example
If the host name is calbro.server.com and the port number is 28080, you = would type ograph/server for both properties. | https://docs.bmc.com/docs/exportword?pageId=73109784 | 2020-11-24T06:43:42 | CC-MAIN-2020-50 | 1606141171126.6 | [] | docs.bmc.com |
The Backstage Actions view is a set of commands that can be used to quickly approach major SPDocKit options.
The following options are available:
Take Snapshot button loads SharePoint farm data if an on-premises farm is detected. This option is only available on SharePoint servers.
View Snapshots button will take you to all your saved snapshots, and you can select any of them to open and view data.
Compare Wizard button starts the compare wizard that allows you to compare farms, Web applications, site collections, web config files, and permissions.
Options button gives you access to configuration options for adjusting SPDocKit to your needs.
Permissions Explorer leads you directly to SharePoint permissions, both live and historical records, collected by SPDocKit.
Permissions Wizards button gives you quick access to all permissions management options available.
Permissions Reports button will show you a set of permissions reports.
Analytics & Usage button will take you to the Analytics & Usage Reports. The Site Collection Analytics report is preselected, where you can see popularity trends and hits history for each site collection.
Audit Logs button will take you to the Audit Log Details report, where we'll show you a complete audit log on the selected site collections in a given time period.
Use the Queries and Rules button to open and create desired procedures or reports to enforce your SharePoint Governance policies.
Note that this option is not available on a workstation. | https://docs.syskit.com/spdockit/configure-and-extend-spdockit/backstage-screen | 2020-11-24T07:22:32 | CC-MAIN-2020-50 | 1606141171126.6 | [] | docs.syskit.com |
Timeout
Exception Class
Definition
The exception that is thrown when a specified timeout has expired.
public ref class TimeoutException : SystemException
[System.Serializable] public class TimeoutException : SystemException
type TimeoutException = class inherit SystemException
Public Class TimeoutException Inherits SystemException
- Inheritance
- TimeoutException
- Attributes
-
Remarks
The TimeoutException class can specify a message to describe the source of the exception. When a method throws this exception, the message is usually "The timeout provided has expired and the operation has not been completed."
This class is used, for example, by the ServiceController class's WaitForStatus member. The operation that can throw the exception is a change of the service's Status property (for example, from
Paused to
ContinuePending). | https://docs.microsoft.com/en-us/dotnet/api/system.serviceprocess.timeoutexception?redirectedfrom=MSDN&view=netframework-4.8 | 2019-11-11T22:21:02 | CC-MAIN-2019-47 | 1573496664439.7 | [] | docs.microsoft.com |
Store and Category Manager interface is available if you have the Power Add-on installed (for WPSLP users) or have the MYSLP Professional level or higher. You will see a tab “Categories” in your Store Locator Plus User Interface.
Once you open the Category Tab you can begin to add categories and slugs. Set Markers and Icons per category (optional) if you want those to appear in place of map markers. A Bulk action drop down to delete some or all categories is also available.
Settings/Search
You can choose to show the categories that you have created in a drop down menu as part of the search form. There are additional category selector options available for WPSLP premier subscribers or MYSLP Enterprise subscribers (example see Checkbox selector). You can add text as a label that will display at the top of the categories drop down menu, or you can leave blank. You can also add text for the category select box that will appear in front of the category drop down on the user interface.
Hide Empty Categories Setting
For WPSLP Power add-on users . Do NOT turn this on unless you have Pages enabled.
Legend | https://docs.storelocatorplus.com/store-and-category-manager/?shared=email&msg=fail | 2019-11-11T23:41:01 | CC-MAIN-2019-47 | 1573496664439.7 | [] | docs.storelocatorplus.com |
Content¶
- Rough TODO
- Changes
- Commandline framework
- Config use and implementation notes
- Checking the source out
- Installing pkgcore
- Ebuild EAPI
- Feature (FEATURES) categories
- Filesystem Operations
- Python Code Guidelines
- Follow pep8, with following exemptions
- Throw self with a NotImplementedError
- Be aware of what the interpreter is actually doing
- Don’t explicitly use has_key. Rely on the ‘in’ operator
- Do not use [] or {} as default args in function/method definitions
- Visible curried functions should have documentation
- Unit testing
- If it’s FS related code, it’s _usually_ cheaper to try then to ask then try
- Catching Exceptions in python code (rather then cpython) isn’t cheap
- cpython ‘leaks’ vars into local namespace for certain constructs
- Unless you need to generate (and save) a range result, use xrange
- Removals from a list aren’t cheap, especially left most
- If you’re testing for None specifically, be aware of the ‘is’ operator
- Deprecated/crappy modules
- Know the exceptions that are thrown, and catch just those you’re interested in
- tuples versus lists.
- Don’t try to copy immutable instances (e.g. tuples/strings)
- __del__ methods mess with garbage collection
- A general point: python isn’t slow, your algorithm is
- What’s up with __hash__ and dicts
- __eq__ and __ne__
- __eq__/__hash__ and subclassing
- Exception subclassing
- Memory debugging
- resolver
- resolver redesign
- config/use issues
- How to use guppy/heapy for tracking down memory usage
- Plugins system
- Pkgcore/Portage differences
- Tackling domain
- Testing
- perl CPAN
- dpkg
- WARNING
- Introduction | http://pkgcore.readthedocs.io/en/latest/dev-notes.html | 2017-11-17T21:20:07 | CC-MAIN-2017-47 | 1510934803944.17 | [] | pkgcore.readthedocs.io |
Several sample scenarios of the WSO2 Message Broker are explained in this section. These samples can be used as references to build your own application using various features of the Message Broker. The samples shipped with WSO2 Message Broker are stored in the
<CARBON_HOME>/samples directory.
See the following topics for details:
About MB samples
The MB samples use several sample clients. For example, to illustrate a scenario where messages are published and received by subscribers, we need two clients; one for publishing messages to MB and another for subscribing and receiving messages.
These clients are defined via classes that are saved in the separate folders dedicated for each sample in the
<MB_HOME>/samples directory. A sample may have more than one client to perform different actions. Since there are interdependencies between different clients used in a sample, the classes defining them need to be bound to each other. To achieve this, the sample folder where the classes of the clients are stored, also contains a class named
Main.class, which defines the method for calling the other classes in the same directory.
For example, consider the "JMSQueueClient" sample, which you will find in the
<MB_HOME>/samples directory. If you see the contents of the JMSQueueClient sample folder, you will find the following classes (inside the
JMSQueueClient/target/classes/org/sample/jms folder):
SampleQueueReceiver.class: The class file defining the JMS client for receiving messages from the queue.
SampleQueueSender.class: The class file defining the JMS client for sending messages to the queue.
Main.class: The main class that binds the two client classes.
Setting up and running the samples
See the following topics for instructions on how to setup and run the samples in WSO2 MB: | https://docs.wso2.com/display/MB320/Samples | 2017-11-17T20:51:28 | CC-MAIN-2017-47 | 1510934803944.17 | [] | docs.wso2.com |
Download and run the latest version of the App Volumes installer to upgrade your App Volumes Manager.
About this task
Note the following points about upgrading App Volumes:
You can upgrade from App Volumes 2.12 to the latest version without uninstalling the 2.12 installation.
In App Volumes releases prior to 2.12, you had to uninstall the App Volumes Manager installation on your machine before you could upgrade to the latest version. Thus App Volumes Manager configuration details and settings were not retained and you had to reconfigure them. With the new upgrade feature, you can upgrade to the latest version without losing your settings.Important:
If you want to upgrade from a version earlier than App Volumes 2.12, you must uninstall that version before installing the latest version.
If you want to upgrade multiple App Volumes Managers which point to a central database, open services.msc and stop the App Volumes Manager service on each server. You must then run the installer on each server to upgrade App Volumes.
Prerequisites
Download the latest App Volumes installer from My VMware.
Schedule a maintenance window to ensure that there is no service degradation during the upgrade process.
Detach all volumes.
In the Windows Start menu, open Control Panel and click . Note down the database and server name defined in the system ODBC source svmanager.
Back up the App Volumes database using SQL Server tools.
Create a full server back up or snapshot of the App Volumes Manager server.
Procedure
- Log in as administrator on the machine where App Volumes Manager is installed.
- Locate the App Volumes installer that you downloaded and double-click the setup.exe file.
- Select the App Volumes Manager component and click Install.
A notification window with the upgrade process details is displayed.
- Click Next to confirm the upgrade.
- Click Install to begin the installation.
A Status Bar shows the progress of the installation. The installation process takes 5 to 10 minutes to complete. During this time, configuration information is first backed-up, new files are installed, and the configuration information is restored.
- Click Finish to complete the installation.
Results
App Volumes Manager is upgraded.
All certificates that you had previously configured are retained and you do not need to reconfigure them.
What to do next
Upgrade the App Volumes agent and templates. See Upgrade App Volumes Templates and Upgrade App Volumes Agent. | https://docs.vmware.com/en/VMware-App-Volumes/2.13/com.vmware.appvolumes.install.doc/GUID-D97EDB98-378D-442F-A884-7D02E47F6C48.html | 2017-11-17T22:02:18 | CC-MAIN-2017-47 | 1510934803944.17 | [] | docs.vmware.com |
Creating Forms¶
Secure Form¶
Without any configuration, the
FlaskForm will be a session secure
form with csrf protection. We encourage you do nothing.
But if you want to disable the csrf protection, you can pass:
form = FlaskForm(csrf_enabled=False)
You can disable it globally—though you really shouldn't¶
The
FileField provided by Flask-WTF differs from the WTForms-provided
field. It will check that the file is a non-empty instance of
FileStorage, otherwise
data will be
None.
from flask_wtf import FlaskForm from flask_wtf.file import FileField, FileRequired from werkzeug.utils import secure_filename class PhotoForm(FlaskForm): photo = FileField(validators=[FileRequired()]) @app.route('/upload', methods=['GET', 'POST']) def upload(): if form.validate_on_submit(): f = form.photo.data filename = secure_filename(f.filename) f.save(os.path.join( app.instance_path, 'photos', filename )) return redirect(url_for('index')) return render_template('upload.html', form=form)
Remember to set the
enctype of the HTML form to
multipart/form-data, otherwise
request.files will be empty.
<form method="POST" enctype="multipart/form-data"> ... </form>
Flask-WTF handles passing form data to the form for you.
If you pass in the data explicitly, remember that
request.form must
be combined with
request.files for the form to see the file data.
form = PhotoForm() # is equivalent to: from flask import request from werkzeug.datastructures import CombinedMultiDict form = PhotoForm(CombinedMultiDict((request.files, request.form)))
Validation¶
Flask-WTF supports validating file uploads with
FileRequired and
FileAllowed. They can be used with both
Flask-WTF's and WTForms's
FileField classes.
FileAllowed works well with Flask-Uploads.
from flask_uploads import UploadSet, IMAGES from flask_wtf import FlaskForm from flask_wtf.file import FileField, FileAllowed, FileRequired images = UploadSet('images', IMAGES) class UploadForm(FlaskForm): upload = FileField('image', validators=[ FileRequired(), FileAllowed(images, 'Images only!') ])
It can be used without Flask-Uploads by passing the extensions directly.
class UploadForm(FlaskForm): upload = FileField('image', validators=[ FileRequired(), FileAllowed(['jpg', 'png'], 'Images only!') ])
Recaptcha¶
Flask-WTF also provides Recaptcha support through a
RecaptchaField:
from flask_wtf import FlaskForm, RecaptchaField from wtforms import TextField class SignupForm(FlaskForm): username = TextField('Username') recaptcha = RecaptchaField()
This comes together with a number of configuration, which you have to implement them.
Example of RECAPTCHA_PARAMETERS, and RECAPTCHA_DATA_ATTRS:
RECAPTCHA_PARAMETERS = {'hl': 'zh', 'render': 'explicit'} RECAPTCHA_DATA_ATTRS = {'theme': 'dark'}
For testing your application, if
app.testing is
True, recaptcha
field will always be valid for you convenience.
And it can be easily setup in the templates:
<form action="/" method="post"> {{ form.username }} {{ form.recaptcha }} </form>
We have an example for you: recaptcha@github. | https://flask-wtf.readthedocs.io/en/latest/form.html | 2017-11-17T21:05:41 | CC-MAIN-2017-47 | 1510934803944.17 | [] | flask-wtf.readthedocs.io |
SQLAlchemy 1.1 Documentation
SQLAlchemy 1.1 Documentation
current release
SQLAlchemy Core
- SQL Expression Language Tutorial
- SQL Statements and Expressions API
- Schema Definition Language
- Column and Data Types
- Column and Data Types
- Custom Types¶
- Overriding Type Compilation
- Augmenting Existing Types
- TypeDecorator Recipes
- Replacing the Bind/Result Processing of Existing Types
- Applying SQL-level Bind/Result Processing
- Redefining and Creating New Operators
- Creating New Types
- Base Type API
- Engine and Connection Use
- Core API Basics
Project Versions.sql.expression.SchemaEventTarget,” attribute is required, and can reference any TypeEngine class. Alternatively, the load_dialect_impl() method can be used to provide different type classes based on the dialect given; in this case, the “impl” variable
postgresql.JSONand
postgresql.
coerce_to_is_types= (<type 'NoneType'>,)¶
Specify those Python types which should be coerced at the expression level to “IS <constant>” when compared using
==(and same for.
New in version 0.8.2: Added
TypeDecorator.coerce_to_is_typesto_against_backend(dialect, conn_type)¶
- inherited from the
compare_against_backend()method of
TypeEngine.
dialect_impl(dialect)¶
- inherited from the
dialect_impl()method of
TypeEngine
Return a dialect-specific implementation for this
TypeEngine.
evaluates_none()¶
- inherited from the
evaluates_none()method of
TypeEngine DBAPI type object represented by this
TypeDecorator.
By default this calls upon
TypeEngine.get_dbapi_type()of the underlying “impl”.attribute of
TypeEngine.
type_engine(dialect)¶
Return a dialect-specific
TypeEngineinstance for this
TypeDecorator.
In most cases this returns a dialect-adapted form of the
TypeEnginetype represented by
self.impl. Makes usage of
dialect_impl()but also traverses into wrapped
TypeDecoratorinstances. encrypt
Bases:
sqlalchemy.types.TypeEngineby default, rather than falling onto the more fundamental behavior of
TypeEngine.coerce_compared_value(). | http://docs.sqlalchemy.org/en/rel_1_1/core/custom_types.html | 2017-11-17T20:52:17 | CC-MAIN-2017-47 | 1510934803944.17 | [] | docs.sqlalchemy.org |
Community Tools¶
Some useful tools made by the community for the community
KinanCity¶
Node.js CORS proxy server¶
A simple CORS proxy server that supports HTTP(S) request proxying.¶
This script allows you to verify accounts using a CORS proxy.
Note 1: HTTPS is required for use with the gmail verification script. Self-signed certificates work, but remember to add them to your trusted certificates on your OS.
Note 2: An update to the gmail verification script is planned to stay below the Google API limits (even if you refresh), retry on 503, automatically sleep when no proxies are available, and continue immediately when a proxy becomes available. For now, you’ll have to refresh yourself.
PGM Multi Loc¶
Easily visualize locations on a map before scanning, and generate a customized launch script.¶
Add multiple scan locations on the map. Automatically convert an area to a beehive. Resize and move the location on the map. Disable individual hives to stop scanning a specific location.
Generate a customized launch script, with the ability to edit the templates used for the individual commands. Pass in a list of account information that contains usernames, passwords, proxies, etc. | http://rocketmap.readthedocs.io/en/latest/extras/Community-Tools.html | 2017-11-17T21:08:56 | CC-MAIN-2017-47 | 1510934803944.17 | [] | rocketmap.readthedocs.io |
Default value of fields¶
When a record is created, each field, which doesn’t have a value specified, is set with the default value if exists.
The following class method:
Model.default_<field name>()
Return the default value for
field name.
This example defines an
Item model which has a default
since:
import datetime from trytond.model import ModelView, ModelSQL, fields class Item(ModelSQL, ModelView): "Item" __name__ = 'item' since = fields.Date('since') @classmethod def default_since(cls): return datetime.date.today()
See also method
Model.default_get:
default_get | http://trytond.readthedocs.io/en/latest/topics/models/fields_default_value.html | 2017-11-17T21:00:06 | CC-MAIN-2017-47 | 1510934803944.17 | [] | trytond.readthedocs.io |
This workflow allows a delegated administrator to add unmanaged virtual machines to a manual desktop pool in View. The unmanaged machines are in fact managed by a vCenter instance, but the vCenter instance has not been added to View.
Note:
This workflow is not for adding physical machines or non-vSphere virtual machines. To add those types of machines, see Adding Physical Machines and Non-vSphere Virtual Machines to Pools. | https://docs.vmware.com/en/VMware-Horizon-6/6.1/com.vmware.using.horizon-vro-plugin.doc/GUID-DDEA8637-9F20-46BF-995E-59EA48B4EBE0.html | 2017-11-17T21:33:23 | CC-MAIN-2017-47 | 1510934803944.17 | [] | docs.vmware.com |
Production¶
There are several steps needed to make Lemur production ready. Here we focus on making Lemur more reliable and secure.
Basics¶
Because of the sensitivity of the information stored and maintained by Lemur it is important that you follow standard host hardening practices:
- Run Lemur with a limited user
- Disabled any unneeded services
- Enable remote logging
- Restrict access to host
Credential Management¶
Lemur often contains credentials such as mutual TLS keys or API tokens that are used to communicate with third party resources and for encrypting stored secrets. Lemur comes with the ability to automatically encrypt these keys such that your keys not be in clear text.
The keys are located within lemur/keys and broken down by environment.
To utilize this ability use the following commands:
lemur lock
and
lemur unlock
If you choose to use this feature ensure that the keys are decrypted before Lemur starts as it will have trouble communicating with the database otherwise.
Entropy¶
Lemur generates private keys for the certificates it creates. This means that it is vitally important that Lemur has enough entropy to draw from. To generate private keys Lemur uses the python library Cryptography. In turn Cryptography uses OpenSSL bindings to generate keys just like you might from the OpenSSL command line. OpenSSL draws its initial entropy from system during startup and uses PRNGs to generate a stream of random bytes (as output by /dev/urandom) whenever it needs to do a cryptographic operation.
What does all this mean? Well in order for the keys that Lemur generates to be strong, the system needs to interact with the outside world. This is typically accomplished through the systems hardware (thermal, sound, video user-input, etc.) since the physical world is much more “random” than the computer world.
If you are running Lemur on its own server with its own hardware “bare metal” then the entropy of the system is typically “good enough” for generating keys. If however you are using a VM on shared hardware there is a potential that your initial seed data (data that was initially fed to the PRNG) is not very good. What’s more, VMs have been known to be unable to inject more entropy into the system once it has been started. This is because there is typically very little interaction with the server once it has been started.
The amount of effort you wish to expend ensuring that Lemur has good entropy to draw from is up to your specific risk tolerance and how Lemur is configured.
If you wish to generate more entropy for your system we would suggest you take a look at the following resources:
For additional information about OpenSSL entropy issues:
TLS/SSL¶
Nginx¶
Nginx is a very popular choice to serve a Python project:
- It’s fast.
- It’s lightweight.
- Configuration files are simple.
Nginx doesn’t run any Python process, it only serves requests from outside to the Python server.
Therefore, there are two steps:
- Run the Python process.
- Run Nginx.
You will benefit from having:
- the possibility to have several projects listening to the port 80;
- your web site processes won’t run with admin rights, even if –user doesn’t work on your OS;
- the ability to manage a Python process without touching Nginx or the other processes. It’s very handy for updates.
You must create a Nginx configuration file for Lemur. On GNU/Linux, they usually go into /etc/nginx/conf.d/. Name it lemur.conf.
proxy_pass just passes the external request to the Python process. The port must match the one used by the Lemur process of course.
You can make some adjustments to get a better user experience:; } }
This makes Nginx serve the favicon and static files which it is much better at than python.
It is highly recommended that you deploy TLS when deploying Lemur. This may be obvious given Lemur’s purpose but the sensitive nature of Lemur and what it controls makes this essential. This is a sample config for Lemur that also terminates TLS:; # modern; } }
Note
Some paths will have to be adjusted based on where you have choose to install Lemur.
Apache¶
An example apache config:
# HSTS (mod_headers is required) (15768000 seconds = 6 months) Header always set Strict-Transport-Security "max-age=15768000" ... # Set the lemur DocumentRoot to static/dist DocumentRoot /www/lemur/lemur/static/dist # Uncomment to force http 1.0 connections to proxy # SetEnv force-proxy-request-1.0 1 #Don't keep proxy connections alive SetEnv proxy-nokeepalive 1 # Only need to do reverse proxy ProxyRequests Off # Proxy requests to the api to the lemur service (and sanitize redirects from it) ProxyPass "/api" "" ProxyPassReverse "/api" "" </VirtualHost>
Also included in the configurations above are several best practices when it comes to deploying TLS. Things like enabling HSTS, disabling vulnerable ciphers are all good ideas when it comes to deploying Lemur into a production environment.
Note
This is a rather incomplete apache config for running Lemur (needs mod_wsgi etc.), if you have a working apache config please let us know!
See also
Mozilla SSL Configuration Generator
Supervisor¶
Supervisor is a very nice way to manage you Python processes. We won’t cover the setup (which is just apt-get install supervisor or pip install supervisor most of the time), but here is a quick overview on how to use it.
Create a configuration file named supervisor.ini:
[unix_http_server] file=/tmp/supervisor.sock [supervisorctl] serverurl=unix:///tmp/supervisor.sock [rpcinterface:supervisor] supervisor.rpcinterface_factory=supervisor.rpcinterface:make_main_rpcinterface [supervisord] logfile=/tmp/lemur.log logfile_maxbytes=50MB logfile_backups=2 loglevel=trace pidfile=/tmp/supervisord.pid nodaemon=false minfds=1024 minprocs=200 [program:lemur] command=python /path/to/lemur/manage.py manage.py start directory=/path/to/lemur/ environment=PYTHONPATH='/path/to/lemur/',LEMUR_CONF='/home/lemur/.lemur/lemur.conf.py' user=lemur autostart=true autorestart=true
The 4 first entries are just boiler plate to get you started, you can copy them verbatim.
The last one defines one (you can have many) process supervisor should manage.
It means it will run the command:
python manage.py start
In the directory, with the environment and the user you defined.
This command will be ran as a daemon, in the background.
autostart and autorestart just make it fire and forget: the site will always be running, even it crashes temporarily or if you restart the machine.
The first time you run supervisor, pass it the configuration file:
supervisord -c /path/to/supervisor.ini
Then you can manage the process by running:
supervisorctl -c /path/to/supervisor.ini
It will start a shell from which you can start/stop/restart the service.
You can read all errors that might occur from /tmp/lemur.log.
Periodic Tasks¶
Lemur contains a few tasks that are run and scheduled basis, currently the recommend way to run these tasks is to create a cron job that runs the commands.
There are currently three commands that could/should be run on a periodic basis:
- notify
- check_revoked
- sync
How often you run these commands is largely up to the user. notify and check_revoked are typically run at least once a day. sync is typically run every 15 minutes.
Example cron entries:
0 22 * * * lemuruser export LEMUR_CONF=/Users/me/.lemur/lemur.conf.py; /www/lemur/bin/lemur notify expirations */15 * * * * lemuruser export LEMUR_CONF=/Users/me/.lemur/lemur.conf.py; /www/lemur/bin/lemur source sync -s all 0 22 * * * lemuruser export LEMUR_CONF=/Users/me/.lemur/lemur.conf.py; /www/lemur/bin/lemur certificate check_revoked | http://lemur.readthedocs.io/en/latest/production/index.html | 2017-11-17T21:25:23 | CC-MAIN-2017-47 | 1510934803944.17 | [] | lemur.readthedocs.io |
Overview
Accepting payments with Straal is safe and secure – both for your business and your customers. Thanks to the highest-class security measures, you can rest assured that your customers' payment data will be fully secure throughout the entire process.
Straal offers a set of solutions limiting fraud risk – automatic control tools and 3-D Secure, an option of securing transactions by a one-off code sent by SMS to the customer from their issuing bank.
In this section you will learn about:
- PCI DSS
- 3-D Secure
- how Straal protects you from fraud | https://docs.straal.com/security/ | 2020-10-19T21:49:03 | CC-MAIN-2020-45 | 1603107866404.1 | [] | docs.straal.com |
Use the Policy Management screen to manage security settings
in a list of policies and apply policies to specified targets (groups or endpoints).
Task
Description
Add new policies
Click Add to create a new policy, configure a set of security
settings, and assign the policy to specified groups or endpoints.
For more information, see Configuring Policy
View or change policy settings
Click a name in the Policy column to open the Configure
Policy screen.
Copy policy settings
Select an existing policy and click Copy to create a new policy
with the same settings.
Delete policies
Select policies and click Delete.
Reorder the policy list
Drag a policy to a different row in the list. | https://docs.trendmicro.com/en-us/smb/worry-free-business-security-services-67-server-help/policy-management/policy-management_001/policy-management-in_002.aspx | 2020-10-19T21:47:59 | CC-MAIN-2020-45 | 1603107866404.1 | [] | docs.trendmicro.com |
-
-
for: following table.
File: You can upload a configuration file from your local machine and create jobs.
Once a job is created, you can choose to run the job immediately or schedule the job to be run..
Click Next, and then on the Select Instances tab, click Add Instances. Select the instances on which you want to run the job, and then click OK.
Click Next, and then Citrix ADM run run)
Click Next, on the Job Preview tab, you can evaluate and verify the commands to be run as a job.
Click Next, on the Execute tab, set the following conditions:
On Command Failure: What to do if a command fails: ignore the errors and continue the job, or stop further execution of the job. Choose an action from the drop-down list.
Execution Mode: Run the job immediately, or schedule execution for a later time. If you schedule execution for a later time, you must specify the execution frequency settings for the job. Choose the schedule you want the job to follow from the Execution Frequency drop-down list.
Under Execution Settings, select to run the job sequentially (one after the other), or in parallel (at the same time).
To have a job execution report emailed to a list of recipients, select the Email check box. | https://docs.citrix.com/en-us/citrix-application-delivery-management-service/networks/configuration-jobs/sd-wan-wo-instances.html | 2020-10-19T21:53:08 | CC-MAIN-2020-45 | 1603107866404.1 | [] | docs.citrix.com |
hazardlib¶
hazardlib (the Openquake Hazard Library) is open-source software for performing seismic hazard analysis.
What is hazardlib?¶
hazardlib includes modules for modeling seismic sources (point, area and fault), earthquake ruptures, temporal (e.g. Poissonian) and magnitude occurrence models (e.g. Gutenberg-Richter), magnitude/area scaling relationships, ground motion and intensity prediction equations (i.e. GMPEs and IPEs). Eventually it will offer a number of calculators for hazard curves, stochastic event sets, ground motion fields and disaggregation histograms.
hazardlib aims at becoming an open and comprehensive tool for seismic hazard analysis. The GEM Foundation () supports the development of the library by adding the most recent methodologies adopted by the seismological/seismic hazard communities. Comments, suggestions and criticisms from the community are always very welcome.
Development and support¶
hazardlib is being actively developed by GEM foundation as a part of OpenQuake project (though it doesn’t mean hazardlib depends on OpenQuake). The OpenQuake development infrastructure is used for developing hazardlib: the public repository is available on github:. A mailing list is available as well:. You can also ask for support on IRC channel #openquake on freenode. | https://docs.openquake.org/oq-hazardlib/stable/hazardlib.html | 2020-10-19T23:07:01 | CC-MAIN-2020-45 | 1603107866404.1 | [] | docs.openquake.org |
A transaction is the simultaneous purchase of one or more products. The Frosmo Platform registers each transaction, irrespective of the number of products purchased, as a single conversion.
What is transaction tracking?
Transaction tracking is the process of monitoring visitors for actions that qualify as transactions and collecting the data about those actions (transaction data). Transaction tracking also involves counting transactions as conversions and attributing those conversions to modifications, which the Frosmo Platform does automatically when it receives transaction data from a site. The data is stored in the Frosmo back end.
Transaction tracking allows you to monitor the revenue generated by your site and measure Frosmo's impact on that revenue. Transaction tracking is also a prerequisite for implementing features that rely on transaction data, such as generating product recommendations and segmenting visitors based on the products they have purchased, which in turn feed into revenue generation.
Figure: Transaction tracking in the Frosmo Platform (click to enlarge)
Transaction tracking generates transaction and modification statistics, which you can view in the Frosmo Control Panel.
For more information about conversions and conversion attribution, see the conversions user guide.
If you want to track a product conversions that do not involve a purchase, use conversion tracking.
Tracking transactions with the data layer
Tracking transactions with the data layer means triggering a transaction event whenever a visitor successfully completes an action that qualifies as a transaction. The data you pass in the transaction event defines the transaction.
To use the data layer on a site, the data layer module must be enabled for the site.
You can trigger transaction events from:
- Modifications (either from custom content or, if you're using a template, from the template content)
- Shared code
- Page code (meaning directly from your site source code)
Figure: Tracking transactions by triggering a transaction event from shared code (click to enlarge)
Triggering transaction events
To trigger a transaction event, call the
dataLayer.push() function with a transaction object containing the transaction data:
dataLayer.push({ transactionProducts: [{ id: 'string', name: 'string', price: 0, sku: 'string', /* Optional */ quantity: 0 }], /* Optional */ transactionId: 'string', transactionTotal: 0 });
Transaction object
The transaction object contains the data of a transaction event.
Table: Transaction object properties
Table: Transaction product object properties
Transaction }] });
Testing transaction tracking
To test that transactions are correctly tracked with the data layer:
- Go to the site.
- Enable console logging for Frosmo Core.
Go to a page where transactions are tracked. If transaction events are successfully triggered with the data layer, the browser console displays the following messages for each event:
EASY [events] info:: conversion(contains the transaction data parsed from the transaction object)
EASY [events] info:: product.purchase(contains the ID and price for a single purchased product item)
EASY [events] info:: dataLayer(contains the transaction object passed to the data layer)
The following figure shows an example of the transaction event messages.
Transaction events log a dedicated
product.purchasemessage for each individual purchased item. For example:
- If a transaction contains three copies of one product, the transaction event logs three
product.purchasemessages.
- If a transaction contains three copies of one product and two copies of another product, the transaction event logs five
product.purchasemessages.
If you want more details on a data layer call, select the Network tab in the developer tools, and check the
transactionrequest to the Optimizer API. If the status is
200, the request completed successfully. | https://docs.frosmo.com/pages/viewpage.action?pageId=42797910 | 2020-10-19T21:31:39 | CC-MAIN-2020-45 | 1603107866404.1 | [] | docs.frosmo.com |
CompareStringW function (stringapiset.h)
Compares two character strings, for a locale specified by identifier.
Syntax
int CompareStringW( LCID Locale, DWORD dwCmpFlags, _In_NLS_string_(cchCount1)PCNZWCH lpString1, int cchCount1, _In_NLS_string_(cchCount2)PCNZWCH lpString2, int cchCount2 );
Parameters
Locale
Locale identifier of the locale used for the comparison. You can use the MAKELCID macro to create a locale identifier or use one of the following predefined values.
- LOCALE_CUSTOM_DEFAULT
- LOCALE_CUSTOM_UI_DEFAULT
- LOCALE_CUSTOM_UNSPECIFIED
- LOCALE_INVARIANT
- LOCALE_SYSTEM_DEFAULT
- LOCALE_USER_DEFAULT
dwCmpFlags
Flags that indicate how the function compares the two strings. For detailed definitions, see the dwCmpFlags parameter of CompareStringEx.
lpString1
Pointer to the first string to compare.
cchCount1
Length of the string indicated by lpString1, excluding the terminating null character. This value represents bytes for the ANSI version of the function and wide characters for the Unicode. This value represents bytes for the ANSI version of the function and wide characters for the Unicode version. The application can supply a negative value if the string is null-terminated. In this case, the function determines the length automatically.
Return value
Returns the values described for CompareStringEx.
Remarks
See Remarks for CompareStringEx.
If your application is calling the ANSI version of CompareString, the function converts parameters via the default code page of the supplied locale. Thus, an application can never use CompareString to handle UTF-8 text.
Normally, for case-insensitive comparisons, CompareString maps the lowercase "i" to the uppercase "I", even when the locale is Turkish or Azerbaijani. The NORM_LINGUISTIC_CASING flag overrides this behavior for Turkish or Azerbaijani. If this flag is specified in conjunction with Turkish or Azerbaijani, LATIN SMALL LETTER DOTLESS I (U+0131) is the lowercase form of LATIN CAPITAL LETTER I (U+0049) and LATIN SMALL LETTER I (U+0069) is the lowercase form of LATIN CAPITAL LETTER I WITH DOT ABOVE (U+0130).
Starting with Windows 8: The ANSI version of the function is declared in Winnls.h, and the Unicode version is declared in Stringapiset.h. Before Windows 8, both versions were declared in Winnls.h.
Requirements
See also
Handling Sorting in Your Applications
National Language Support
National Language Support Functions
Security Considerations: International Features
Using Unicode Normalization to Represent Strings | https://docs.microsoft.com/en-us/windows/win32/api/stringapiset/nf-stringapiset-comparestringw | 2020-10-19T23:01:38 | CC-MAIN-2020-45 | 1603107866404.1 | [] | docs.microsoft.com |
Message-ID: <855984485.146487.1603143658053.JavaMail.confluence@docs1.parasoft.com> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_146486_2035382231.1603143658051" ------=_Part_146486_2035382231.1603143658051 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location:
In this section:
You can monitor and collect coverage data from .NET managed code during = manual or automated functional tests performed on a running web application= that is deployed on IIS server. You can also send coverage data and test r= esults to DTP. The application coverage information can be displayed in the= DTP Coverage Explorer (see the "Coverage Explorer" chapter in the DTP user= manual), which provides insights about how well the application is tested,= as well as the quality of your tests.
.NET Core Web Applications
dotTEST can collect coverage for .NET Core web applications that are dep=
loyed on IIS server. Alternatively, you can use the
coverage.exe tool shipped with dotTEST; see Application Coverage for Stan=
dalone Applications for details.
The following components are required for collecting coverage:
dotTEST ships with a component called the coverage agent. The coverage a= gent is attached to the application under test (AUT) and monitors the code = being executed as the AUT runs. When the coverage agent is attached to the = AUT, a REST API is exposed that enables you to mark the beginning and end o= f each test and test session. During test execution, interactions with= the coverage agent and AUT are written to a dynamic coverage data file, wh= ich contains markers that specify which lines of code were touched. <= /p>
The steps that follow:
Test results are also sent to DTP from the tool that executes the tests = (i.e., SOAtest, tests executed by dotTEST, manual tests, etc.) in a re= port.xml file. If the build IDs for the coverage.xml file and the report ma= tch, DTP is able to correlate the data and display the coverage information= .
If you use a source control system, ensure that your source control sett= ings are properly configured; see Source Control Settings.
The following steps are required to prepare the application under t= est (AUT):
Run the following test configuration on the solution:
dottest= cli.exe -config "builtin://Collect Static Coverage" -solution SOLUTION_PATH==20
The dottestcli console output will indicate where the static coverag= e data is saved:
Saving = static coverage information into: 'C:\Users\[USER]\Documents\Parasoft\dotTE= ST\Coverage\Static\[FILE].data=20
By default, coverage is measured for the entire web application. You can= customize the scope of coverage by adding the following switches when coll= ecting static coverage to measure specific parts of the application (see Configuring the Test Scope, for usage information): &nbs= p;
dottest= cli.exe -config "builtin://Collect Static Coverage"=20 -solution "C:\Devel\FooSolution\FooSolution.sln"=20 -resource "FooSolution/QuxProject"=20 -include "C:\Devel\FooSolution\src\QuxProject\**\*.cs"=20 -exclude "C:\Devel\FooSolution\src\QuxProject\**\tests\**\*.cs"=20
The
-resource switch points to a path inside the solution, =
while the
-include and
-exclude switches should b=
e paths in the file system.
The scope information is stored in a scope configuration file, which can= be provided to the IIS manager tool during web server configuration (see <= a href=3D"#ApplicationCoverageforWebApplications-AttachingtheCoverageAgentt= otheAUT">Attaching the Coverage Agent to the AUT). The output from the = console will indicate the location of the scope configuration file:
Saving = static coverage scope configuration into: 'C:\Users\[USER]\Documents\Paraso= ft\dotTEST\Coverage\Static\scope.instrumentation.txt'=20
It is not possible to use the application coverage scope file for web pr= ojects that are compiled on IIS. This is because the target assemblies of I= IS compilations are named unpredictably. Scope files can be used safely whe= n the assembly name loaded by IIS can be predetermined before coverage coll= ection starts.
Invoke the dotTEST IIS Manager tool on this machine to enable runtim= e coverage collection inside IIS:
dottest= _iismanager.exe=20
You may need to configure the dotTEST IIS Manager with additional = options, see IIS Manager Options.
dottest_iismanager initializes the en= vironment for the web server (IIS) and behaves like a service, enabling you= to execute tests and collect coverage. The service is ready and waiting fo= r commands as long as the following message is printed to the output: = ;
Write '= exit' and hit Enter to close dottest_iismanager=20
A test session and test can be started even if the tested website or app= lication has not been loaded yet.
Go to the following address to check the status of the coverage agen=
t:<=
br>You should receive the following response:
{"sessi= on":null,"test":null}=20
You can collect coverage information for multiple users that are simulta=
neously accessing the same web application server. This requires launching =
the dotTEST IIS Manager with the
-multiuser switch:
dottest= _iismanager.exe -multiuser=20
See the Coverage Agent Manager (CAM) section of the DTP documentation fo= r details.
By default, IIS application pool processes are recycled after 20 minutes= of idle time, which can have negative consequences on a test session. You = can prevent this behavior by changing the default value so that people work= ing with the application do not experience unexpected stops and restarts du= ring a test session.
Test Configuration and Execution with SOAtest
You can use SOAtest to run functional tests (refer the Application Cover=
age chapter of the SOAtest documentation to set up the test configuration),=
as well as execute manual tests. At the end of the test session, coverage =
will be saved in
runtime_coverage_[timestamp].data files in th=
e directory specified in SOAtest. This information will eventually be merge=
d with the static coverage data to create a coverage.xml file and uploaded =
to DTP.
For tests executed by SOAtest, the SOAtest XML report will need to be up= loaded to DTP. See the "Uploading Rest Results to DTP" section in the Appli= cation Coverage topic in the SOAtest documentation for details.
report.coverage.image= s- Specifies a set of tags that are used to create cover= age images in DTP. A coverage image is a unique identifier for aggregating = coverage data from runs with the same build ID. DTP supports up to three co= verage images per report.
session.tag- Specifi= es a unique identifier for the test run and is used to distinguish differen= t runs on the same build.
build.id- Specifies = a build identifier used to label results. It may be unique for each build, = but it may also label several test sessions executed during a specified bui= ld.
Copy the runtime coverage and static coverage files to the same mach=
ine and run
dottestcli with the following switches:<=
/p>
-runtimeCoverage: Specifies the path to runtime coverage t= hat you download with CAM (see Coverage Agent Manager (CAM) section of= the DTP documentation for details). You can provide a path to an individua= l .data file with coverage information from one testing session, or a path = to a folder that contains many .data files from multiple testing sessions.<= /li>
-staticCoverage: Specifies the path to the static coverage= file (see Generating the Static Coverage File).
dottest= cli.exe -runtimeCoverage [path] -report [path] -publish -settings [path] -o= ut [path] -staticCoverage [path]=20
You can stop the process of collecting dynamic coverage data in one of t= he following ways:
Write
exit in the open console when the following =
message will be printed to the output to stop dottest_iismanager:
Write '= exit' and hit Enter to close dottest_iismanager=20
Send a request to the service by entering the following URL in the b=
rowser:
Stop dottest_iismanager only when all test sessions are finished. Applic= ation coverage will no longer be collected when the service stops, so it is= important that dottest_iismanager runs continously while performing tests = to collect coverage.
If any errors occur when dottest_iismanager exits, which prevent the cle=
an-up of the Web Server environment, then execute dottest_iismanager with t=
he
-stop option to bring back the original Web Server environm=
ent and settings:
dottest= _iismanager.exe -stop=20
You can use the Coverage Explorer in DTP to review the application cover= age achieved during test execution. See the DTP documentation for details o= n viewing coverage information. | https://docs.parasoft.com/exportword?pageId=38643244 | 2020-10-19T21:40:58 | CC-MAIN-2020-45 | 1603107866404.1 | [] | docs.parasoft.com |
The Mesh Renderer takes the geometry from the Mesh Filter and renders it at the position defined by the GameObject’s Transform component.
This page contains the following sections:
The Materials section in the Mesh Renderer Inspector lists all the Materials that the Mesh Renderer is using. Meshes imported from 3D modelling software can use multiple Materials, and each sub-Mesh uses one Material from the list.
If a Mesh contains more Materials than sub-Meshes, Unity renders the last sub-Mesh with each of the remaining Materials, one on top of the next. This allows you to set up multi-pass rendering on that sub-Mesh. However, this can impact the performance at run time. Fully opaque Materials overwrite the previous layers, which causes a decrease in performance with no advantage.
The Lighting section contains properties for how this Mesh Renderer interacts with lighting in Unity.
This section is visible only if only if Receive Global Illumination is set to Lightmaps.
When you’ve baked your lighting data (menu: Window > Rendering > Lighting > Generate Lighting ), this section also shows the lightmaps in the Scene that this Renderer uses use. Here, you can read relevant information about the Baked Lightmap and the Realtime Lightmap, if applicable.
The Probes section contains properties relating to Light Probes and Reflection Probes.
The Additional Settings contain additional properties. | https://docs.unity3d.com/es/2020.1/Manual/class-MeshRenderer.html | 2020-10-19T22:42:35 | CC-MAIN-2020-45 | 1603107866404.1 | [] | docs.unity3d.com |
content you put into the repository and turns it into information you can use more effectively.
This document goes into detail about JBoss DNA and its capabilities, features, architecture, components, extension points, security, configuration, and testing.., the majority of metadata is either found in or managed by other systems: databases, applications, file systems, source code management systems, services, and content management systems, and even other repositories. We can't pull the information out and duplicate it, because then we risk having multiple copies that are out-of-sync. But we do want to access it through a homogenous API, since that will make our lives significantly easier.
The answer to this apparent dichotomy is federation. We can connect to these back-end systems to dynamically access the content and project it into a single, unified repository. We can alsoBoss DNA project is building unified metadata repository system that is compliant with JCR. is to create a REST-ful API to allow the JCR content to be accessed easily by other applications written in other languages.
The roadmap for JBoss DNA is managed in the project's JIRA instance . The roadmap shows the different tasks, requirements, issues and other activities that have been targeted to each of the upcoming releases. (The roadmap report always shows the next three releases.)
By convention, JIRA issues not immediately targeted to a release will be reviewed periodically to determine the appropriate release where they can be targeted. Any issue that is reviewed and that does not fit in a known release will be targeted to the Future Releases bucket.
At the start of a release, the project team reviews the roadmap, identifies the goals for the release, and targets (or retargets) the issues appropriately.
Rather than use a single formal development methodology, the consists of the following modules:
dna-jcr
provides the JBoss DNA implementation of the JCR API, which relies upon a repository connector, such as the
Federation Connector (see
dna-connector-federation
).
dna-integration-tests provides a home for all of the integration tests that involve more components that just unit tests. Integration tests are often more complicated, take longer, and involve testing the integration and functionality of many components (whereas unit tests focus on testing a single class or component and may use stubs or mock objects for other components).
The following modules are optional extensions that may be used selectively and as needed (and are located in the source
under the
extensions/
directory):
dna-maven-classloader
is a small library that provides a
ClassLoaderFactory
implementation that can create
java.lang.ClassLoader
instances capable of loading classes given a Maven Repository and a list of Maven coordinates. The Maven Repository
can be managed within a JCR repository.
dna-connector-federation is a DNA repository connector that federates, integrates and caches information from multiple sources (via other repository connectors).
dna-connector-inmemory is a simple DNA repository connector that manages content within memory. This can be used as a simple cache or as a transient repository.
dna-connector-jbosscache is a DNA repository connector that manages content within a JBoss Cache instance. JBoss Cache is a powerful cache implementation that can serve as a distributed cache and that can persist information. The cache instance can be found via JNDI or created and managed by the connector.
dna-sequencer-zip is a DNA sequencer that extracts from ZIP archives the files (with content) and folders.-java is a DNA sequencer that extracts the package, class/type, member, documentation, annotations, and other information from Java source files.-cnd is a DNA sequencer that extracts JCR node definitions from JCR Compact Node Definition (CND) files.
dna-mimetype-detector-aperture is a DNA MIME type detector that uses the Aperture library to determine the best MIME type from the filename and file contents..
Finally, there is a module that represents the whole JBoss DNA project:
dna is the parent project that aggregates all of the other projects and that contains some asset files to create the necessary Maven artifacts during a build.
Each of these modules is a Maven project with a group ID of
org.jboss.dna
. All of these projects correspond to artifacts in the
JBoss Maven 2 Repository
. | http://docs.jboss.org/modeshape/0.2/manuals/reference/html/introduction.html | 2013-05-18T20:49:06 | CC-MAIN-2013-20 | 1368696382851 | [] | docs.jboss.org |
6.1 Creating Units
Each import or export sig-spec ultimately refers to a sig-id, which is an identifier that is bound to a signature by define-signature.
In a specific import or export position, the set of identifiers bound or required by a particular sig-id can be adjusted in a few ways:
(prefix id sig-spec) as an import binds the same as sig-spec, except that each binding is prefixed with id. As an export, this form causes definitions using the id prefix to satisfy the exports required by sig-spec.
(rename sig-spec (id id) ...) as an import binds the same as sig-spec, except that the first id is used for the binding instead of the second id (where sig-spec by itself must imply a binding that is bound-identifier=? to second id). As an export, this form causes a definition for the first id to satisfy the export named by the second id in sig-spec.
(only sig-spec id ...) as an import binds the same as sig-spec, but restricted to just the listed ids (where sig-spec by itself must imply a binding that is bound-identifier=? to each id). This form is not allowed for an export.
(except sig-spec id ...) as an import binds the same as sig-spec, but excluding all listed ids (where sig-spec by itself must imply a binding that is bound-identifier=? to each id). This form is not allowed for an export.
As suggested by the grammar, these adjustments to a signature can be nested arbitrarily.
A unit’s declared imports are matched with actual supplied imports by signature. That is, the order in which imports are supplied to a unit when linking is irrelevant; all that matters is the signature implemented by each supplied import. One actual import must be provided for each declared import. Similarly, when a unit implements multiple signatures, the order of the export signatures does not matter.
To support multiple imports or exports for the same signature, an import or export can be tagged using the form (tag id sig-spec). When an import declaration of a unit is tagged, then one actual import must be given the same tag (with the same signature) when the unit is linked. Similarly, when an export declaration is tagged for a unit, then references to that particular export must explicitly use the tag.
A unit is prohibited syntactically from importing two signatures that are not distinct, unless they have different tags; two signatures are distinct only if they share no ancestor through extends. The same syntactic constraint applies to exported signatures. In addition, a unit is prohibited syntactically from importing the same identifier twice (after renaming and other transformations on a sig-spec), exporting the same identifier twice (again, after renaming), or exporting an identifier that is imported.
When units are linked, the bodies of the linked units are executed in an order that is specified at the linking site. An optional (init-depend tagged-sig-id ...) declaration constrains the allowed orders of linking by specifying that the current unit must be initialized after the unit that supplies the corresponding import. Each tagged-sig-id in an init-depend declaration must have a corresponding import in the import clause.
Each id in a signature declaration means that a unit implementing the signature must supply a variable definition for the id. That is, id is available for use in units importing the signature, and id must be defined by units exporting the signature.
Each define-syntaxes form in a signature declaration introduces a macro that is available for use in any unit that imports the signature. Free variables in the definition’s expr refer to other identifiers in the signature first, or the context of the define-signature form if the signature does not include the identifier.
Each define-values form in a signature declaration introduces code that effectively prefixes every unit that imports the signature. Free variables in the definition’s expr are treated the same as for define-syntaxes.
Each define-values-for-export form in a signature declaration introduces code that effectively suffixes every unit that exports the signature. Free variables in the definition’s expr are treated the same as for define-syntaxes.
Each are treated the same as for define-syntaxes.
Each (open sig-spec) adds to the signature everything specified by sig-spec.
Each (struct id (field ...) struct-option ...) adds all of the identifiers that would be bound by (struct id (field ...) field-option ...), where the extra option #:omit-constructor omits the constructor identifier.
Each (sig-form-id . datum) extends the signature in a way that is defined by sig-form-id, which must be bound by define-signature-form. One such binding is for struct/ctc.
When a define-signature form includes an extends clause, then the define signature automatically includes everything in the extended signature. Furthermore, any implementation of the new signature can be used as an implementation of the extended signature. | http://docs.racket-lang.org/reference/creatingunits.html | 2013-05-18T20:57:19 | CC-MAIN-2013-20 | 1368696382851 | [] | docs.racket-lang.org |
Installation and Configuration Guide
Local Navigation
Install the BlackBerry Enterprise Server software
- Log in to the computer using the Windows® account that you created and that has the proper permissions. This account will run the services for the BlackBerry Enterprise Server.
- Stop the IBM® Lotus® Domino® server.
- Change the startup type of the IBM Lotus Domino server to manual.
- On the BlackBerry Enterprise Server installation media, double-click setup.exe.
- In the Setup type dialog box, select one of the following options:
- If this installation process is the first installation of BlackBerry Enterprise Server software in a BlackBerry Domain, select I would like the installation process to create a BlackBerry Configuration Database.
- For all other installations of the BlackBerry Enterprise Server software, select I would like the installation process to use an existing BlackBerry Configuration Database .
- In the Setup options dialog box, consider the following information:
- You can select or remove BlackBerry Enterprise Server components from the Additional Components list.
- To install the BlackBerry Administration Service only during the first installation, click Remote component. In the Additional components list, expand BlackBerry administration. Click BlackBerry Adminstration Service.
- To permit administrators to log in to the BlackBerry Administration Service and BlackBerry Monitoring Service using their Microsoft® Active Directory® credentials, in the BlackBerry administration list, click Use Active Directory authentication.
- In the Accounts and Folders dialog box, in the Name field, type the name of the BlackBerry Enterprise Server that you want the BlackBerry Administration Service to display.
- When the setup application prompts you to restart the computer, click Yes.
- Log in to the computer using the same account that you used in step 1.
- In the Advanced database options dialog box, consider the following information:
- If you want to configure database mirroring,.
- If you configured the database server to use static ports, you must clear the Use dynamic ports check box. If the static port number is not 1433, type the port number in the Port field.
- In the Application extensibility information dialog box, consider the following information:
- You can type a FQDN that corresponds to an DNS record in the DNS server that maps the FQDN into LDAP settings dialog box, consider the following information:
- You can type the URL of the LDAP server that hosts the BlackBerry device users using the following format: ldap://<computer_name>:<port>; where <computer_name> is the DNS name of the LDAP server, and <port> is the port number that the LDAP server listens for connections on (by default, port 389).
- You can type the distinguished name of the search base URL for the area of the directory tree that contains the BlackBerry device users.
- You can type the name and password for the administrator account that has permissions to log in to and search the LDAP server. You can type the name for the administrator account as the login name, also known as the security account manager name (for example, besadmin).
-. You can use the web addresses to log in to the BlackBerry Enterprise Server components that you installed.
- If required, add the name of the BlackBerry MDS Integration Service pool to the DNS server and change the name of the computer.
-.
- If the setup application installed Microsoft SQL Server Express 2003, install the hotfix for Microsoft Security Bulletin MS09-004 (for more information, visit to read article KB960082).. | http://docs.blackberry.com/en/admin/deliverables/7312/Install_the_BES_software_350722_11.jsp | 2013-05-18T20:31:21 | CC-MAIN-2013-20 | 1368696382851 | [] | docs.blackberry.com |
API17:JInstallerComponent:: build /> | http://docs.joomla.org/index.php?title=JInstallerComponent::_buildAdminMenus/11.1&diff=57138&oldid=50057 | 2013-05-18T20:48:44 | CC-MAIN-2013-20 | 1368696382851 | [] | docs.joomla.org |
(2d) 432, 173 NW (2d) 175.
An act validating existing sewerage districts previously held to be unconstitutionally organized is within the power of the legislature. Madison Metropolitan Sewerage Dist. v. Stein, 47 W (2d) 349, 177 NW (2d) 131.
The power given vocational district boards to levy taxes does not violate this section. The manner of appointing board members is constitutional. West Milwaukee v. Area Bd. Vocational, T. & A. Ed. 51 W (2d) 356, 187 NW (2d) 387.
One legislature cannot dictate action by a future legislature or a future legislative committee. State ex rel. Warren v. Nusbaum, 59 W (2d) 391, 208 NW (2d) 780.
Delegation of legislative power under 66.016 (2) (d) is constitutional. Westring v. James, 71 W (2d) 462, 238 NW (2d) 695.
Legislature may constitutionally prescribe criminal penalty for violation of administrative rule. State v. Courtney, 74 W (2d) 705, 247 NW (2d) 714.
Provision of 144.07 (1m), which voids DNR sewerage connection order if electors in affected town area reject annexation to city ordered to extend sewerage service, represents valid legislative balancing and accommodation of 2 statewide concerns: urban development and pollution control. City of Beloit v. Kallas, 76 W (2d) 61, 250 NW (2d) 342.
Section 147.035 (2) does not unlawfully delegate legislative power. Niagara of Wis. Paper Corp. v. DNR, 84 W (2d) 32, 268 NW (2d) 153 (1978).
Sections 46.03 (18) and 46.10 do not constitute an unlawful delegation of legislative power. In Matter of Guardianship of Klisurich, 98 W (2d) 274, 296 NW (2d) 742 (1980).
Mediation - arbitration under 111.70 (4) (cm) is constitutional delegation of legislative authority. Milwaukee County v. District Council 48, 109 W (2d) 14, 325 NW (2d) 350 (Ct. App. 1982).
Court will invalidate legislation only for constitutional violations. State ex rel. La Follette v. Stitt, 114 W (2d) 358, 338 NW (2d) 684 (1983).
Reference in 102.61 to general federal vocational rehabilitation law as amended necessarily references current federal law where act named in 102.61 had been repealed and the law rewritten in another act. Because reference is stated as part of contingency, it does not constitute unlawful delegation of legislative authority to U.S. Congress. Dane County Hospital & Home v. LIRC, 125 W (2d) 308, 371 NW (2d) 815 (Ct. App. 1985).
Proposed amendments to bills creating variable obscenity laws, which.
In enacting the Natural Gas Act (15 U.S.C. s. 717 et seq.) Congress did not intend to regulate only interstate pipeline companies. Rather the legislative history indicates a congressional intent to give the Federal Power Commission jurisdiction over the rates of all wholesalers of natural gas transported in interstate commerce, whether by a pipeline company or not and whether occurring before, during, or after transmission by an interstate pipeline company. Phillips Petroleum Co. v. Wisconsin, 347 US 672.
]
Recent.
Requirement of 8.15 (4) (b), 1975 stats., that candidate reside in district at time of filing nomination papers unconstitutionally adds to candidacy qualifications required by Art. IV, sec. 6. 65 Atty. Gen. 159. US such office were raised during his legislative term. If so elected, he is limited by 13.04 (1) to the emoluments of the office prior to such increase. A legislator is not eligible, however, for appointment to an office created during his term or to an office the emoluments of which appointive office were raised during his.
Privilege under this section can be invoked by legislator only if legislator is subpoenaed, not if aide is subpoenaed. State v. Beno, 116 W (2d) 122, 341 NW (2d) 668 (1984).
IV,16
Privilege in debate. Section 16.
No member of the legislature shall be liable in any civil action, or criminal prosecution whatever, for words spoken in debate.
Legislator invoked privilege under this section to immunize aide from subpoena to testify as to investigation conducted by aide. State v. Beno, 116 W (2d) 122, 341 NW (2d) 668 (1984).
In federal criminal prosecution against state legislator there is no legislative privilege barring introduction of evidence of legislator's legislative acts. United States v. Gillock, 445 US (2d) 64, 303 NW (2d) 626 (1981).
Specific prison siting provision in budget act did not violate this section. Test for distinguishing private or local law established. Milwaukee Brewers v. DH&SS, 130 W (2d) 79, 387 NW (2d) 254 (1986).
Challenged legislation, although general on its face, violated this section because classification employed isn't based on any substantial distinction between classes employed nor is it germane to purposes of the legislation. Brookfield v. Milw. Sewerage, 144 W (2d) 896, 426 NW w. Sewerage Dist., 171 W (2d) 400, 491 NW (2d) 484 (1992).
Two prong analysis for determining violations of this section discussed. City of Oak Creek v. DNR, 185 W (2d) 424, 518 NW employe creation of a different system of town government. Thompson v. Kenosha County, 64 W (2d) 673, 221 NW (2d) 845.
Only enactments which unnecessarily interfere with the system's uniformity in a material respect are invalidated by this section. Classifications based upon population have generally been upheld. Section 60.19 (1) (c) does not violate uniformity clause. State ex rel. Wolf v. Town of Lisbon, 75 W (2d) 152, 248 NW (2d) 450.
County has standing to challenge validity of rule not adopted in conformity with 227.02 through 227.025, 1985 stats. [now 227.16 - 227.21]. Dane County v. H&SS Dept. 79 W (2d) 323, 255 NW (2d) 539. (2d) 898, 569 NW (2d) 784 (Ct. App. 1997).
County executive's partial-veto power is similar to governor's power. 73 Atty. Gen. 92.
County board may not amend resolution, ordinance or part thereof vetoed by county executive, but can pass separate substitute for submission to executive. Board has duty promptly to reconsider vetoed resolutions, ordinances or parts thereof. 74 Atty. Gen. 73.
IV,24
Gambling. Section 24.
[
As amended April 1965, April 1973, April 1977, April 1987 and April 1993
] as provided by law.
]
The state lottery board may conduct any lottery game which complies with ticket language in constitution and ch. 565. Term "lottery" in constitution and statutes does not include any other forms of betting, playing or operation of gambling machines and devices and other forms of gambling defined in ch. 945. Legislature can statutorily authorize other non-lottery gambling including casino-type games. 79 Atty. Gen. 14.
Under which would propose to license and regulate certain "amusement devices" which are gambling machines would authorize "gambling" in violation of Art. IV, section 24. OAG 2-96.
State's interest in preventing organized crime infiltration of tribal bingo enterprise does not justify state regulation in light of compelling federal and tribal interest supporting it. California v. Cabazon Band of Indians, 480 US.
Legality of appointing nominee to board of regents when such person is a major stockholder in a printing company that is under contract to the state discussed. 60 Atty. Gen. 172.
IV,26
Extra compensation; salary change. Section 26.
Down
Down
/1997/related/wiscon/_8
false
wisconsinconstitution
/1997/related/wiscon/_8/_215/_10
wisconsinconstitution/1997/IV,9
wisconsinconstitution/1997/IV,9
section
true
PDF view
View toggle
Cross references for section
View sections affected
References to this
Reference lines
Clear highlighting
Permanent link here
Permanent link with tree | http://docs.legis.wisconsin.gov/1997/related/wiscon/_8/_215/_10 | 2013-05-18T20:22:02 | CC-MAIN-2013-20 | 1368696382851 | [] | docs.legis.wisconsin.gov |
Timeline
01/24/10:
- 20:44 Ticket #2330 (recording from usb headset and playing back audio at the same time prints ...) created by
- Steps to reproduce: 0) connect usb headset to openmoko 1) alsactl restore …
- 14:45 Ticket #2329 (arecord -D plughw:0,0 => oops BUG? (substream->stream != ...) created by
- Steps to reproduce: 1) arecord -D plughw:1,0 -t raw -r 48000 -f S16_LE -c …
01/21/10:
- 11:11 Changeset [5794] by
- Specified the KiCad? revision our build process is tested for. (Reported by …
01/18/10:
- 23:57 Changeset [5793] by
- gtk_init calls setlocale(..., ""), which can upset all uses of scanf, …
01/15/10:
- 09:54 Ticket #2328 (touschreen sometimes stops generating events) created by
- Steps to reproduce: 1) hexdump -C /dev/input/event1 2) touch the …
01/13/10:
- 09:24 Ticket #2327 (wifi sometimes fails to start with "ar6000_activate: Failed to activate ...) created by
- Steps to reproduce: 1) echo s3c2440-sdi > …
01/12/10:
- 03:18 Changeset [5792] by
- When clicking on an instance, fped used to select the currenly active …
01/11/10:
- 18:39 Changeset [5791] by
- some small components
01/10/10:
- 23:11 Changeset [5790] by
- correct to be PADS view not PINS view.
- 15:59 Changeset [5789] by
- orientation markings
- 14:09 Changeset [5788] by
- When editing a variable or value, the actual value is now shown. The …
- 12:04 Changeset [5787] by
- add bf2520
- 11:35 Changeset [5786] by
- measurement positions
- 11:08 Changeset [5785] by
- improvements based on feedback from Werner
- 02:15 Changeset [5784] by
- extra measurements
- 01:58 Changeset [5783] by
- reworking or pcf50633
01/07/10:
- 21:30 Changeset [5782] by
- - zmap/main.c (process_file): added skipping of comment lines - …
- 20:57 Changeset [5781] by
- - gp2rml/gp2rml (output_paths): added distance and run time estimate
01/06/10:
- 21:23 Changeset [5780] by
- Added gp2rml, gnuplot to RML converter.
- 13:07 Changeset [5779] by
- - align/align.c (process_file): skip comment-only lines - align/align.c: …
01/05/10:
- 13:46 Ticket #2325 (adventures in building Qi on Debian armel) closed by
- fixed: Thanks for reporting. Fixed in HEAD. I did it a little bit differently, …
- 11:44 Ticket #2326 (BUG: sleeping function called from invalid context at ...) created by
- I just noticed this when reading old logs, I do not know how to reproduce …
01/04/10:
- 18:09 Changeset [5778] by
- Added "align" tool to map model coordinates to the workpiece.
- 13:05 Changeset [5777] by
- Fix small typo.
01/03/10:
- 01:36 Changeset [5776] by
- Added tooltips for editable status area items. All tools are now tipped. …
- 00:58 Changeset [5775] by
- Fped no longer needs "mak dep".
- 00:58 Changeset [5774] by
- The build process used dependencies only if they were explicitly …
- 00:27 Changeset [5773] by
- Yet more tooltips. This time, for all non-editable fields in the status …
01/02/10:
- 23:04 Changeset [5772] by
- When repeatedly clicking on a stack of items to cycle through the stack, a …
- 22:38 Changeset [5771] by
- When selecting a tool and then selecting an object, the tool still …
- 16:47 Changeset [5770] by
- Canvas tooltips now work. The problem was that expose events set the paint …
- 13:55 Changeset [5769] by
- More work on tooltips and a build fix. - Makefile: use PID in temporary …
- 12:21 Ticket #2325 (adventures in building Qi on Debian armel) created by
- First try with a fresh checkout of master: […] Ok, lets add …
01/01/10:
- 23:08 Changeset [5768] by
- Added tooltips to frame/items list.
- 15:59 Changeset [5767] by
- Added tooltips to all icons acting as buttons.
12/31/09:
- 17:24 Changeset [5766] by
- The comment of the previous commit contained a slight exaggeration: we did …
- 17:13 Changeset [5765] by
- One could add a new frame if a frame with an underscore as its name …
- 16:11 Changeset [5764] by
- When deleting the locked frame, icon and internal reference weren't reset. …
- 10:34 Changeset [5763] by
- When selecting an expression of an assignment and then selecting another …
12/27/09:
- 23:07 Changeset [5762] by
- Indicated more prominently that fped is licensed under the GPLv2 and …
- 21:55 Changeset [5761] by
- Added package name (transfig) of fig2dev
- 01:54 Ticket #2324 (resume_reason can show EINT01_GSM even if GSM is off) created by
- Steps to reproduce: 1) receive a call while the phone is in suspend 2) …
Note: See TracTimeline for information about the timeline view. | http://docs.openmoko.org/trac/timeline?from=2010-01-24T14%3A45%3A14Z%2B0100&precision=second | 2013-05-18T20:38:53 | CC-MAIN-2013-20 | 1368696382851 | [] | docs.openmoko.org |
Revision history of "M16 Templates More"
There is no edit history for this page.
This page has been deleted. The deletion and move log for the page are provided below for reference.
- 11:59, 7 December 2012 JoomlaWikiBot (Talk | contribs) deleted page M16 Templates More (Robot: Deleting all pages from category Candidates_for_deletion) | http://docs.joomla.org/index.php?title=M16_Templates_More&action=history | 2013-05-18T20:40:28 | CC-MAIN-2013-20 | 1368696382851 | [] | docs.joomla.org |
dircmp instances are built using this constructor:
['RCS', 'CVS', 'tags']. hide is a list of names to hide, and defaults to
[os.curdir, os.pardir].
The dircmp class provides the following methods:
sys.stdout) a comparison between a and b.
The dircmp offers a number of interesting attributes that may be used to get various bits of information about the directory trees being compared.
Note that via __getattr__() hooks, all attributes are computed lazilly, so there is no speed penalty if only those attributes which are lightweight to compute are used. | http://docs.python.org/release/2.3.2/lib/dircmp-objects.html | 2013-05-18T20:22:53 | CC-MAIN-2013-20 | 1368696382851 | [] | docs.python.org |
Attribute OnChange Event (Client API reference)
The
OnChange event occurs in the following situations:
- Data in a form field has changed and focus is lost. There is an exception to this behavior that applies to Two-Option (Boolean) fields that are formatted to use radio buttons or check boxes. In these cases the event occurs immediately.
- Data changes on the server are retrieved to update a field when the form is refreshed, such as after a record is saved.
- The attribute.fireOnchange method is used.
All fields support the
OnChange event. Data in the field is validated before and after the
OnChange event.
The
OnChange event does not occur if the field is changed programmatically using the attribute.setValue method. If you want event handlers for the
OnChange event to run after you set the value you must use the
formContext.data.entity attribute.fireOnchange method in your code.
Note
Although the Status field supports the
OnChange event, the field is read-only on the form so the event cannot occur through user interaction. Another script could cause this event to occur by using the fireOnchange method on the field.
Methods supported for this event
There are three methods you can use to work with the
OnChange event for an attribute:
Related topics
attributes (Client API reference)
Feedback | https://docs.microsoft.com/en-us/powerapps/developer/model-driven-apps/clientapi/reference/events/attribute-onchange | 2019-07-16T04:10:50 | CC-MAIN-2019-30 | 1563195524502.23 | [] | docs.microsoft.com |
- Connect BI Tools >
- Connect from Microsoft Excel
Connect from Microsoft Excel¶
On this page
You can import data from a MongoDB collection into a Microsoft Excel spreadsheet with the MongoDB Connector for BI and an ODBC data connection.
Prerequisites¶
- Windows
- macOS
To connect Excel to the MongoDB Connector for BI, you must first create a system Data Source Name (DSN).
Connecting Excel to the MongoDB Connector for BI requires the following:
64-bit version of Excel. Run the following command to determine whether the 64-bit or 32-bit version of Excel is installed:
The following table lists the possible outputs of the command above and their respective meanings:
For information on upgrading to the 64-bit version of Excel, see Microsoft Support.
Note
Both the 64-bit and 32-bit versions of iODBC are included with the installer. If you use iODBC to test your DSN, you must use the 64-bit version of the application.
iODBC is not recommended for creating or modifying your Data Source Name (DSN). To create or modify your DSN, use the ODBC Manager application that is included with the MongoDB ODBC driver.
Create a Data Source Name (DSN).
Important
Excel requires the following settings in your Data Source Name (DSN) configuration:
- The
DATABASEkeyword must be specified in your DSN. If the
DATABASEkeyword is not set, Excel will not recognize any collections.
- TLS/SSL certificates must be stored in the
/Library/ODBC/directory. All TLS/SSL keywords in the DSN must point to the certificates in this directory.
Procedure¶
Before beginning this tutorial, make sure you have a running
mongosqld instance.
- Windows
- macOS
Enter Credentials¶
If you are running the BI Connector with authentication enabled, in the ensuing dialog enter the username and password used to connect to your BI Connector instance.
Note
When specifying a username, include the authentication
database for the user. For example,
salesadmin?source=admin.
If you are not running the BI Connector with authentication enabled, leave these fields blank.
Click Ok.
Select a Table¶
- In the left side of the dialog, click your server name to expand the list collections in your database.
- Select the collection from the list from which contains the data you want to import.
- To view your data before importing, click Run to run the generated SQL statement. Your data appears in the table below the statement.
- Click Return Data.
Import the Data¶
Select how you would like to import the data into Excel.
You can choose to import the data into:
- An Existing Sheet, specifying in which cell to begin the table.
- A New Sheet, automatically beginning the table in cell
A1.
- A PivotTable in a new sheet.
Click OK to complete the import process.
Example
The following image shows the results of importing data from
the
supplySales table into a new sheet:
Note
Excel for Mac may not properly display special characters, such as letters with accent marks. | https://docs.mongodb.com/bi-connector/master/connect/excel/ | 2019-07-16T05:18:39 | CC-MAIN-2019-30 | 1563195524502.23 | [array(['../../_images/excel-table-select-mac.png',
'Screenshot of the Table selection dialog'], dtype=object)
array(['../../_images/excel-import-data-final-example-mac.png',
'Screenshot of imported data in a new sheet'], dtype=object)] | docs.mongodb.com |
This extension provides Kalman filtering capabilities to Siddhi. This allows you to detect outliers of input data. Following are the functions of the Kalman Filter extension.
Kalman Filter function
This function uses measurements observed over time containing noise and other inaccuracies, and produces estimated values for the current measurement using Kalman algorithms. The parameters used are as follows.
measuredValue: The sequential change in the observed measurement. e.g., 40.695881
measuredChangingRate: The rate at which the measured change is taking place. e.g., The velocity with which the measured value is changed can be 0.003 meters per second.
measurementNoiseSD: The standard deviation of the noise. e.g., 0.01
timestamp: The time stamp of the time at which the measurement was carried out.
Overview
Content Tools
Activity | https://docs.wso2.com/display/CEP420/Kalman+Filter+Extension | 2019-07-16T04:09:08 | CC-MAIN-2019-30 | 1563195524502.23 | [] | docs.wso2.com |
Official Release
Date – May 28, 2019
Download – Build 4.12.00
New Features
V-Ray
-
V-Ray
-
V-Ray
- | https://docs.chaosgroup.com/pages/viewpage.action?pageId=52072571 | 2019-07-16T03:54:37 | CC-MAIN-2019-30 | 1563195524502.23 | [] | docs.chaosgroup.com |
System Menu
The System Menu includes tools to import and export data, install extensions, manage system caches and indexes, manage permissions, backups, system notifications, and custom variables.
The System menu is on the Admin sidebar, click System.
The Import and Export tools give you the ability to manage multiple records in a single operation. You can import new items, and also update, replace, and delete existing products and tax rates.
Manage integrations and extensions for your store.
Manage your system resources, including cache and index management, backups, and installation settings.
Magento uses roles and permissions to create different levels of access for Admin users, which gives you the ability to grant permission on a “need to know” basis to people who work on your site.
Other Settings
Manage the notifications in your inbox, create custom variables, and generate a new encryption key.
Provides access to Client and IPN logs, if enabled in the Developer Options section of the Amazon Pay configuration. | https://docs.magento.com/m2/ce/user_guide/system/system-menu.html | 2019-07-16T05:01:02 | CC-MAIN-2019-30 | 1563195524502.23 | [] | docs.magento.com |
DSE Search architecture
An overview of DSE Search architecture.
In a distributed environment, the data is spread over multiple nodes. Deploy DSE Search nodes in their own datacenter to run DSE Search on all nodes.
Data is written to the database first, and then indexes are updated next.
When you update a table using CQL, the search index.
DSE Search terms
- A search index (formerly referred to as a search core)
- A collection
- One shard of a collection
See the following table for a mapping between database and DSE Search concepts.
How DSE Search works
- Each document in a search index is unique and contains a set of fields that adhere to a user-defined schema.
- The schema lists the field types and defines how they should be indexed.
- DSE Search maps search indexes to tables.
- Each table has a separate search index on a particular node.
- Solr documents are mapped to rows, and document fields to columns.
- A shard is indexed data for a subset of the data on the local node.
- The keyspace is a prefix for the name of the search index and has no counterpart in Solr.
- Search queries are routed to enough nodes to cover all token ranges.
- The query is sent to all token ranges replication, a node or search index contains more than one partition (shard) of table (collection) data. Unless the replication factor equals the number of cluster nodes, the node or search index contains only a portion of the data of the table or collection. | https://docs.datastax.com/en/dse/6.0/dse-dev/datastax_enterprise/dbArch/archSearch.html | 2019-07-16T04:34:08 | CC-MAIN-2019-30 | 1563195524502.23 | [array(['../images/srch_overview.png', None], dtype=object)] | docs.datastax.com |
Contents Now Platform Custom Business Applications Previous Topic Next Topic Enable a code review Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Enable a code review You can require a code review of all changes pushed to an instance. Navigate to Team Development > Properties. Select the Yes check box for If this property is set to Yes, code review is required before pushing to this instance (com.snc.teamdev.requires_codereview). Click Save. Setting this property adds the Code Review Requests module to the application menu and requires all changes pushed to this instance to remain in the Awaiting Code Review stage until someone in the Team Development Code Reviewers group approves them. Related tasksCreate an exclusion policyDefine a remote instanceSelect the parent instanceSet up an instance hierarchyRelated referenceAccess rights for developers On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/kingston-application-development/page/build/team-development/task/t_EnableCodeReview.html | 2019-07-16T04:46:55 | CC-MAIN-2019-30 | 1563195524502.23 | [] | docs.servicenow.com |
Data Validation in AWS Snowball
Following, you'll find information on how Snowball validates data transfers, and the manual steps you can take to ensure data integrity during and after a job.
Topics
Checksum Validation of Transferred Data
When you copy a file from a local data source using the Snowball client or the Amazon S3 Adapter for Snowball, to the Snowball, a number of checksums are created. These checksums are used to automatically validate data as it's transferred.
At a high level, these checksums are created for each file (or for parts of large files). These checksums are never visible to you, nor are they available for download. The checksums are used to validate the integrity of your data throughout the transfer, and will ensure that your data is copied correctly.
When these checksums don't match, we won't import the associated data into Amazon S3.
Common Validation Errors
Validations errors can occur. Whenever there's a validation error, the corresponding data (a file or a part of a large file) is not written to the destination. The common causes for validation errors are as follows:
Attempting to copy symbolic links.
Attempting to copy files that are actively being modified. This will not result in a validation error, but it will cause the checksums to not match at the end of the transfer.
Attempting to copy whole files larger than 5 TB in size.
Attempting to copy part sizes larger than 512 MB in size.
Attempting to copy files to a Snowball that is already at full data storage capacity.
Attempting to copy files to a Snowball that doesn't follow the Object Key Naming Guidelines for Amazon S3.
Whenever any one of these validation errors occurs, it is logged. You can take steps to manually identify what files failed validation and why as described in the following sections:
Manual Data Validation for Snowball During Transfer – Outlines how to check for failed files while you still have the Snowball on-premises.
Manual Data Validation for Snowball After Import into Amazon S3 – Outlines how to check for failed files after your import job into Amazon S3 has ended.
Manual Data Validation for Snowball During Transfer
You can use manual validation to check that your data was successfully transferred to your device. You can also use manual validation if you receive an error after attempting to transfer data. Use the following section to find how to manually validate data on a Snowball.
Check the failed-files log – Snowball client
When you run the Snowball client
copy command, a log showing any files that
couldn't be transferred to the Snowball is generated. If you encounter an error during
data transfer, the path for the failed-files log will be printed to the terminal.
This log
is saved as a comma-separated values (.csv) file. Depending on your operating system,
you
find this log in one of the following locations:
Windows –
C:/Users/
<username>/AppData/Local/Temp/snowball-
<random-character-string>/failed-files
Linux –
/tmp/snowball-
<random-character-string>/failed-files
Mac –
/var/folders/gf/
<random-character-string>/<random-character-string>/snowball-
7464536051505188504/failed-files
Use the --verbose option for the Snowball client copy command
When you run the Snowball client
copy command, you can use the
--verbose option to list all the files that are transferred to the
Snowball. You can use this list to validate the content that was transferred to the
Snowball.
Check the logs – Amazon S3 Adapter for Snowball
When you run the Amazon S3 Adapter for Snowball to copy data with the AWS CLI, logs are generated. These logs are saved in the following locations, depending on your file system:
Windows –
C:/Users/
<username>/.aws/snowball/logs/snowball_adapter_
<year_month_date_hour>
Linux –
/home/.aws/snowball/logs/snowball_adapter_
<year_month_date_hour>
Mac –
/Users/
<username>/.aws/snowball/logs/snowball_adapter_
<year_month_date_hour>
Use the --stopOnError copy option
If you're transferring with the Snowball client, you can use this option to stop the transfer process in the event a file fails. This option stops the copy on any failure so you can address that failure before continuing the copy operation. For more information, see Options for the snowball cp Command.
Run the Snowball client's validate command
The Snowball client's
snowball validate command can validate that the files on the
Snowball were all completely copied over to the Snowball. If you specify a path, then
this command validates the content pointed to by that path and its subdirectories.
This
command lists files that are currently in the process of being transferred as incomplete
for
their transfer status. For more information on the validate command, see Validate Command for the Snowball
Client.
Manual Data Validation for Snowball After Import into Amazon S3
After an import job has completed, you have several options to manually validate the data in Amazon S3, as described following.
Check job completion report and associated logs
Whenever data is imported into or exported out of Amazon S3, you get a downloadable PDF job report. For import jobs, this report becomes available at the end of the import process. For more information, see Getting Your Job Completion Report and Logs in the Console.
S3 inventory
If you transferred a huge amount of data into Amazon S3 in multiple jobs, going through each job completion report might not be an efficient use of time. Instead, you can get an inventory of all the objects in one or more Amazon S3 buckets. Amazon S3 inventory provides a .csv file showing your objects and their corresponding metadata on a daily or weekly basis. This file covers objects for an Amazon S3 bucket or a shared prefix (that is, objects that have names that begin with a common string).
Once you have the inventory of the Amazon S3 buckets that you've imported data into, you can easily compare it against the files that you transferred on your source data location. In this way, you can quickly identify what files where not transferred.
Use the Amazon S3 sync command
If your workstation can connect to the internet, you can do a final validation of
all
your transferred files by running the AWS CLI command
aws s3 sync. This command
syncs directories and S3 prefixes. This command recursively copies new and updated
files
from the source directory to the destination. For more information, see.
Important
If you specify your local storage as the destination for this command, make sure that you have a backup of the files you sync against. These files are overwritten by the contents in the specified Amazon S3 source. | https://docs.aws.amazon.com/snowball/latest/ug/validation.html | 2019-07-16T04:47:22 | CC-MAIN-2019-30 | 1563195524502.23 | [] | docs.aws.amazon.com |
The first step in adding a DNS record is finding out who is your DNS provider for your domain name. Usually it is your domain registrar or your hosting company. You can use DNS Dig Tool to find out your DNS provider easily.
As you see from the screenshot above for our domain name we use Cloudflare. For your case it may be domaincontrol.com (GoDaddy), systemdns.com or googledomains.com if you bought your domain with Shopify, or some name related to your hosting company which will hint who is your DNS provider.
Below you will find the steps of adding CNAME records in Cloudflare, GoDaddy, Shopify and hostings with cPanel. GoDaddy
- Log in to your account at godaddy.com by clicking the My Account tab.
- Under the All Domains section, find your domain you want to configure and click on the domain name link to open the domain settings page.
- Open the Manage DNS link on the bottom of the domain settings page.
- Click on the Add button under the Records list in DNS Manager.
- Set Type to CNAME.
- Set Host to the language code you wish to add.
- Set Points to to GTranslate server name mentioned in your instructions email.
- Click Save.
Adding a CNAME record in hosting control panel (cPanel)
- Login into your hosting panel
- Open the DNS Simple Zone Editor
- Under "Add a CNAME Record" section set Name to the language code you wish to add and CNAME to GTranslate server name mentioned in your instructions email.
- Click Add CNAME Record button.
Adding a CNAME record in Shopify
If you bought your domain name from Shopify directly you will need to follow the steps below:
- From your Shopify admin, go to Online Store → Domains.
- In the domains list section, click Manage.
- Click DNS Settings on top of the screen.
- Click Add custom record and select the CNAME record type.
- Set Name to the language code you want to add and Points to to GTranslate server name mentioned in your GTranslate app.
- Click Confirm.
Add a CNAME record (host-specific steps)
You can find more info about adding CNAME records for host specific cases on
Verifying that the CNAME record has been added
To verify that the CNAME record was successfully added you can use DNS Dig Tool.
If everything is done properly you will see the GTranslate server name in the Answer section.
Note: If you are having issues finding your DNS manager or adding CNAME records in your DNS manager, feel free to contact our Live chat and we will help you. | https://docs.gtranslate.io/en/articles/1348901-how-to-add-cname-records-in-dns-manager | 2019-07-16T04:48:30 | CC-MAIN-2019-30 | 1563195524502.23 | [array(['https://downloads.intercomcdn.com/i/o/40593916/aa7d37c354b306ed5ddb4dba/cloudflare_add_cname.gif',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/40596304/d2aec5ecedb4cf3fcb2bed8a/godaddy_cname_add.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/40597666/9b69e54f99f7abb192faf3ab/cpanel_cname_add.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/40600379/7ed7a7176d8e511771b42f5f/shopify_cname_add.png',
None], dtype=object) ] | docs.gtranslate.io |
The Alfresco Full Text Search (FTS) query text can be used standalone or it can be embedded in CMIS-SQL using the contains() predicate function. The CMIS specification supports a subset of FTS. The full power of FTS can not be used and, at the same time, maintain portability between CMIS repositories.
%(cm:name cm:title cm:description ia:whatEvent ia:descriptionEvent lnk:title lnk:description TEXT)
When FTS is embedded in CMIS-SQL, only the CMIS-SQL-style property identifiers (cmis:name) and aliases, CMIS-SQL column aliases, and the special fields listed can be used to identify fields. The SQL query defines tables and table aliases after from and join clauses. If the SQL query references more than one table, the contains() function must specify a single table to use by its alias. All properties in the embedded FTS query are added to this table and all column aliases used in the FTS query must refer to the same table. For a single table, the table alias is not required as part of the contains() function.
When FTS is used standalone, fields can also be identified using prefix:local-name and {uri}local-name styles. | https://docs.alfresco.com/5.2/concepts/rm-searchsyntax-intro.html | 2019-07-16T03:53:49 | CC-MAIN-2019-30 | 1563195524502.23 | [] | docs.alfresco.com |
Introduction to library
The library is where you manage your content. You can upload image & video files, as well as create cards and add links to external content such as web pages.
Note
See our Supported media player comparison for Appspace App based devices for all the file types that Appspace supports.
The library and user groups
If you have organized users into different user groups, each user group can have its own library; this allows those users to share content privately amongst their group. A user belonging to more than one user group will be able to see all libraries available to them. A common library also allows users to share content with every one within the account.
Helpful tip
If you drag and drop content from one folder to another in the same library, this moves the content. If you drag and drop it to a folder in another library, the content is copied.
Note
To view privileges required to configure the Library, please refer to the table in the Introduction to user management article. | https://docs.appspace.com/appspace/6.1/appspace/library/introduction/ | 2019-07-16T05:06:30 | CC-MAIN-2019-30 | 1563195524502.23 | [] | docs.appspace.com |
Hive Warehouse Connector supported types
The Hive Warehouse Connector maps most Apache Hive types to Apache Spark types and vice versa, but there are a few exceptions that you must manage.
Spark-Hive supported types mapping
The following types are supported for access through HiveWareHouseConnector library:
Notes:
- * StringType (Spark) and String, Varchar (Hive)
A Hive String or Varchar column is converted to a Spark StringType column. When a Spark StringType column has maxLength metadata, it is converted to a Hive Varchar column; otherwise, it is converted to a Hive String column.
- ** Timestamp (Hive)
The Hive Timestamp column loses submicrosecond precision when converted to a Spark TimestampType column, because a Spark TimestampType column has microsecond precision, while a Hive Timestamp column has nanosecond precision.
Hive timestamps are interpreted to be in
UTCtime. When reading data from Hive, timestamps are adjusted according to the local timezone of the Spark session. For example, if Spark is running in the
America/New_Yorktimezone, a Hive timestamp
2018-06-21 09:00:00is imported into Spark as
2018-06-21 05:00:00. This is due to the 4-hour time difference between
America/New_Yorkand
UTC. | https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.1.0/integrating-hive/content/hive_hivewarehouseconnector_supported_types.html | 2019-07-16T05:21:57 | CC-MAIN-2019-30 | 1563195524502.23 | [] | docs.hortonworks.com |
Contents Now Platform Administration Previous Topic Next Topic Reference default many-to-many relationships Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Reference default many-to-many relationships Some many-to-many relationships are defined by default. To reference many-to-many relationships that are available in the base system, administrators can enter sys_collection.list in the navigation filter. Note: Only use this table to view many-to-many relationships in the base system. To create a new relationship, always use the Many-to-Many Definitions table. On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/jakarta-platform-administration/page/administer/table-administration/reference/r_RefDefaultManyToManyRels.html | 2019-07-16T04:44:52 | CC-MAIN-2019-30 | 1563195524502.23 | [] | docs.servicenow.com |
Contents Security Operations Previous Topic Next Topic Lookup source rate limits Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Lookup source rate limits You can define the rate that different types of lookups are performed to limit the number of requests that are sent to an external lookup source. After you have defined rate limits, you can apply them to different lookup sources. Define rate limitsYou can define the rate that different types of lookups are performed to balance the load in your lookup queue. Conditions defined in the rate limit determine whether the rate limits are applied to queued entries.Apply lookup rate limits to lookup sourcesAfter you have defined lookup rate limits using Lookup source rate limits, you can apply rate limits to specific lookup sources.Related tasksDefine supported lookup typesAdd a lookup source On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/jakarta-security-management/page/product/threat-intelligence/concept/c_ScanRateLimits.html | 2019-07-16T04:57:36 | CC-MAIN-2019-30 | 1563195524502.23 | [] | docs.servicenow.com |
All users are given access to a 30-day free trial of Xara Cloud Premium subscription.
After the 30 days the premium trial will be finished, and your account will be switched to the free Starter Subscription. You do not lose access to Xara Cloud. You do not lose any documents. All that happens when the Premium trial is complete is that some advance Premium features will not be available. You keep using Xara Cloud as you have been.
Need an extra week to try Premium? Contact us in the product.
The Starter Subscription can be upgraded to a Premium subscription at anytime. Simply click the Upgrade button in the Settings area.
Learn more about the Xara subscription options: | https://docs.xara.com/en/articles/1895752-what-happen-when-the-30-day-trial-is-over | 2019-07-16T05:04:30 | CC-MAIN-2019-30 | 1563195524502.23 | [] | docs.xara.com |
Migrating from pre-2.8
When migrating from a version older than ArangoDB 2.8 please note that starting with ArangoDB 2.8 the behaviour of the
require function more closely mimics the behaviour observed in Node.js and module bundlers for browsers, e.g.:
In a file
/routes/examples.js (relative to the root folder of the service):
require('./my-module')will be attempted to be resolved in the following order:
/routes/my-module(relative to service root)
/routes/my-module.js(relative to service root)
/routes/my-module.json(relative to service root)
/routes/my-module/index.js(relative to service root)
/routes/my-module/index.json(relative to service root)
require('lodash')will be attempted to be resolved in the following order:
/routes/node_modules/lodash(relative to service root)
/node_modules/lodash(relative to service root)
- ArangoDB module
lodash
- Node compatibility module
lodash
- Bundled NPM module
lodash
require('/abs/path')will be attempted to be resolved in the following order:
/abs/path(relative to file system root)
/abs/path.js(relative to file system root)
/abs/path.json(relative to file system root)
/abs/path/index.js(relative to file system root)
/abs/path/index.json(relative to file system root)
This behaviour is incompatible with the source code generated by the Foxx generator in the web interface before ArangoDB 2.8.
Note: The
org/arangodb module is aliased to the new name
@arangodb in ArangoDB 3.0.0 and the
@arangodb module was aliased to the old name
org/arangodb in ArangoDB 2.8.0. Either one will work in 2.8 and 3.0 but outside of legacy services you should use
@arangodb going forward.
Foxx queue
In ArangoDB 2.6 Foxx introduced a new way to define queued jobs using Foxx scripts to replace the function-based job type definitions which were causing problems when restarting the server. The function-based jobs have been removed in 2.7 and are no longer supported at all.
CoffeeScript
ArangoDB 3.0 no longer provides built-in support for CoffeeScript source files, even in legacy compatibility mode. If you want to use an alternative language like CoffeeScript, make sure to pre-compile the raw source files to JavaScript and use the compiled JavaScript files in the service.
The request module
The
@arangodb/request module when used with the
json option previously overwrote the string in the
body property of the response object of the response with the parsed JSON body. In 2.8 this was changed so the parsed JSON body is added as the
json property of the response object in addition to overwriting the
body property. In 3.0 and later (including legacy compatibility mode) the
body property is no longer overwritten and must use the
json property instead. Note that this only affects code using the
json option when making the request.
Bundled NPM modules
The bundled NPM modules have been upgraded and may include backwards-incompatible changes, especially the API of
joi has changed several times. If in doubt you should bundle your own versions of these modules to ensure specific versions will be used.
The utility module
lodash is now available and should be used instead of
underscore, but both modules will continue to be provided. | https://docs.arangodb.com/3.1/Manual/Foxx/Migrating2x/Wayback.html | 2017-05-22T23:14:01 | CC-MAIN-2017-22 | 1495463607242.32 | [] | docs.arangodb.com |
Azure Service Broker
This documentation describes the Microsoft Azure Service Broker for Pivotal Cloud Foundry (PCF).
Azure Service Broker extends Cloud Foundry with Azure-managed services that can be consumed by applications. It exposes services in the Marketplace, manages the provisioning and de-provisioning of service instances, and provides credentials for an application to consume the resource.
Overview
Azure Service Broker currently supports the following Azure services:
- Azure Storage
- Azure Redis Cache
- Azure DocumentDB
- Azure Service Bus
- Azure Event Hubs
- Azure SQL Database
Product Snapshot
The following table provides version and version-support information about Azure Service Broker:
Requirements
Azure Service Broker has the following requirements:
PCF v1.8 or later
Azure Subscription and service principal
A SQL database
Feedback
Please provide any bugs, feature requests, or questions to the Pivotal Cloud Foundry Feedback list or send an email to Azure Service Broker Support Team.
License
Apache License Version 2.0 | https://docs.pivotal.io/partners/azure-sb/index.html | 2017-05-22T23:26:31 | CC-MAIN-2017-22 | 1495463607242.32 | [] | docs.pivotal.io |
Exec Binding
Execute commands as you would enter on the command line, returning the output (possibly transformed) as the bound item’s state. Also, execute command lines in response to commands sent to bound items.
Considerations
- The user under which you are running openHAB should have the necessary permissions in order to execute your command lines.
- When using the
sshcommand, you should use private key authorization, since the password cannot be read from the command line.
- There is also a binding specifically for openHAB 2 here.
Binding Configuration
This binding does not have a configuration.
Item Configuration
Update item states
When updating the states of items based on executing a command line (an “in” binding):
exec="<[<commandLine to execute>:<refreshintervalinmilliseconds>:(<transformationrule>)]"
where:
<commandLine to execute>is the command line to execute. See Formatting and Splitting sections below.
<refreshintervalinmilliseconds>is the frequency at which to repeatedly execute the command line.
<transformationrule>is optional, and can be used to transform the string returned from the command before updating the state of the item.
Example item:
Number Temperature "Ext. Temp. [%.1f°C]" { exec="<[curl -s]" }
Sending commands
When executing a command line in response to the item receiving a command (an “out” binding):
exec=">[<openHAB-command>:<commandLine to execute>] (>[<openHAB-command>:<commandLine to execute>]) (>[...])"
where:
<openHAB-command>is the openHAB command that will trigger the execution of the command line. Can be
ON,
OFF,
INCREASE, etc., or the special wildcard command ‘
*’ which is called in cases where no direct match could be found
<commandLine to execute>is the command to execute. See Formatting and Splitting sections below.
Example item:
Number SoundEffect "Play Sound [%d]" { exec=">[1:open /mp3/gong.mp3] >[2:open /mp3/greeting.mp3] >[*:open /mp3/generic.mp3]" }
Old Format
Deprecated format (do not use; retained for backward compatibility only):
exec="<openHAB-command>:<commandLine to execute>[,<openHAB-command>:<commandLine to execute>][,...]"
Formatting
You can substitute formatted values into the command using the syntax described here.
- the current date, like
%1$tY-%1$tm-%1$td.
- the current command or state, like
%2$s(out bindings only)
- the current item name (like
%3$s).
Splitting
Sometimes the
<commandLine to execute> isn’t executed properly. In that case, another exec-method can be used. To accomplish this please use the special delimiter
@@ to split command line parameters.
Examples
Turn a Computer ON and OFF
On possible useage is to turn a computer on or off. The Wake-on-LAN binding could be bound to a Switch item, so that when the switch receives an ON command, A Wake-on-LAN message is sent to wake up the computer. When the switch item receives an OFF command, it will call the Exec binding to connect via ssh and issue the shutdown command. Here is the example item:
Switch MyLinuxPC "My Linux PC" {[OFF:ssh [email protected] shutdown -p now]" }
KNX Bus to Exec Command
The example below combines three bindings to incorporate the following behavior: query the current state of the NAS with the given IP address. When receiving an OFF command over KNX or the user switches to OFF manually then send the given command line via the exec binding.
Switch Network_NAS "NAS" (Network, Status) {[OFF:ssh [email protected] shutdown -p now]" }
More Examples[1:open /mp3/gong.mp3] >[2:open /mp3/greeting.mp3] >[*:open /mp3/generic.mp3]" exec="<[curl -s]" exec="<[/bin/sh@@-c@@uptime | awk '{ print $10 }':60000:REGEX((.*?))]" exec="<[execute.bat %1$tY-%1$tm-%1$td %2$s %3$s:60000:REGEX((.*?))]" exec="<[php ./configurations/scripts/script.php:60000:REGEX((.*?))]" exec="<[/bin/sh@@-c@@uptime | awk '{ print $10 }':]" // deprecated format exec="OFF:ssh [email protected] shutdown -p now" exec="OFF:some command, ON:'some other\, more \'complex\' \\command\\ to execute', *:fallback command" exec="1:open /path/to/my/mp3/gong.mp3, 2:open /path/to/my/mp3/greeting.mp3, *:open /path/to/my/mp3/generic.mp3" | http://docs.openhab.org/addons/bindings/exec1/readme.html | 2017-05-22T23:33:00 | CC-MAIN-2017-22 | 1495463607242.32 | [] | docs.openhab.org |
SQL.
Related Tasks
Use the following topics to get started with SQL Server utility. | https://docs.microsoft.com/en-us/sql/relational-databases/manage/sql-server-utility-features-and-tasks | 2017-05-23T00:49:29 | CC-MAIN-2017-22 | 1495463607242.32 | [] | docs.microsoft.com |
SQLite Extensions Reference¶
To make it easier to analyze log data from within lnav, there are several built-in extensions that provide extra functions and collators beyond those provided by SQLite. The majority of the functions are from the extensions-functions.c file available from the sqlite.org web site.
Tip: You can include a SQLite database file on the command-line and use lnav‘s interface to perform queries. The database will be attached with a name based on the database file name.
Commands¶
A SQL command is an internal macro implemented by lnav.
- .schema - Open the schema view. This view contains a dump of the schema for the internal tables and any tables in attached databases.
Environment¶
Environment variables can be accessed in queries using the usual syntax of “$VAR_NAME”. For example, to read the value of the “USER” variable, you can write:
;SELECT $USER;
Math¶
Basic mathematical functions:
- cos(n)
- sin(n)
- tan(n)
- cot(n)
- cosh(n)
- sinh(n)
- coth(n)
- acos(n)
- asin(n)
- atan(r1,r2)
- atan2(r1,r2)
- exp(n)
- log(n)
- log10(n)
- power(x,y)
- sign(n) - Return one of 3 possibilities +1,0 or -1 when the argument is respectively positive, 0 or negative.
- sqrt(n)
- square(n)
- ceil(n)
- floor(n)
- pi()
- degrees - Convert radians to degrees
- radians - Convert degrees to radians
Aggregate functions:
- stddev
- variance
- mode
- median
- lower_quartile
- upper_quartile
String¶
Additional string comparison and manipulation functions:
- difference(s1,s2) - Computes the number of different characters between the soundex value fo 2 strings.
- replicate(s,n) - Given a string (s) in the first argument and an integer (n) in the second returns the string that constains s contatenated n times.
- proper(s) - Ensures that the words in the given string have their first letter capitalized and the following letters are lower case.
- charindex(s1,s2), charindex(s1,s2,n) - Given 2 input strings (s1,s2) and an integer (n) searches from the nth character for the string s1. Returns the position where the match occured. Characters are counted from 1. 0 is returned when no match occurs.
- leftstr(s,n) - Given a string (s) and an integer (n) returns the n leftmost (UTF-8) characters if the string has a length<=n or is NULL this function is NOP.
- rightstr(s,n) - Given a string (s) and an integer (n) returns the n rightmost (UTF-8) characters if the string has a length<=n or is NULL this function is NOP
- reverse(s) - Given a string returns the same string but with the characters in reverse order.
- padl(s,n) - Given an input string (s) and an integer (n) adds spaces at the beginning of (s) until it has a length of n characters. When s has a length >=n it’s a NOP. padl(NULL) = NULL
- padr(s,n) - Given an input string (s) and an integer (n) appends spaces at the end of s until it has a length of n characters. When s has a length >=n it’s a NOP. padr(NULL) = NULL
- padc(s,n) - Given an input string (s) and an integer (n) appends spaces at the end of s and adds spaces at the begining of s until it has a length of n characters. Tries to add has many characters at the left as at the right. When s has a length >=n it’s a NOP. padc(NULL) = NULL
- strfilter(s1,s2) - Given 2 string (s1,s2) returns the string s1 with the characters NOT in s2 removed assumes strings are UTF-8 encoded.
- regexp(re,s) - Return 1 if the regular expression ‘re’ matches the given string.
- regexp_replace(str, re, repl) - Replace the portions of the given string that match the regular expression with the replacement string. NOTE: The arguments for the string and the regular expression in this function are reversed from the plain regexp() function. This is to be somewhat compatible with functions in other database implementations.
- startswith(s1,prefix) - Given a string and prefix, return 1 if the string starts with the given prefix.
- endswith(s1,suffix) - Given a string and suffix, return 1 if the string ends with the given suffix.
- regexp_match(re,str) - Match and extract values from a string using a regular expression. The “re” argument should be a PCRE with captures. If there is a single capture, that captured value will be directly returned. If there is more than one capture, a JSON object will be returned with field names matching the named capture groups or ‘col_N’ where ‘N’ is the index of the capture. If the expression does not match the string, NULL is returned.
- extract(str) - Parse and extract values from a string using the same algorithm as the logline table (see Extracting Data). The discovered data is returned as a JSON-object that you can do further processing on.
File Paths¶
File path manipulation functions:
- basename(s) - Return the file name part of a path.
- dirname(s) - Return the directory part of a path.
- joinpath(s1,s2,...) - Return the arguments joined together into a path.
Networking¶
Network information functions:
- gethostbyname - Convert a host name into an IP address. The host name could not be resolved, the input value will be returned.
- gethostbyaddr - Convert an IPv4/IPv6 address into a host name. If the reverse lookup fails, the input value will be returned.
JSON¶
JSON functions:
- jget(json, json_ptr) - Get the value from the JSON-encoded string in first argument that is referred to by the JSON-Pointer in the second.
- json_group_object(key0, value0, ... keyN, valueN) - An aggregate function that creates a JSON-encoded object from the key value pairs given as arguments.
- json_group_array(value0, ... valueN) - An aggregate function that creates a JSON-encoded array from the values given as arguments.
Time¶
Time functions:
- timeslice(t, s) - Given a time stamp (t) and a time slice (s), return a timestamp for the bucket of time that the timestamp falls in. For example, with the timestamp “2015-03-01 11:02:00’ and slice ‘5min’ the returned value will be ‘2015-03-01 11:00:00’. This function can be useful when trying to group together log messages into buckets.
Internal State¶
The following functions can be used to access lnav‘s internal state:
- log_top_line() - Return the line number at the top of the log view.
- log_top_datetime() - Return the timestamp of the line at the top of the log view.
Collators¶
- naturalcase - Compare strings “naturally” so that number values in the string are compared based on their numeric value and not their character values. For example, “foo10” would be considered greater than “foo2”.
- naturalnocase - The same as naturalcase, but case-insensitive.
- ipaddress - Compare IPv4/IPv6 addresses. | http://lnav.readthedocs.io/en/latest/sqlext.html | 2017-05-22T23:07:08 | CC-MAIN-2017-22 | 1495463607242.32 | [] | lnav.readthedocs.io |
The Urban Airship Xamarin component provides full bindings to the Urban Airship SDK for use in Xamarin Studio. With this component, Xamarin developers can target both iOS and Android devices with access to the full scope of the Urban Airship SDK, all while remaining in the C#/.NET ecosystem.
We provide two packages:
- Native Bindings: The native bindings contain all of the functionality of the iOS/Android SDKs, but provide no cross platform interface.
- Portable Client Library (PCL): The portable client library exposes a common subset of functionality between the iOS and Android SDKs. The PCL can be used within shared codebases (e.g., a Xamarin Forms app).
Setup
Before you begin, set up Push and any other Urban Airship features for iOS and Android. The Xamarin bindings, like the SDKs that they wrap, are platform-specific, so this would be a good time to familiarize yourself with the SDK APIs and features for each platform you wish to target.
Installation
Installing the Urban Airship components is a quick and easy process, seamlessly integrated into Xamarin Studio. You have two installation options:
- Native bindings: If you are only working with one platform, or if there is no reason for you to have a shared codebase between your platform projects, this may be an appropriate option.
- Portable client library + native bindings: If you are working with multiple platforms and you have a shared codebase (e.g., a Forms app), this may be an appropriate option. You can use the PCL in the shared codebase, while the native bindings can handle platform-specific calls in each platform project.
If you choose to install the PCL, note that it must be installed in each platform-specific project.
Both components can be installed via the NuGet package manager, and the native bindings can be found in the component store as well.
NuGet
Native Bindings:

```
PM> Install-Package UrbanAirship
```

Portable Client Library:

```
PM> Install-Package UrbanAirship.Portable
```
Double-click the Packages folder in the Solution Explorer. With nuget.org selected as the source, search for Urban Airship SDK. Choose either the portable library or native bindings, and select Add Package.
Xamarin Component Store (Native Bindings Only)
In the Solution Explorer, double-click the Components folder under your app’s project. This will bring up a screen with a button inviting you to Get More Components. Click this button to bring up a window for the Xamarin Component Store, and search for Urban Airship SDK. Navigate to the component’s details page, and click the Install button.
Xamarin Studio will automatically download and configure all DLLs and package dependencies. Once the installation is complete, you can move on to the integration steps for your target platform.
iOS Integration
Airship Config
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>detectProvisioningMode</key>
  <true/>
  <key>developmentAppKey</key>
  <string>Your Development App Key</string>
  <key>developmentAppSecret</key>
  <string>Your Development App Secret</string>
  <key>productionAppKey</key>
  <string>Your Production App Key</string>
  <key>productionAppSecret</key>
  <string>Your Production App Secret</string>
</dict>
</plist>
Provide an
AirshipConfig.plist file with the application’s configuration. In order
for this file to be visible to the SDK during TakeOff, be sure that its
BuildAction is set to “Bundle Resource” in your app project.
TakeOff
[Register ("AppDelegate")]
public class AppDelegate : UIApplicationDelegate
{
    public override bool FinishedLaunching (UIApplication application, NSDictionary launchOptions)
    {
        UAirship.TakeOff ();
        // Configure airship here
        return true;
    }
}
Note that if the
TakeOff process fails due to improper or missing configuration, the shared
UAirship instance will be null. The Urban Airship SDK always logs implementation errors at
high visibility.
The Urban Airship SDK requires only a single entry point in the app
delegate, known as takeOff. Inside your application delegate’s
FinishedLaunching method, initialize a shared UAirship instance by
calling
UAirship takeOff
.
This will bootstrap the SDK and look for settings specified in the
AirshipConfig.plist config file.
Enabling User notifications
UAirship.Push.UserPushNotificationsEnabled = true;
Android Integration
Airship Config
[Register("com.example.SampleAutopilot")]
public class SampleAutopilot : Autopilot
{
    public override void OnAirshipReady(UAirship airship)
    {
        // perform any post takeOff airship customizations
    }

    public override AirshipConfigOptions CreateAirshipConfigOptions(Context context)
    {
        /* Optionally set your config at runtime
        AirshipConfigOptions options = new AirshipConfigOptions.Builder()
            .SetInProduction(!BuildConfig.DEBUG)
            .SetDevelopmentAppKey("Your Development App Key")
            .SetDevelopmentAppSecret("Your Development App Secret")
            .SetProductionAppKey("Your Production App Key")
            .SetProductionAppSecret("Your Production App Secret")
            .SetNotificationAccentColor(ContextCompat.getColor(this, R.color.color_accent))
            .SetNotificationIcon(R.drawable.ic_notification)
            .Build();
        return options;
        */
        return base.CreateAirshipConfigOptions(context);
    }
}
Assemblyinfo.cs
[assembly: MetaData("com.urbanairship.autopilot", Value = "sample.SampleAutopilot")]
Create a class that extends Autopilot. Register the autopilot in the
Assemblyinfo.cs
to generate the required metadata in the AndroidManifest.xml file.
Enable user notifications:
UAirship.Shared().PushManager.UserNotificationsEnabled = true;
Native Bindings
Using the native binding libraries is similar to using either the Android or iOS SDKs. Below we provide a simple comparison between setting a named user ID in the native SDK and binding library. In general, the two changes you will notice between the bindings and SDKs are:
- Method calls are generally capitalized.
- Getters/setters are generally converted into properties.
Android
Native Java Call
// Set the named user ID
UAirship.shared().getNamedUser().setId("NamedUserID");
Binding Library:
// Set the named user ID
UAirship.Shared().NamedUser.Id = "NamedUserID";
For more information on the Android SDK, please see the Android platform documentation
iOS
Native Objective-C Call:
// Set the named user ID
[UAirship namedUser].identifier = @"NamedUserId";
Binding Library:
// Set the named user ID
UAirship.NamedUser.Identifier = "NamedUserID";
For more information on the iOS SDK, please see the iOS platform documentation
Portable Class Library
Add Device Tags:
Airship.Instance.EditDeviceTags()
    .AddTag("tag1")
    .AddTag("tag2")
    .RemoveTag("tag3")
    .Apply();
Add Custom Event:
CustomEvent customEvent = new CustomEvent(); customEvent.EventName = "purchased"; customEvent.EventValue = 123.45; customEvent.TransactionId = "xxxxxxxxx"; customEvent.InteractionId = "your.store/us/en_us/pd/pgid-10978234"; customEvent.InteractionType = "url"; customEvent.AddProperty("category", "shoes"); Airship.Instance.AddCustomEvent(customEvent);
Associate Identifier:
Airship.Instance.AssociateIdentifier("customKey", "customIdentifier");
Display Message Center:
Airship.Instance.DisplayMessageCenter();
Edit Named User Tag Groups:
Airship.Instance.EditNamedUserTagGroups() .AddTag("tag1", "group1") .AddTag("tag2", "group1") .RemoveTag("tag3", "group2") .SetTag("pizza", "loyalty") .Apply();
Edit Channel Tag Groups:
Airship.Instance.EditChannelTagGroups() .AddTag("tag1", "group1") .AddTag("tag2", "group1") .RemoveTag("tag3", "group2") .SetTag("pizza", "loyalty") .Apply();
Properties:
// Properties with getters and setters
Airship.Instance.UserNotificationsEnabled = true;
Airship.Instance.LocationEnabled = true;
Airship.Instance.BackgroundLocationEnabled = true;
Airship.Instance.NamedUser = "namedUser123";
// Properties with getters only
List<string> tags = Airship.Instance.Tags;
string channelId = Airship.Instance.ChannelId;
int count = Airship.Instance.MessageCenterCount;
int unreadCount = Airship.Instance.MessageCenterCount;
The portable class library (PCL) provides a unified interface for common SDK calls, allowing users to place common code in a shared location. This is ideal for working with Xamarin Forms – simply place the portable library into both of your platform-specific projects, follow the integration instructions detailed above for each platform, and all of these calls should be available from your shared codebase.
Because the PCL currently has no shared interface for initializing
the app (i.e., calling
TakeOff), you must install the native
bindings into each platform-specific project.
All of these calls are accessible through the
Airship class, found in the
UrbanAirship.Portable namespace. Full documentation for the PCL can be found
here. | https://docs.urbanairship.com/platform/xamarin/ | 2017-05-22T23:08:25 | CC-MAIN-2017-22 | 1495463607242.32 | [] | docs.urbanairship.com |
New in version 2.1.
Deprecated in 2.3. Use nclu instead.
# Example playbook entries using the cl_interface_policy module.
- name: shows types of interface ranges supported
  cl_interface_policy:
    allowed: "lo eth0 swp1-9, swp11, swp12-13s0, swp12-30s1, swp12-30s2, bond0-12"
Common return values are documented in Return Values; the following are the fields unique to this module:
Note
For help in developing on modules, should you be so inclined, please read Community Information & Contributing, Testing Ansible and Developing Modules. | http://docs.ansible.com/ansible/cl_interface_policy_module.html | 2017-05-22T23:12:05 | CC-MAIN-2017-22 | 1495463607242.32 | [] | docs.ansible.com |
Query HTTP RESTful API
Use the CDAP Query HTTP RESTful API to submit SQL-like queries over datasets. Queries are processed asynchronously; to obtain query results, perform these steps:
- first, submit the query;
- then poll for the query's status until it is finished;
- once finished, retrieve the result schema and the results;
- finally, close the query to free the resources that it holds.
Additional details on querying can be found in the Developers' Manual: Data Exploration.
Submitting a Query
To submit a SQL query, post the query string to the
queries URL:
POST /v3/namespaces/<namespace-id>/data/explore/queries
The body of the request must contain a JSON string of the form:
{ "query":"<SQL-query-string>" }
where
SQL-query-string is the actual SQL query.
If you are running a version of Hive that uses reserved keywords, and a column in your query is a Hive reserved keyword, you must enclose the column name in backticks.
For example:
{ "query":"select `date` from stream_events" }
HTTP Responses
If the query execution was successfully initiated, the body of the response will contain a handle that can be used to identify the query in subsequent requests:
{ "handle":"<query-handle>" }
Example
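A sketch using curl (the localhost address, the 11015 router port, the default namespace, and the dataset_mydataset table name are assumptions, not part of this reference):
curl -w"\n" -X POST "http://localhost:11015/v3/namespaces/default/data/explore/queries" -d '{"query":"SELECT * FROM dataset_mydataset LIMIT 5"}'
A successful response returns a handle, for example: { "handle":"57cf1b01-8dba-423a-a8b9-ff1b5d0d29f3" }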
Status of a Query
The status of a query is obtained using a HTTP GET request to the query's URL:
GET /v3/data/explore/queries/<query-handle>/status
Note: this endpoint is not namespaced, as all query-handles are globally unique.
HTTP Responses
If the query exists, the body will contain the status of its execution and whether the query has a results set:
{ "status":"<status-code>", "hasResults":<boolean> }
Status can be one of the following:
INITIALIZED,
RUNNING,
FINISHED,
CANCELED,
CLOSED,
ERROR,
UNKNOWN, and
PENDING.
Example
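Continuing the sketch above with the same assumed host, port, and handle:
curl -w"\n" -X GET "http://localhost:11015/v3/data/explore/queries/57cf1b01-8dba-423a-a8b9-ff1b5d0d29f3/status"
A possible response: { "status":"FINISHED", "hasResults":true }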
Obtaining the Result Schema
If the query's status is
FINISHED and it has results, you can obtain the schema of the results:
GET /v3/data/explore/queries/<query-handle>/schema
Note: this endpoint is not namespaced, as all query-handles are globally unique.
HTTP Responses
The query's result schema is returned in a JSON body as a list of columns, each given by its name, type and position; if the query has no result set, this list is empty:
[ {"name":"<name>", "type":"<type>", "position":<int>}, ... ]
The type of each column is a data type as defined in the Hive language manual.
Example
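With the same assumed handle:
curl -w"\n" -X GET "http://localhost:11015/v3/data/explore/queries/57cf1b01-8dba-423a-a8b9-ff1b5d0d29f3/schema"
which might return something like: [ {"name":"name","type":"string","position":1}, {"name":"count","type":"int","position":2} ]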
Retrieving Query Results
Query results can be retrieved in batches after the query is finished, optionally specifying the batch size in the body of the request:
POST /v3/data/explore/queries/<query-handle>/next
Note: this endpoint is not namespaced, as all query-handles are globally unique.
The body of the request can contain a JSON string specifying the batch size:
{ "size":<int> }
If the batch size is not specified, the default is 20.
HTTP Responses
The results are returned in a JSON body as a list of columns, each given as a structure containing a list of column values:
[ { "columns": [ <value-1>, <value-2>, ..., ] }, ... ]
The value at each position has the type that was returned in the result schema for that position.
For example, if the returned type was
INT, then the value will be an integer literal,
whereas for
STRING or
VARCHAR the value will be a string literal.
Repeat the query to retrieve subsequent results. If all results of the query have already been retrieved, then the returned list is empty.
Example
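With the same assumed handle, requesting a batch of ten rows:
curl -w"\n" -X POST "http://localhost:11015/v3/data/explore/queries/57cf1b01-8dba-423a-a8b9-ff1b5d0d29f3/next" -d '{"size":10}'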
Closing a Query
The query can be closed by issuing an HTTP DELETE against its URL:
DELETE /v3/data/explore/queries/<query-handle>
This frees all resources that are held by this query.
Note: this endpoint is not namespaced, as all query-handles are globally unique.
HTTP Responses
Example
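With the same assumed handle:
curl -w"\n" -X DELETE "http://localhost:11015/v3/data/explore/queries/57cf1b01-8dba-423a-a8b9-ff1b5d0d29f3"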
List of Queries
To return a list of queries, use:
GET /v3/namespaces/<namespace-id>/data/explore/queries?limit=<limit>&cursor=<cursor>&offset=<offset>
The results are returned as a JSON array, with each element containing information about a query:
[ { "timestamp":1407192465183, "statement":"SHOW TABLES", "status":"FINISHED", "query_handle":"319d9438-903f-49b8-9fff-ac71cf5d173d", "has_results":true, "is_active":false }, ... ]
Example
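Against the assumed local instance and default namespace, listing the ten most recent queries:
curl -w"\n" -X GET "http://localhost:11015/v3/namespaces/default/data/explore/queries?limit=10"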
Count of Active Queries
To return the count of active queries, use:
GET /v3/namespaces/<namespace-id>/data/explore/queries/count
The results are returned in the body as a JSON string:
{ "count":6 }
Download Query Results
To download the results of a query, use:
POST /v3/data/explore/queries/<query-handle>/download
The results of the query are returned in CSV format.
Note: this endpoint is not namespaced, as all query-handles are globally unique.
The query results can be downloaded only once. The RESTful API will return a Status Code
409 Conflict
if results for the
query-handle are attempted to be downloaded again.
HTTP Responses
Enabling and Disabling Querying
Querying (or exploring) of datasets and streams can be enabled and disabled using these endpoints.
Exploration of data in CDAP is governed by a combination of enabling the CDAP Explore
Service and then creating datasets and streams that are explorable. The CDAP Explore
Service is enabled by a setting in the CDAP configuration file (
explore.enabled in
cdap-site.xml file).
Datasets and streams—that were created while the Explore Service was not enabled—can, once the service is enabled and CDAP restarted, be enabled for exploration by using these endpoints.
You can also use these endpoints to disable exploration of a specific dataset or stream. The dataset or stream will still be accessible programmatically; it just won't respond to queries or be available for exploration using the CDAP UI.
For datasets:
POST /v3/namespaces/<namespace-id>/data/explore/datasets/<dataset-name>/enable POST /v3/namespaces/<namespace-id>/data/explore/datasets/<dataset-name>/disable
For streams:
POST /v3/namespaces/<namespace-id>/data/explore/streams/<stream-name>/tables/<table-name>/enable POST /v3/namespaces/<namespace-id>/data/explore/streams/<stream-name>/tables/<table-name>/disable
Each of these endpoints returns a query handle that can be used to submit requests tracking the status of the query.
HTTP Responses
If the request was successful, the body will contain a query handle that can be used to identify the query in subsequent requests, such as a status request:
{ "handle":"<query-handle>" }
Example | http://docs.cask.co/cdap/4.1.1/en/reference-manual/http-restful-api/query.html | 2017-05-22T23:25:10 | CC-MAIN-2017-22 | 1495463607242.32 | [] | docs.cask.co |
Allows apps to provide real-time updates to files through the Cached File Updater contract.
Manages files so that they can be updated in real-time by an app that participates in the Cached File Updater contract.
Used to interact with the file picker if your app provides file updates through the Cached File Updater contract.
Provides information about a requested file update so that the app can complete the request.
Use to complete an update asynchronously.
Provides information about a FileUpdateRequested event.
Describes when Windows will request an update to a file.
Indicates whether updates should be applied to the locally cached copy or the remote version of the file.
Describes the status of a file update request.
Indicates when Windows will request a file update if another app reads the locally cached version of the file.
Indicates the status of the file picker UI.
Indicates whether other apps can write to the locally cached version of the file and when Windows will request an update if another app writes to that local file. | https://docs.microsoft.com/en-us/uwp/api/Windows.Storage.Provider | 2017-05-23T00:07:32 | CC-MAIN-2017-22 | 1495463607242.32 | [] | docs.microsoft.com |
Notes about Databases
Please keep in mind that each database contains its own system collections, which need to be set up when a database is created. This will make the creation of a database take a while.
Replication is configured on a per-database level, meaning that any replication logging or applying for a new database must be configured explicitly after a new database has been created.. | https://docs.arangodb.com/3.1/Manual/DataModeling/Databases/Notes.html | 2017-05-22T23:19:26 | CC-MAIN-2017-22 | 1495463607242.32 | [] | docs.arangodb.com |
Creating a database
To store your time series data in the InfluxDb service, you’ll need to create a database.
Click on the Databases tab to view the databases list, then click on the + NEW button. A window will appear as shown below.
Once you have created a database, the names of all of your databases will be listed in this tab in the form
{username}_{database} where
username is your Sense Tecnic service username and
database is the name of the database you created. You will only need to fill in the database name as suggested in the New Database window.
To get started, let's create a database named "demo_database". Once you have clicked Confirm and Create, you should see your new database on the list.
You can now create database users to connect to the database as described next. | http://docs.sensetecnic.com/influxdb/create-database/ | 2017-10-17T07:33:47 | CC-MAIN-2017-43 | 1508187820930.11 | ['/assets/images/influxdb_view_database.png', '/assets/images/influxdb_new_database.png', '/assets/images/influxdb_new_database_added.png'] | docs.sensetecnic.com
Contains one row for each data or log file of a database. This table is stored in the msdb database.
Remarks.
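For illustration, a query along these lines lists the files recorded for a given database's backups (a sketch; it assumes the standard msdb schema, a join to backupset on backup_set_id, and a hypothetical database name):
SELECT bs.database_name, bs.backup_start_date, bf.logical_name, bf.physical_name, bf.file_type
FROM msdb.dbo.backupfile AS bf
JOIN msdb.dbo.backupset AS bs ON bs.backup_set_id = bf.backup_set_id
WHERE bs.database_name = N'AdventureWorks';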
See Also
Backup and Restore Tables (Transact-SQL)
backupfilegroup (Transact-SQL)
backupmediafamily (Transact-SQL)
backupmediaset (Transact-SQL)
backupset (Transact-SQL)
System Tables (Transact-SQL) | https://docs.microsoft.com/en-us/sql/relational-databases/system-tables/backupfile-transact-sql | 2017-10-17T08:59:26 | CC-MAIN-2017-43 | 1508187820930.11 | ['../../includes/media/yes.png', '../../includes/media/no.png', '../../includes/media/no.png', '../../includes/media/no.png'] | docs.microsoft.com
This add-on enables you to export orders in IIF (Intuit Interchange Format) files that can then be imported into QuickBooks. Here you can adjust some parameters of exported IIF files.
For instructions on how to handle IIF files and their contents, please refer to the official QuickBooks documentation and support resources.
Note
When you enable this add-on, the Export to Quickbooks option appears under the gear button in the Orders → View orders section. | http://docs.cs-cart.com/4.6.x/user_guide/addons/quickbooks/index.html | 2017-10-17T07:48:29 | CC-MAIN-2017-43 | 1508187820930.11 | [] | docs.cs-cart.com |
Note
This feature is available only in Searchanise Pro.
Searchanise’s analytics shows the most searched products and categories, the most and the least productive search suggestions, searches with 0 results, and more. Use this information to tune your search for the best profitability.
Here you can read more about Analytics. | http://docs.cs-cart.com/4.6.x/user_guide/addons/searchanise/analytics.html | 2017-10-17T07:46:38 | CC-MAIN-2017-43 | 1508187820930.11 | [] | docs.cs-cart.com |
Tutorial Processes
Extracting annotations into a data set
The process loads the Iris data set and extracts its annotations into a new example set. The only annotation will be the Source annotation which is created by the Retrieve operator. | https://docs.rapidminer.com/studio/operators/utility/annotations/annotations_to_data.html | 2017-10-17T07:53:13 | CC-MAIN-2017-43 | 1508187820930.11 | [] | docs.rapidminer.com |
Note: Most user interface tasks can be performed in Edge Classic or the New Edge experience. For an overview, getting started topics, and release notes specific to the New Edge experience, see the docs.
Comparing Quota, Spike Arrest, and Concurrent Rate Limit Policies
Quota, Spike Arrest, and Concurrent Rate Limit policies — wondering which one to use to best meet your rate limiting needs? See the comparison chart below.
* The current HTTP status code for exceeding the rate limit is 500, but it will soon be changed to 429. | http://docs.apigee.com/api-services/content/comparing-quota-spike-arrest-and-concurrent-rate-limit-policies | 2017-10-17T07:38:53 | CC-MAIN-2017-43 | 1508187820930.11 | [] | docs.apigee.com
You are viewing documentation for version 3 of the AWS SDK for Ruby. Version 2 documentation can be found here.
Class: Aws::LambdaPreview::Types::GetEventSourceRequest
- Defined in:
- gems/aws-sdk-lambdapreview/lib/aws-sdk-lambdapreview/types.rb
Overview
Note:
When making an API call, you may pass GetEventSourceRequest data as a hash:
{
  uuid: "String", # required
}
Instance Attribute Summary
- #uuid ⇒ String
The AWS Lambda assigned ID of the event source mapping.
Instance Attribute Details
#uuid ⇒ String
The AWS Lambda assigned ID of the event source mapping. | http://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/LambdaPreview/Types/GetEventSourceRequest.html | 2017-10-17T08:02:23 | CC-MAIN-2017-43 | 1508187820930.11 | [] | docs.aws.amazon.com |
Use the reboot resource to reboot a node, a necessary step with some installations on certain platforms. This resource is supported for use on the Microsoft Windows, Mac OS X, and Linux platforms.
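A syntax sketch (the layout follows the usual Chef resource syntax and is illustrative; the property names and Ruby types are the ones listed in the Properties section below):
reboot 'name' do
  delay_mins Fixnum
  reason     String
  action     Symbol
end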
where:
- reboot is the resource
- name is the name of the resource block
- :action identifies the steps the chef-client will take to bring the node into the desired state
- delay_mins and reason are properties of this resource, with the Ruby type shown. See "Properties" section below for more information about all of the properties that may be used with this resource.
This resource has the following actions:
:cancel
:nothing
:reboot_now
:request_reboot
This resource has the following properties:
delay_mins
Ruby Type: Fixnum
The amount of time (in minutes) to delay a reboot request.
reason
Ruby Type: String
A string that describes the reboot action.
retries
Ruby Type: Integer
The number of times to catch exceptions and retry the resource. Default value: 0.
retry_delay
Ruby Type: Integer
The retry delay (in seconds). Default value: 2.
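Examples
Reboot a node immediately. A minimal sketch assembled from the actions and properties documented above; the reason string is illustrative:
reboot 'now' do
  reason 'Cannot continue Chef run without a reboot.'
  action :reboot_now
end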
Rename computer, join domain, reboot
The following example shows how to rename a computer, join a domain, and then reboot the computer:
reboot 'Restart Computer' do
  action :nothing
end

powershell_script 'Rename and Join Domain' do
  code <<-EOH
    ...your rename and domain join logic here...
  EOH
  not_if <<-EOH
    $ComputerSystem = gwmi win32_computersystem
    ($ComputerSystem.Name -like '#{node['some_attribute_that_has_the_new_name']}') -and
      $ComputerSystem.partofdomain
  EOH
  notifies :reboot_now, 'reboot[Restart Computer]', :immediately
end
where:
- the not_if guard prevents the Windows PowerShell script from running when the settings in the not_if guard match the desired state
- the notifies statement tells the reboot resource block to run if the powershell_script block was executed during the chef-client run. | http://docs.w3cub.com/chef~12/12-13/resource_reboot/ | 2017-10-17T07:30:11 | CC-MAIN-2017-43 | 1508187820930.11 | [] | docs.w3cub.com
Developer's Overview
Contributing
Contribute to source code, documentation, examples and report issues:
Creating a Release
- Release from the master branch.
- Update the library version in setup.py and in doc/conf.py using semantic versioning.
- Run all tests and examples against available hardware.
- Update CONTRIBUTORS.txt with any new contributors.
- Sanity check that the documentation has stayed in line with the code. For large changes, update doc/history.rst.
- Create a temporary virtual environment. Run python setup.py install and python setup.py test.
- Create and upload the distribution:
python setup.py sdist bdist_wheel upload --sign
- In a new virtual env check that the package can be installed with pip:
pip install python-can
- Create a new tag in the repository.
- Check the release on PyPi and github. | http://python-can.readthedocs.io/en/latest/development.html | 2017-10-17T07:43:19 | CC-MAIN-2017-43 | 1508187820930.11 | [] | python-can.readthedocs.io |
The Amazon Resource Name (ARN) of the ACM Certificate. The ARN must have the following form: arn:aws:acm:region:123456789012:certificate/12345678-1234-1234-1234-123456789012 For more information about ARNs, see Amazon Resource Names (ARNs) and AWS Service Namespaces.
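For orientation, a minimal synchronous call from the .NET SDK might look like the sketch below (the client constructor with default credentials and the response's Certificate property reflect the SDK's usual shape and are assumptions, not copied from this page; the ARN is the placeholder form shown above):
// Describe an ACM certificate by its ARN and print its primary domain name.
var client = new AmazonCertificateManagerClient();
var response = client.DescribeCertificate("arn:aws:acm:region:123456789012:certificate/12345678-1234-1234-1234-123456789012");
Console.WriteLine(response.Certificate.DomainName);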
.NET Framework:
Supported in: 4.5, 4.0, 3.5
Portable Class Library:
Supported in: Windows Store Apps
Supported in: Windows Phone 8.1
Supported in: Xamarin Android
Supported in: Xamarin iOS (Unified)
Supported in: Xamarin.Forms | http://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/CertificateManager/MCertificateManagerICertificateManagerDescribeCertificateString.html | 2017-10-17T07:34:57 | CC-MAIN-2017-43 | 1508187820930.11 | [] | docs.aws.amazon.com |
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region.
Container for the parameters to the DescribeWorkflowExecution operation.
Returns information about the specified workflow execution including its type and
some statistics.
This operation is eventually consistent. The results are best effort and may not exactly
reflect recent updates and changes. DescribeWorkflowExecution | http://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/SimpleWorkflow/TSimpleWorkflowDescribeWorkflowExecutionRequest.html | 2017-10-17T07:32:50 | CC-MAIN-2017-43 | 1508187820930.11 | [] | docs.aws.amazon.com |