Recipient Notifications
The ability to send a notification to the recipient when a transport rule triggers. This is a request that we have received from a lot of customers, and I’m happy to let you know that it is now rolling out in the form of an action in EOP transport rules. Sweet!
Recipient Notifications
Let’s say you have a transport rule that quarantines all inbound messages with an executable attachment. In the past there was no way to automatically notify your users that a message destined to them had been redirected to the quarantine because of your transport rule. Now, with Recipient Notifications, your transport rules can send a notification to the recipient when they trigger.
Configuration
When creating a transport rule, you will notice a new action called “Notify the recipient with a message…”
As an example, if we want to quarantine messages destined to our users that contain executable content, and want to notify them when this happens, our transport rule could look like this.
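As a rough sketch, a comparable rule could also be created with Exchange Online PowerShell; the parameter names below come from the standard New-TransportRule cmdlet, and the rule name and conditions are assumptions based on the description rather than the exact settings shown in the screenshot:

New-TransportRule -Name "Quarantine inbound executable content" `
    -FromScope NotInOrganization `
    -RecipientDomainIs contoso.com `
    -AttachmentHasExecutableContent $true `
    -Quarantine $true `
    -GenerateNotification "A company policy blocked an inbound message to you - Executable content not permitted.<br><br>Date: %%MessageDate%% UTC<br>From: %%From%%<br>To: %%To%%<br>CC: %%Cc%%<br>Subject: %%Subject%%"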
In this rule contoso.com is our own domain. Here’s what the notification text looks like in the above rule.
A company policy blocked an inbound message to you - Executable content not permitted.<br><br>
Date: %%MessageDate%% UTC<br>
From: %%From%%<br>
To: %%To%%<br>
CC: %%Cc%%<br>
Subject: %%Subject%%
And here is what the notification looks like that the recipient will receive when this rule triggers.
You’ll notice that I was able to insert information from the original message into the notification using variables. Let’s look next at what customization is possible.
Notification customization
Variables can be added into the notification text to include information from the original message. The following variables are supported for recipient notifications.
Summary
If you don’t see the recipient notification action yet in your transport rules don’t panic. This feature only just lit up in my test tenant this past week and will still be rolling out. Enjoy this new capability!
Resources
TechNet documentation has not been updated yet, but once it has I will post links here.
Insert special characters

This is a new feature in EDITOR version 4.1. To input special characters or symbols, click the button in the Formatting group on the General tab. A menu will appear with a list of symbols and characters. On the right, you can choose which category to select your symbol from. Once you find the symbol you want to insert, click it and it will appear in the editor. Alternatively, if you know the Unicode code for a character, type it in the blank to get it directly. Sometimes more than one symbol will appear. This is done to allow entering the codepoint value in either hexadecimal or decimal. In the above example, the code could either mean U+0198 as a hexadecimal Unicode number, or Æ as a decimal Unicode number, so both are shown.
struct.node.vel.local
Syntax
Get the velocity of a structure node expressed in a local system.
m - matrix of velocity with all degrees-of-freedom if the argument i is not assigned, or
velocity at the ith degree-of-freedom if the argument i is assigned
sn - pointer to the structure node
i - degree-of-freedom, i ∈ {1, 2, … , 6}. The default is 1.
struct.node.vel.global
Using the History Page to Monitor Queries¶
The History page allows you to view and drill into the details of all queries executed in the last 14 days. The page displays a historical listing of queries, including queries executed from SnowSQL or other SQL clients. The default information displayed for each query includes:
Current status of queries: waiting in queue, running, succeeded, failed.
SQL text of your query.
Query ID.
Information about the warehouse used to execute the query.
Query start and end time, as well as duration.
Information about the query, including number of bytes scanned and number of rows returned.
In this Topic:
Overview of Features¶
You can perform the following tasks in the History page:
Use the auto-refresh checkbox in the upper right to enable/disable auto-refresh for the session. If selected, the page is refreshed every 10 seconds. You can also click the Refresh icon to refresh the display at any time.
Use the Show/Hide Filters toggle to open/close a panel where you can specify one or more filters that control the queries displayed on the page. Filters you specify are active for the current session.
Use the Include client-generated statements checkbox to show or hide SQL statements run by web interface sessions outside of SQL worksheets. For example, whenever a user navigates to the Warehouses page, Snowflake executes a SHOW WAREHOUSES statement in the background. Clear the Include client-generated statements checkbox to hide this “noise” in the list of displayed queries.
Use the Include queries executed by user tasks checkbox to show or hide SQL statements executed or stored procedures called by user tasks.
Scroll through the list of displayed queries. The list includes (up to) 100 of the first queries that match your filters, or the latest 100 queries (when no filters are applied). At the bottom of the list, if more queries are available, you can continue searching, which adds (up to) 100 of the next matching queries to the list.
Click any column header to sort the page by the column or add/remove columns in the display.
Click the text of a query (or select the query and click View SQL) to view the full SQL for the query.
Select a query that has not yet completed and click Abort to abort the query.
Click the ID for a query to view the details for the query, including the result of the query and the Query Profile.
Note
The History page displays queries executed in the last 14 days, starting with the most recent ones. You can use the End Time filter to display queries based on a specified date; however, if you specify a date earlier than the last 14 days, no results are returned.
Viewing Query Details and Results¶
Snowflake persists the result of a query for a period of time (currently 24 hours), after which the result is purged. This limit is not adjustable.
To view the details and result for a particular query, click the Query ID in the History page. The Query Detail page appears (see below), where you can view query execution details, as well as the query result (if still available).
You can also use the Export Result button to export the result of the query (if still available) to a file.
Note
You can view results only for queries you have executed. If you have privileges to view queries executed by another user, the Query Detail page displays the details for the query, but, for data privacy reasons, the page does not display the actual query result.
Exporting Query Results¶
On any page in the interface where you can view the result of a query (e.g. Worksheets, Query Detail), if the query result is still available, you can export the result to a file.
When you click the Export Result button for a query, you are prompted to specify the file name and format. Snowflake supports the following file formats for query export:
Comma-separated values (CSV)
Tab-separated values (TSV)
Note
You can export results only for queries for which you can view the results (i.e. queries you’ve executed). If you didn’t execute a query or the query result is no longer available, the Export Result button is not displayed for the query.
The web interface only supports exporting results up to 100 MB in size. If a query result exceeds this limit, you are prompted whether to proceed with the export.
The export prompts may differ depending on your browser. For example, in Safari, you are prompted only for an export format (CSV or TSV). After the export completes, you are prompted to download the exported result to a new window, in which you can use the Save Page As… browser option to save the result to a file.
Viewing Query Profile¶
In addition to query details and results, Snowflake provides the Query Profile for analyzing query statistics and details, including the individual execution components that comprise the query. For more information, see Analyzing Queries Using Query Profile.
Smart Tag
The RadInput Smart Tag allows easy access to frequently needed tasks. You can display the Smart Tag by right clicking on a RadInput control in the design window, and choosing Show Smart Tag from its context menu.
The RadNumericTextBox Smart Tag contains the same Ajax Resources, Skin, and Learning Center sections as the other RadInput controls. In addition, the RadNumericTextBox Smart Tag lets you do the following :
RadNumericTextBox Tasks
NumericType lets you set the Type property by selecting a type from the drop-down list. The Type property governs the basic formatting of numeric values, according to the settings of the current Culture. Numeric Type can be set to "Number", "Currency", or "Percent".
Value lets you set the Value property to a numeric value.
Minimum Value lets you set the MinValue property to a numeric value.
Maximum Value lets you set the MaxValue property to a numeric value.
Using Security Groups with Virtual Machines (Instances)¶
- date
2015-11-30
Security Groups Overview¶

Each virtual network is associated with a default security group. The default security group allows both ingress and egress traffic. Security rules can be added to the default security group to change the traffic behavior.
Creating Security Groups and Adding Rules¶
Select the default-security-group and click Edit Rules in the Actions column.
The Edit Security Group Rules window is displayed. Any rules already associated with the security group are listed.
Click Add Rule to add a new rule.
Table 1: Add Rule Fields
Click Create Security Group to create additional security groups.
The Create Security Group window is displayed.
Each new security group has a unique 32-bit security group ID and an ACL is associated with the configured rules.
When an instance is launched, there is an opportunity to associate a security group.
In the Security Groups list, select the security group name to associate with the instance.
You can verify that security groups are attached by viewing the SgListReq and IntfReq associated with the agent.xml.
amazee.io Documentation
NOTICE
This is the legacy documentation of the amazee.io infrastructure. If you are using Lagoon please head over to the Lagoon Documentation
Welcome to the amazee.io documentation. This site will give you insights into amazee.io and helps you getting your Drupal site online on our infrastructure as fast and easy as possible.
We suggest the Get your Drupal site running on amazee.io as a nice reading to start.
Stuck 😩?
Join us in our Slack channel and we will help you right away: slack.amazee.io
More about amazee.io 🎉
Learn more about amazee.io on our website
Find our stories at stories.amazee.io
Changelog 📃
We are releasing new features every week to the amazee.io universe, find them at: changelog.amazee.io
Welcome! What can you expect from Power Automate? Here are a few examples of what you can do:
- Automate business processes
- Send automatic reminders for past due tasks
- Move business data between systems on a schedule
- Connect to almost 300 data sources or any publicly available API
- You can even automate tasks on your local computer like computing data in Excel.
Just think about time saved once you automate repetitive manual tasks simply by recording mouse clicks, keystrokes and copy paste steps from your desktop! Power Automate is all about automation.
Who is Power Automate for?
What skills do you need to have? Anyone from a basic business user to an IT professional can create automated processes using Power Automate's no-code/low-code platform.
What industries can benefit from Power Automate? Check out how some companies implemented Microsoft Power Platform solutions using Power Automate in:
Find examples from your industry
The first step in creating an automation is to sign up, or, if you already have an account with Power Automate, sign in.
What are the different types of flows?
Visit the flow types article to learn more about the different types of flows that you can create to automate your tasks.
On the start page for Power Automate, you can explore a diverse set of templates and learn about the key features for Power Automate. You can get a quick sense of what's possible and how Power Automate could help your business and your life. You can also connect to any of the 380 data sources that Power Automate supports to create your own flows from scratch.
When you create a cloud flow from scratch, you control the entire workflow. Here are a few ideas to get you started:
- Flows with many steps.
- Run tasks on a schedule.
- Create an approval flow.
- Watch a cloud flow in action.
- Publish a template.
- Create flows from a Microsoft Teams template.
Peek at the code
You don't need to be a developer to create flows; however, Power Automate does provide a Peek code feature that allows anyone to take a closer look at the code that's generated for all actions and triggers in a cloud flow. Peeking at the code could give you a clearer understanding of the data that's being used by triggers and actions. Follow these steps to peek at the code that's generated for your flows from within the Power Automate.
Find your flows easily.
Use the mobile app
Download the Power Automate mobile app for Android, iOS, or Windows Phone. With this app, you can monitor flow activity, manage your flows and create flows from templates.
Get help planning your Power Automate projects
If you're ready to start your Power Automate project, visit the guidance and planning article to get up and running quickly.
We're here to help
We're excited to see what you do with Power Automate, and we want to ensure you have a great experience. Be sure to check out our guided learning tutorials and join our community to ask questions and share your ideas. Contact support if you run into any issues.
Remove the self-provisioners cluster role from the group.

$ oadm policy remove-cluster-role-from-group self-provisioner system:authenticated

To prevent automatic updates to the role, edit the self-provisioner cluster role. Automatic updates reset the cluster roles to the default state.
To update the role from the command line:
Run the following command:
$ oc edit clusterrole self-provisioner
In the displayed role, set the
openshift.io/reconcile-protect parameter
value to
true, as shown in the following example:
apiVersion: authorization.openshift.io/v1
kind: ClusterRole
metadata:
  annotations:
    authorization.openshift.io/system-only: "true"
    openshift.io/description: A user that can request project.
    openshift.io/reconcile-protect: "true"
...
To update the role by using automation, use the following command:
$ oc patch clusterrole self-provisioner -p '{ "metadata": { "annotations": { "openshift.io/reconcile-protect": "true" } } }':
# systemctl restart atomic-openshift-master

If you create a project with an empty node selector (for example, oadm new-project --node-selector=""), the project will not have an administrator.

# systemctl restart atomic-openshift-master
Internal and external Impala tables
When creating a new Kudu table using Impala, you can create the table as an internal table or an external table.
- Internal
- An internal table (created by
CREATE TABLE) is managed by Impala, and can be dropped by Impala. When you create a new table using Impala, it is generally a internal table. When such a table is created in Impala, the corresponding Kudu table will be named
impala::database_name.table_name. The prefix is always
impala::, and the database name and table name follow, separated by a dot.
- External
- An external table (created by
CREATE EXTERNAL TABLE) is not managed by Impala, and dropping such a table does not drop the table from its source location (here, Kudu). Instead, it only removes the mapping between Impala and Kudu. This is the mode used in the syntax provided by Kudu for mapping an existing table to Impala, as the example below illustrates.
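To make the distinction concrete, here is a sketch of both forms in Impala SQL; the table and column names are illustrative only.

-- Internal (managed) table: Impala creates and owns the underlying Kudu table.
CREATE TABLE my_first_table (
  id BIGINT,
  name STRING,
  PRIMARY KEY (id)
)
PARTITION BY HASH PARTITIONS 16
STORED AS KUDU;

-- External table: map a Kudu table that already exists into Impala.
-- Dropping it only removes the mapping, not the Kudu table itself.
CREATE EXTERNAL TABLE my_mapping_table
STORED AS KUDU
TBLPROPERTIES ('kudu.table_name' = 'my_kudu_table');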
Getting Started, User Management, Release Notes
Getting Started
Basic concepts and instructions to get up and running with Hyperview.
User Management
Updating account settings and administering users.
Release Notes
See new features, device support and bug fixes added with each release.
API Documentation
Explore the Hyperview API.
Installation Guide
Detailed instructions on pre-installation, installation and configuration.
Upgrade Guide
Everything you need to know to make the upgrade process fast and easy.
User Guide
Comprehensive guide on all of RAMP’s features and functionality.
Get quick answers to the most frequently asked questions.
Videos
Concise video demonstrations of features and functionality.
Integrations
Integrations with 3rd party applications.
Using the Google Maps API in your application
The Google Maps Android API is part of Google Play Services. A Xamarin.Android app must meet some mandatory prerequisites before it is possible to use the Google Maps Android API.
Google Maps API prerequisites
Several steps need to be taken before you can use the Maps API, including:
- Obtain a Maps API key
- Install the Google Play Services SDK
- Install the Xamarin.GooglePlayServices.Maps package from NuGet
- Specify the required permissions
- Optionally, Create an emulator with the Google APIs
Obtain a Google Maps API Key
The first step is to get a Google Maps API key (note that you cannot reuse an API key from the legacy Google Maps v1 API). For information about how to obtain and use the API key with Xamarin.Android, see Obtaining A Google Maps API Key.
Install the Google Play Services SDK
Google Play Services is a technology from Google that allows Android applications to take advantage of various Google features such as Google+, In-App Billing, and Maps. These features are accessible on Android devices as background services, which are contained in the Google Play Services APK.
Android applications interact with Google Play Services through the Google Play Services client library. This library contains the interfaces and classes for the individual services such as Maps. The following diagram shows the relationship between an Android application and Google Play Services:
The Android Maps API is provided as a part of Google Play Services. Before a Xamarin.Android application can use the Maps API, the Google Play Services SDK must be installed using the Android SDK Manager; otherwise, the Maps API will not work on the device.

Install the Xamarin.GooglePlayServices.Maps package from NuGet

In your project, click Browse and enter Xamarin Google Play Services Maps in the search field. Select Xamarin.GooglePlayServices.Maps and click Install. (If this package had been installed previously, click Update.):
Notice that the following dependency packages are also installed:
- Xamarin.GooglePlayServices.Base
- Xamarin.GooglePlayServices.Basement
- Xamarin.GooglePlayServices.Tasks
Specify the required permissions
Apps must identify the hardware and permission requirements in order to use the Google Maps API. Some permissions are automatically granted by the Google Play Services SDK, and it is not necessary for a developer to explicitly add them to AndroidManfest.XML:
Access to the Network State – The Maps API must be able to check if it can download the map tiles.
Internet Access – Internet access is necessary to download the map tiles and communicate with the Google Play Servers for API access.
The following permissions and features must be specified in the AndroidManifest.XML for the Google Maps Android API:
OpenGL ES v2 – The application must declare the requirement for OpenGL ES v2.
Access to the Google Web-based Services – The application needs permissions to access Google's web services that back the Android Maps API.
Permissions for Google Play Services Notifications – The application must be granted permission to receive remote notifications from Google Play Services.
Access to Location Providers – These are optional permissions. They will allow the
GoogleMap class to display the location of the device on the map.
In addition, Android 9 has removed the Apache HTTP client library from the bootclasspath, and so it isn't available to applications that target API 28 or higher. The following line must be added to the
application node of your AndroidManifest.xml file to continue using the Apache HTTP client in applications that target API 28 or higher:
<application ...>
    ...
    <uses-library android:name="org.apache.http.legacy" android:required="false" />
</application>
Note
Very old versions of the Google Play SDK required an app to request the
WRITE_EXTERNAL_STORAGE permission. This requirement is no longer necessary with the recent Xamarin bindings for Google Play Services.

The following snippet is an example of an AndroidManifest.XML that declares these requirements (the package name and SDK versions are representative):

<manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.example.mapsapp">
    <uses-sdk android:minSdkVersion="23" android:targetSdkVersion="28" />
    <!-- Google Maps for Android v2 requires OpenGL ES v2 -->
    <uses-feature android:glEsVersion="0x00020000" android:required="true" />
    <application android:label="@string/app_name">
        <!-- Necessary for apps that target Android 9.0 or higher -->
        <uses-library android:name="org.apache.http.legacy" android:required="false" />
    </application>
</manifest>
In addition to requesting the permissions in AndroidManifest.XML, an app must also perform runtime permission checks for the
ACCESS_COARSE_LOCATION and the
ACCESS_FINE_LOCATION permissions. See the Xamarin.Android Permissions guide for more information about performing run-time permission checks.
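As a minimal sketch of such a run-time check (the helpers come from the Android support libraries referenced by that guide, and the request code is arbitrary):

if (ContextCompat.CheckSelfPermission(this, Android.Manifest.Permission.AccessFineLocation) != Permission.Granted)
{
    ActivityCompat.RequestPermissions(
        this,
        new string[] { Android.Manifest.Permission.AccessCoarseLocation, Android.Manifest.Permission.AccessFineLocation },
        99); // arbitrary request code, checked later in OnRequestPermissionsResult
}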
Create an Emulator with Google APIs
In the event that a physical Android device with Google Play services is not installed, it is possible to create an emulator image for development. For more information see the Device Manager.
The GoogleMap Class
Once the prerequisites are satisfied, it is time to start developing the application and use the Android Maps API. The GoogleMap class is the main API that a Xamarin.Android application will use to display and interact with a Google Map for Android. It is not possible to create a GoogleMap object directly; it must be obtained from a MapFragment or MapView object:

MapFragment - This fragment acts as a host for the map. The MapFragment requires Android API level 12 or higher. Older versions of Android can use the SupportMapFragment. This guide will focus on using the MapFragment class.
MapView - The MapView is a specialized View subclass, which can act as a host for a
GoogleMap object. Users of this class must forward all of the Activity lifecycle methods to the MapView class.
Each of these containers exposes a Map property that returns an instance of GoogleMap. Preference should be given to the MapFragment class as it is a simpler API that reduces the amount of boilerplate code that a developer must manually implement.
Adding a MapFragment to an Activity
The following screenshot is an example of a simple
MapFragment:
Similar to other Fragment classes, there are two ways to add a
MapFragment to an Activity:
Declaratively - The MapFragment can be added via the XML layout file for the Activity. The following XML snippet shows an example of how to use the fragment element:

<?xml version="1.0" encoding="utf-8"?>
<fragment xmlns:android="http://schemas.android.com/apk/res/android"
          android:id="@+id/map"
          android:layout_width="match_parent"
          android:layout_height="match_parent"
          class="com.google.android.gms.maps.MapFragment" />
Programmatically - The MapFragment can be programmatically instantiated using the MapFragment.NewInstance method and then added to an Activity. This snippet shows the simplest way to instantiate a MapFragment object and add it to an Activity:
var mapFrag = MapFragment.NewInstance(); activity.FragmentManager.BeginTransaction() .Add(Resource.Id.map_container, mapFrag, "map_fragment") .Commit();
It is possible to configure the MapFragment object by passing a GoogleMapOptions object to NewInstance. This is discussed in the section GoogleMap properties that appears later on in this guide.
The
MapFragment.GetMapAsync method is used to initialize the
GoogleMap that is hosted by the fragment and obtain a reference to the map object that is hosted by the
MapFragment. This method takes an object that implements the
IOnMapReadyCallback interface.
This interface has a single method, IOnMapReadyCallback.OnMapReady(GoogleMap map), that will be invoked when it is possible for the app to interact with the
GoogleMap object. The following code snippet shows how an Android Activity can initialize a
MapFragment and implement the
IOnMapReadyCallback interface:
public class MapWithMarkersActivity : AppCompatActivity, IOnMapReadyCallback { protected override void OnCreate(Bundle bundle) { base.OnCreate(bundle); SetContentView(Resource.Layout.MapLayout); var mapFragment = (MapFragment) FragmentManager.FindFragmentById(Resource.Id.map); mapFragment.GetMapAsync(this); // remainder of code omitted } public void OnMapReady(GoogleMap map) { // Do something with the map, i.e. add markers, move to a specific location, etc. } }
Map types
There are five different types of maps available from the Google Maps API:
Normal - This is the default map type. It shows roads and important natural features along with some artificial points of interest (such as buildings and bridges). The other available types are Satellite, Hybrid, Terrain, and None. The following screenshots show three of the different types of maps, from left-to-right (normal, hybrid, terrain):
The
GoogleMap.MapType property is used to set or change which type of
map is displayed. The following code snippet shows how to display a
hybrid map.

public void OnMapReady(GoogleMap map) { map.MapType = GoogleMap.MapTypeHybrid; }

One way to configure the GoogleMap is by manipulating properties on the
UiSettings
of the map object. The next code sample shows how to configure a
GoogleMap to display the zoom controls and a compass:
public void OnMapReady(GoogleMap map) { map.UiSettings.ZoomControlsEnabled = true; map.UiSettings.CompassEnabled = true; }
Interacting with the GoogleMap
The Android Maps API provides APIs that allow an Activity to change the viewpoint of the map, such as the camera position and zoom level:

MapFragment mapFrag = (MapFragment) FragmentManager.FindFragmentById(Resource.Id.my_mapfragment_container);
mapFrag.GetMapAsync(this);
...

public void OnMapReady(GoogleMap map)
{
    map.MoveCamera(cameraUpdate);
}
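The cameraUpdate used above is built from a CameraPosition. A minimal sketch follows; the target coordinates and bearing are illustrative, while the zoom (18) and tilt (25 degrees) match the description below:

public void OnMapReady(GoogleMap map)
{
    LatLng location = new LatLng(50.897778, 3.013333); // illustrative location

    CameraPosition.Builder builder = new CameraPosition.Builder();
    builder.Target(location);
    builder.Zoom(18);
    builder.Bearing(155);
    builder.Tilt(25);

    CameraPosition cameraPosition = builder.Build();
    CameraUpdate cameraUpdate = CameraUpdateFactory.NewCameraPosition(cameraPosition);
    map.MoveCamera(cameraUpdate);
}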
In the previous code snippet, a specific location on the map is
represented by the
LatLng
class. The zoom level is set to 18, which is an arbitrary measure of zoom used by Google Maps. The bearing is the compass
measurement clockwise from North. The Tilt property controls the
viewing angle and specifies an angle of 25 degrees from the
vertical. The following screenshot shows the
GoogleMap after executing
the preceding code:
Drawing on the Map
The Android Maps API provides APIs for drawing the following items on a map:
Markers - These are special icons that are used to identify a single location on a map.
Overlays - These are images that can be used to identify a collection of locations or an area on the map.

To add a marker to a map, create a new MarkerOptions object, set its position and title, and pass it to the AddMarker method of the GoogleMap, which returns the Marker object that the map uses:
public void OnMapReady(GoogleMap map) { MarkerOptions markerOpt1 = new MarkerOptions(); markerOpt1.SetPosition(new LatLng(50.379444, 2.773611)); markerOpt1.SetTitle("Vimy Ridge"); map.AddMarker(markerOpt1); }
The title of the marker will be displayed in an info window when the user taps on the marker. The following screenshot shows what this marker looks like:

The icon used by a marker can be customized by providing a BitmapDescriptor obtained from the BitmapDescriptorFactory class. The following list introduces some of these methods:
DefaultMarker(float colour)– Use the default Google Maps marker, but change the colour.
FromAsset(string assetName)– Use a custom icon from the specified file in the Assets folder.
FromBitmap(Bitmap image)– Use the specified bitmap as the icon.
FromFile(string fileName)– Create the custom icon from the file at the specified path.
FromResource(int resourceId)– Create a custom icon from the specified resource.
The following code snippet shows an example of creating a cyan coloured default marker:
public void OnMapReady(GoogleMap map)
{
    MarkerOptions markerOpt1 = new MarkerOptions();
    markerOpt1.SetPosition(new LatLng(50.379444, 2.773611));
    markerOpt1.SetTitle("Vimy Ridge");
    var bmDescriptor = BitmapDescriptorFactory.DefaultMarker(BitmapDescriptorFactory.HueCyan);
    markerOpt1.InvokeIcon(bmDescriptor);
    map.AddMarker(markerOpt1);
}

A marker's info window can also be customized: the image on the left has its contents customized, while the image on the right has its window and contents customized with rounded corners:
GroundOverlays
Unlike markers, which identify a specific location on a map, a GroundOverlay is an image that is used to identify a collection of locations or an area on the map.
Adding a GroundOverlay
Adding a ground overlay to a map is similar to adding a marker: create the overlay options and pass them to the GoogleMap.AddGroundOverlay method.

The Maps API also provides APIs to create geometric shapes on the map:

Circle - This will draw a circle on the map.

Polygon - This is a closed shape for marking areas on the map.

Each shape is described by an options object that is passed to the corresponding Add method on the GoogleMap, for example googleMap.AddPolyline(polylineOptions) or googleMap.AddCircle(circleOptions). A sketch of the circle case follows below.
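A minimal sketch of drawing a circle (the center, radius, and colors are illustrative; the Invoke* names are the Xamarin binding's method forms of the underlying setters):

CircleOptions circleOptions = new CircleOptions();
circleOptions.InvokeCenter(new LatLng(37.4, -122.1)); // illustrative center
circleOptions.InvokeRadius(1000);                     // radius in meters
circleOptions.InvokeFillColor(0X66FF0000);
circleOptions.InvokeStrokeColor(0X66FF0000);
circleOptions.InvokeStrokeWidth(0);
googleMap.AddCircle(circleOptions);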
Polygons
Polygons are created from a PolygonOptions object that holds the points of the shape. The polygon will be closed off by the AddPolygon method, which connects the first and last points, for example googleMap.AddPolygon(rectOptions); a sketch of building rectOptions follows below.
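For example, rectOptions for a simple rectangle could be built like this (the coordinates are illustrative; the first and last points do not need to match because AddPolygon closes the shape):

PolygonOptions rectOptions = new PolygonOptions();
rectOptions.Add(new LatLng(37.35, -122.0));
rectOptions.Add(new LatLng(37.45, -122.0));
rectOptions.Add(new LatLng(37.45, -122.2));
rectOptions.Add(new LatLng(37.35, -122.2));
googleMap.AddPolygon(rectOptions);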
Responding to user
The
MarkerClicked event is raised when the user taps on a marker. This event accepts a
GoogleMap.MarkerClickEventArgs object as a parameter. This object has two properties:

Handled – Set this property to true to indicate that the event has been handled.

Marker – This property is a reference to the marker that raised the MarkerClick event.
This code snippet shows an example of a
MarkerClick that will change
the camera position to a new location on the map:
void MapOnMarkerClick(object sender, GoogleMap.MarkerClickEventArgs markerClickEventArgs) { markerClickEventArgs.Handled = true; var marker = markerClickEventArgs.Marker; if (marker.Id.Equals(gotMauiMarkerId)) { LatLng InMaui = new LatLng(20.72110, -156.44776); // Move the camera to look at Maui. PositionPolarBearGroundOverlay(InMaui); googleMap.AnimateCamera(CameraUpdateFactory.NewLatLngZoom(InMaui, 13)); gotMauiMarkerId = null; polarBearMarker.Remove(); polarBearMarker = null; } else { Toast.MakeText(this, $"You clicked on Marker ID drag the marker, the user must first long-click on the marker and then their finger must remain on the map. When the user's finger is dragged around on the screen, the marker will move. When the user's finger lifts off the screen, the marker will remain in place.
The following list describes the various events that will be raised for a draggable marker:
GoogleMap.MarkerDragStart(object sender, GoogleMap.MarkerDragStartEventArgs e)– This event is raised when the user first drags the marker.
GoogleMap.MarkerDrag(object sender, GoogleMap.MarkerDragEventArgs e)– This event is raised as the marker is being dragged.
GoogleMap.MarkerDragEnd(object sender, GoogleMap.MarkerDragEndEventArgs e)– This event is raised when the user is finished dragging the marker.

The InfoWindowClick event is raised when the user clicks the info window of a marker. The following snippet shows how to subscribe to the event and handle it:

public void OnMapReady(GoogleMap map)
{
    map.InfoWindowClick += MapOnInfoWindowClick;
}

private void MapOnInfoWindowClick(object sender, GoogleMap.InfoWindowClickEventArgs e)
{
    Marker myMarker = e.Marker;
    // Do something with the marker that owns the clicked info window.
}
ckb-sdk-js
ckb-sdk-js is an SDK implemented in JavaScript published in NPM Registry, produced by the Nervos Foundation.
ckb-sdk-js provides APIs for developers to send requests to the CKB blockchain and can be used both in-browser and Node.js because actually it is implemented in Typescript, which is a superset of JavaScript and compiled into ES6.
Please note that ckb-sdk-js is still under development and NOT production ready. You should get familiar with CKB transaction structure and RPC before using it.
ckb-sdk-js won’t generate private keys. If you want to generate private keys, you can use
openssl
openssl rand 32 -hex
ckb-sdk-js includes three modules:
- RPC module: RPC module can send RPC requests to the CKB blockchain, the list of requests can be found in the CKB Project and the interfaces could be found in
DefaultRPC class in this module.
- Utils module: The Utils module provides useful methods for other modules.
- Types module: The Types module is used to provide the type definition of CKB Components according to the CKB Project. The CKB Project follows the snake case convention, which is listed in types/CKB_RPC in the RPC module. TypeScript follows the PascalCase convention, which is listed in this module.
All three modules are integrated into the core module called
@nervosnetwork/ckb-sdk-core
Prerequisites
Before you start using this SDK, you'll first need to install yarn on your system. There are a growing number of different ways to install Yarn. Please refer to installation documentation.
Installation
If you want to use
@nervosnetwork/ckb-sdk-core, you need to import it in your project and instantiate it with a node object. For now, the node object only contains one field named
url, the URI of the blockchain node you are going to communicate with.
- Import in the project
$ yarn add @nervosnetwork/ckb-sdk-core
Instantiate it with a node object
For now, the node object only contains one field named
url, the URI of the blockchain node you are going to communicate with.
const CKB = require('@nervosnetwork/ckb-sdk-core').default
const nodeUrl = ''
const ckb = new CKB(nodeUrl)
After that you can use the
ckb object to generate addresses, send requests, etc.
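For example, a simple read-only request against the node might look like this; the method name is assumed to mirror the CKB JSON-RPC get_tip_block_number call exposed in camelCase by the RPC module:

const CKB = require('@nervosnetwork/ckb-sdk-core').default
const ckb = new CKB('http://localhost:8114') // assumed local node URL

ckb.rpc.getTipBlockNumber().then(tipNumber => {
  // tipNumber is the number of the latest block known to the node
  console.log(`tip block number: ${tipNumber}`)
})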
Development key points
Code Examples
- Send Simple Transaction
- Send All Balance
- Send Transaction with multiple private key
- Deposit to and withdraw from Nervos DAO
Persistent Connection
If ckb-sdk-js is running in Node.js, please add
httpAgent or
httpsAgent to enable the persistent connection.
// HTTP Agent
const http = require('http')
const httpAgent = new http.Agent({ keepAlive: true })
ckb.rpc.setNode({ httpAgent })

// HTTPS Agent
const https = require('https')
const httpsAgent = new https.Agent({ keepAlive: true })
ckb.rpc.setNode({ httpsAgent })
Adding indexes for improving asset search
This topic provides additional suggestions for improving performance based on internal benchmarks and field engagement.
On the Asset Console, depending on the filters you commonly use, you might need to fine tune the performance of Smart IT. To improve performance, you can apply indexes to the fields listed in this topic, which are found to under perform during asset searches.
Best Practice
Disclaimer
Effectiveness of indexes depends on the distribution of data and the use of query criteria. To ensure better performance, carefully test the indexes before you implement them in your production environment.
You can create separate indexes on the fields listed in this topic. You may preferably use BMC Atrium Class Manager for adding indexes on the fields pertaining to BMC.CORE:BMC_BaseElement. The AST:Attributes fields requires BMC Remedy Developer Studio for creating indexes. Creating indexes by using BMC Atrium Class Manager is a handy way, but it is not a mandatory method. You may still create indexes using BMC Remedy Developer Studio, if it is available. Your preference depends on the availability and accessibility to these tools.
Indexes for searching asset by using Keyword
Indexes for searching assets by using Scanned code
You can create composite indexes on the following fields:
Indexes for searching assets by using Type/Subtype
Indexes for searching assets by using Product Category/Name
Depending on the common search pattern followed at your site, you can create separate or composite indexes on the following fields:
Indexes for searching assets by using other available filters
(Change request tickets only) You can create separate indexes for searching assets by using keywords in the default text field:
Indexes for searching assets by using keywords
Enforcing TLS version 1.2 for Hue
CDP Data Hub cluster components and services such as the Cloudera Manager web UI, the Hue web UI, and the Impala web UI communicate with each other using TLS 1.2 as the default TLS protocol, and TLS 1.1 or 1.0 if a client requests it. You can enforce these services to only use TLS 1.2 by specifying the SSL protocol in Cloudera Manager.
- In Cloudera Manager, go to the Hue service configuration and add the following line in the SSL Protocol field:
SSLProtocol -all +TLSv1.2
- Click Save Changes.
- Restart the Hue service.
- Verify that TLS version 1.2 is used for encryption and all the ciphers used are “strong” by using a security scanner such as Nmap.
- Open a CLI console on a machine in your cluster.
- Run the following command:
nmap -sV --script +ssl-enum-ciphers -p 8889 [***HOSTNAME***]

Replace [***HOSTNAME***] with the actual name of the host. The following is a sample output. It shows that only TLS 1.2 is available for the handshake and that all the ciphers are “strong”:
Starting Nmap 7.80 ( ) at 2020-30-10 11:16 PDT Nmap scan report for hostname.example.com (a.b.c.d) Host is up (-1800s latency). PORT STATE SERVICE VERSION 8889/tcp open ssl/http Apache httpd 2.4.6 ((CentOS) OpenSSL/1.0.2k-fips) | ssl-enum-ciphers: | SSLv3: No supported ciphers found | TLSv1 |_3DES_EDE_CBC_SHA - strong | TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA - strong | TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 - | | TLS_RSA_WITH_AES_256_CBC_SHA - strong | TLS_RSA_WITH_AES_256_CBC_SHA256 - strong | TLS_RSA_WITH_AES_256_GCM_SHA384 - strong | compressors: | NULL |_ least strength: strong Service detection performed. Please report any incorrect results at . Nmap done: 1 IP address (1 host up) scanned in 22.43 seconds You have new mail in /var/spool/mail/root
- Set the
SSL_CIPHER_LIST property for the Hue Server in Cloudera Manager.
- In Cloudera Manager, go to the Hue service configuration and specify the following in the Hue Server Advanced Configuration Snippet (Safety Valve) for hue_safety_valve_server.ini field:
ssl_cipher_list=DEFAULT:!aNULL:!eNULL:!LOW:!EXPORT:!SSLv2:!SSLv3:!TLSv1

The SSL_CIPHER_LIST property is a list of one or more cipher suite strings separated by colons. This restricts the use of the default cipher suite before establishing an encrypted SSL connection.
- Click Save Changes.
- Restart the Hue service.
Payment status with the Portal API
After creation, each request has a payment status, you can view the data of a request which contains the payment status via:
You will receive back an object that looks like this:
To get the payment status of a Request you can use the requestData object to check if the balance is greater than or equal to the expectedAmount.
If the balance >= expectedAmount - this means the request is paid. If the balance > 0 but < expectedAmount - this means the request is partially paid. If the balance == 0 - this means the request is unpaid.
You can use the following snippet to see if the request has been paid.
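A minimal sketch of that check is shown below. It assumes the requestData object exposes balance.balance and expectedAmount as integer-like strings, as described above, so the values are compared as big integers; the helper name is hypothetical.

// Hypothetical helper: classify the payment status of a request.
function getPaymentStatus(requestData) {
  const balance = BigInt(requestData.balance ? requestData.balance.balance : 0);
  const expectedAmount = BigInt(requestData.expectedAmount);

  if (balance >= expectedAmount) return 'paid';
  if (balance > 0n) return 'partially paid';
  return 'unpaid';
}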
If the Request is unpaid, it might be useful to use the metadata field called ‘state’ - the state will return the current payment status of the request, either ‘created’, ‘accepted’, ‘pending’ or ‘canceled’.
Learn how to create or change the schedule for data refresh in a materialized view.
To keep the data in a materialized view consistent and relevant, we recommend that you periodically refresh it with data from the parent table or tables. ThoughtSpot makes it easy, by letting you schedule regular refreshes at daily, weekly, or monthly intervals.
To schedule materialization of a view, follow these steps:
To find your view, click Data in the top menu.
Under Data Objects at the top of the page, choose Views.
Click the name of your view.
Click Joins.
- Under Materialization, in the Update schedule section, either update an existing schedule, or create a new schedule:
- To update an existing schedule, click Daily, Weekly, or Monthly.
- To create a schedule, click None.
15.7. logging — Logging facility for Python¶
Source code: Lib/logging/__init__.py
New in version 2.3.
This module defines functions and classes which implement a flexible event logging system for applications and libraries. If you are unfamiliar with logging, the best way to get to grips with it is to see the tutorials (see the links on the right).
15.7.2. Logging Levels¶

15.7.4. Formatter Objects¶

The format string passed to a Formatter contains standard Python %-style mapping keys; see String Formatting Operations for more information on string formatting. The useful mapping keys in a LogRecord are given in the section on LogRecord attributes.
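For example, a Formatter built from those mapping keys can be attached to a handler like this (the logger name and format string are illustrative):

import logging

handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(name)s: %(message)s'))

logger = logging.getLogger('myapp')
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.info('Application started')  # e.g. "2017-01-01 12:00:00,000 INFO myapp: Application started"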
- class logging.Formatter(fmt=None, datefmt=None)¶

Returns a new instance of the Formatter class. The instance is initialized with a format string for the message as a whole, as well as a format string for the date/time portion of a message. If no fmt is specified, '%(message)s' is used. If no datefmt is specified, the ISO8601 date format is used.

Changed in version 2.5: funcName was added.

Changed in version 2.6: processName was added.
15.7.8. LoggerAdapter Objects¶
LoggerAdapter instances are used to conveniently pass contextual
information into logging calls. For a usage example, see the section on
adding contextual information to your logging output.
New in version 2.6.
- class
logging.
LoggerAdapter(logger, extra)¶
Returns an instance of LoggerAdapter initialized with an underlying Logger instance and a dict-like object.
process(msg, kwargs)¶

Modifies the message and/or keyword arguments passed to a logging call in order to insert contextual information. In addition, LoggerAdapter supports the following methods of Logger:
debug(),
info(),
warning(),
error(),
exception(),
critical(),
log() and
isEnabledFor().
These methods have the same signatures as their counterparts in
Logger,
so you can use the two types of instances interchangeably for these calls.
Changed in version 2.7: The
isEnabledFor() method was added to
LoggerAdapter.
This method delegates to the underlying logger.
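A minimal sketch of the LoggerAdapter pattern (the extra dict keys and format string are illustrative):

import logging

logging.basicConfig(format='%(asctime)s %(ip)s %(user)s %(message)s', level=logging.INFO)

logger = logging.getLogger(__name__)
adapter = logging.LoggerAdapter(logger, {'ip': '192.168.0.1', 'user': 'fred'})

# The dict passed as 'extra' fills the %(ip)s and %(user)s fields of the format string.
adapter.info('Login attempt')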
15.7.9. Thread Safety¶

The logging module is intended to be thread-safe without any special work needing to be done by its clients. It achieves this through using threading locks: there is one lock to serialize access to the module's shared data, and each handler also creates a lock to serialize access to its underlying I/O.
15.7.10. Module-Level Functions¶
In addition to the classes described above, there are a number of module- level functions.
logging.getLogger([name])¶

Return a logger with the specified name or, if no name is specified, return the root logger of the hierarchy.
BindableObject.BindingContext Property

Definition
Gets or sets object that contains the properties that will be targeted by the bound properties that belong to this BindableObject.
public object BindingContext { get; set; }
member this.BindingContext : obj with get, set
Property Value
An Object that contains the properties that will be targeted by the bound properties that belong to this BindableObject. This is a bindable property.
Remarks
The following example shows how to apply a BindingContext and a Binding to a Label (inherits from BindableObject):
var label = new Label ();
label.SetBinding (Label.TextProperty, "Name");
label.BindingContext = new {Name = "John Doe", Company = "Xamarin"};
Debug.WriteLine (label.Text); //prints "John Doe"
RDS Desktops and Applications in Horizon 7 guide.
For information about preparing Linux virtual machines for remote desktop deployment, see the Setting Up Horizon 7 for Linux Desktops guide.
To avoid most of the manual configuration, create a JBoss Server that has "JBoss 4.2 Runtime" as its runtime name and "JBoss Application Server 4.2" as its server name; then the current example projects should work. In an upcoming release we will make sure these settings can be automatically adjusted.
When you use the VMware Blast display protocol or the PCoIP display protocol, you can extend a remote desktop to multiple monitors. If you have a Mac with Retina Display, you can see the remote desktop in full resolution.
Using Multiple Monitors.
The remote desktop must have View Agent 6.2 or later, or Horizon Agent 7.0 or later, installed. For best performance, the virtual machine should have at least 2 GB of RAM and 2 vCPUs. This feature might require good network conditions, such as a bandwidth of 1000 Mbps with low network latency and a low package loss rate.
Using Full-Screen Mode With Multiple Monitors
When a remote desktop window is open, you can use the menu item or the expander arrows in the upper-right corner of the desktop window to extend the remote desktop across multiple monitors. You can select the menu item to make the remote desktop fill only one monitor. With this option, the monitors do not have to be in the same mode. For example, if you are using a laptop connected to an external monitor, the external monitor can be in portrait mode or landscape mode.
You can select a full-screen option from the Settings dialog box after you connect to a server and before you open a remote desktop. Click the Settings button (gear icon) in the upper right corner of the desktop and application selection window, select the remote desktop, and select a full-screen option from the Full Screen drop-down menu.
You can use the selective multiple-monitor feature to display a remote desktop window on a subset of your monitors. For more information, see Select Specific Monitors in a Multiple-Monitor Setup.
Using Remote Desktops With Split View
With Split View, which is supported in El Capitan (10.11) and later operating systems, you can fill your Mac screen with two applications without manually moving and resizing windows. You can use Split View with remote desktops in full-screen mode (Full Screen or Use Single Display in Full Screen option).
Using a High-Resolution Mac With Retina Display
When you use the VMware Blast display protocol or the PCoIP display protocol, Horizon Client also supports very high resolutions for those client systems with Retina Display. After you connect to a remote desktop, you can select the menu item. This menu item appears only if the client system supports Retina Display.
If you use Full Resolution, the icons on the remote desktop are smaller but the display is sharper.
Create a knowledge article from an incident

When you are ready to close an incident, you can create a knowledge article so the next time the issue comes up the resolution is easy to find.

Before you begin
Role required: itil

About this task
When an incident is closed either by the caller or automatically, a draft knowledge article is created.

Procedure
1. Open a resolved incident that you want to close.
2. Ensure that the Knowledge check box is selected and that a resolution is entered in the Additional comments (Customer visible) field.
3. Click Close incident.

A new draft knowledge article is created. The content in the fields listed in the following table is copied from the Incident form to the Knowledge form.

Field on Incident form | Field on Knowledge form
Short description | Short description
Additional comments | Text
Number | Source

The Knowledge related list on the Incident form is populated with the new draft knowledge article. The draft article does not appear in the knowledge base (KB) for users until it is reviewed and published. If the knowledge submission workflow is enabled, the comments in the incident Short description and Additional comments fields become a knowledge submission instead of an article. The KB Submissions related list on the Incident form is populated with the new knowledge submission. For more information, see Knowledge workflows.

What to do next
To see the draft articles, navigate to Knowledge > My Knowledge Articles and then open the draft article by its KB number in the Knowledge form.

Related Tasks: Assign and update incidents, Use a dependency view to locate affected CIs, Promote an incident
You need to run data collection for the consolidated cluster before you can provision blueprints into that cluster.
About this task
Procedure
- Log in to the vRealize Automation Rainpole portal.
- Open a Web browser and go to.
- Log in using the following credentials.
- Navigate to .
- In the Compute Resource column, hover the mouse pointer over the cluster NYC01, and click Data Collection.
- Click on the Request now buttons in each field on the page.
Wait a few seconds for the data collection process to complete.
- Click Refresh, and verify that the Status for both Inventory and Network and Security Inventory shows Succeeded.
A network pool is a group of undifferentiated networks used to create vApp networks and internal organization virtual datacenter networks. You can configure a virtual data center template to automatically connect to a network pool upon instantiation or to connect to no network pool.
Procedure
- Choose how the virtual data center connects to a network pool.
- Click Next.
Cores
Cores are essentially other programs and games that run through RetroArch. RetroArch requires cores to run any content.
Tip
Many game consoles may have multiple emulator cores, so the question of which one is the best may come up. Emulators can be designed to be more accurate at the cost of a performance hit; check out the Emulation General Wiki for a good look at what will suit your needs and hardware.
Installing cores through RetroArch interface
- Navigate to Online Updater
- Select Core Updater
- Select the core you want to download
Installing cores through package manager (Ubuntu PPA only)
Note
Installing RetroArch through the Ubuntu PPA will disable the "Core Updater" option in RetroArch's interface, therefore core installation needs to happen through the Ubuntu package manager.
- Open a terminal
- Start typing sudo apt-get install libretro-
- Press tab a few times until all available possibilities show, press space to expand the list.
- Now type the full name of the core you want to install Example: sudo apt-get install libretro-nestopia
- Press enter and follow the process to install
You enable the Per App Tunnel component in the VMware Tunnel settings to set up per app tunneling functionality for Android devices. Per app tunneling allows your internal and managed public applications to access your corporate resources on an app-by-app basis.
About this task
The VPN can automatically connect when a specified app is launched.
Procedure
- In the AirWatch admin console, navigate to .
- The first time you configure VMware Tunnel, select Configuration and follow the configuration wizard. Otherwise, select Override and select Enable . Then click Configure.
- In the Configuration Type page, enable Per-App Tunnel (Linux Only). Click Next.
Leave Basic as the deployment model.
- In the Details page, for the Per-App Tunneling Configuration, enter the hostname and port of the VMware Tunnel server.
- Review the summary of your configuration and click Save.
You are directed to the system settings configuration page.
- Select the General tab and download the Tunnel virtual appliance.
You can use VMware Unified Access Gateway to deploy the Tunnel server.
What to do next
Install the VMware Tunnel server. For instructions, see the VMware Tunnel Guide on the AirWatch Resources Web site.
FAQ: MongoDB Storage¶
This document addresses common questions regarding MongoDB’s storage system.
Storage Engine Fundamentals¶
What is a storage engine?¶

A storage engine is the part of a database that is responsible for managing how data is stored, both in memory and on disk. Many databases support multiple storage engines, where different engines perform better for specific workloads.

How frequently does WiredTiger write to disk?¶
If the write operation includes a write concern of
j: true, WiredTiger forces a sync of the WiredTiger journal files.
Because MongoDB uses a journal file size limit of 100 MB, WiredTiger creates a new journal file approximately every 100 MB of data. When WiredTiger creates a new journal file, WiredTiger syncs the previous journal file.
MMAPv1 Storage Engine¶ MMAPv1.
How frequently does MMAPv1 write to disk?¶
In the default configuration for the MMAPv1 storage engine, MongoDB writes to the data files on disk every 60 seconds and writes to the journal files roughly every 100 milliseconds.
To change the interval for writing to the data files, use the
storage.syncPeriodSecs setting. For the journal files, see
storage.journal.commitIntervalMs setting.
These values represent the maximum amount of time between the completion of a write operation and when MongoDB writes to the data files or to the journal files. In many cases MongoDB and the operating system flush data to disk more frequently, so that the above values represent a theoretical maximum.
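Expressed as a mongod configuration file snippet, those two MMAPv1-related intervals would look roughly like this (the values shown are simply the defaults mentioned above):

storage:
  syncPeriodSecs: 60
  journal:
    enabled: true
    commitIntervalMs: 100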
For best performance, the majority of your active set should fit in RAM.
What are page faults?¶

Page faults can occur as MongoDB reads from or writes data to parts of its data files that are not currently located in physical memory. In contrast, operating system page faults happen when physical memory is exhausted and pages of physical memory are swapped to disk.
See Page Faults for more information.
Can I manually pad documents to prevent moves during updates?¶
Changed in version 3.0.0.
With the MMAPv1 storage engine, an update can cause a document to move on disk if the document grows in size. To minimize document movements, MongoDB uses padding.
You should not have to pad manually because by default, MongoDB uses Power of 2 Sized Allocations to add padding automatically. The Power of 2 Sized Allocations ensures that MongoDB allocates document space in sizes that are powers of 2, which helps ensure that MongoDB can efficiently reuse free space created by document deletion or relocation as well as reduce the occurrences of reallocations in many cases.. | https://docs.mongodb.com/manual/faq/storage/ | 2017-04-23T05:22:16 | CC-MAIN-2017-17 | 1492917118477.15 | [] | docs.mongodb.com |
Learning Objectives
Welcome to the intermediate guide for spatial data collection with OpenStreetMap. In the previous unit you learned how to draw points, lines and shapes in JOSM, how to open your GPS waypoints and tracks in JOSM and how to download, edit and upload your changes on OSM. In this module, we will describe relations, JOSM editing tools and editing techniques in greater detail.
Note
While this module is not extremely advanced, it is a step higher than the previous unit. If you don’t feel like you fully understand the lessons leading up to this, you may wish to practise a little bit more before continuing.
There are a few ways you can access more editing tools in JOSM. We will look more at the default tools, as well as additional tools available through plugins.. Some of the most useful functions are described here:
This allows you to divide a line into two separate lines. This is useful if you want to add different attributes to different parts of a road, such as a bridge. To use this function, select a point in the middle of the line that you want to split, go to Tools ‣ Split Way and your line should be split in two., go to Tools ‣ Combine Way.
Note that you if are combining roads that have different directions, you might get this warning:
If the roads are connected and go in the same direction, click Reverse and Combine.
This will change the direction of the line. If the line incorrectly represents a road or river that is one-way, you may want to change its direction. Unless someone has intentionally created a way to be one way you do not usually have to worry about altering the direction because ways in OSM default to represent both directions.
If your line has too many nodes in it and you’d like to make it simpler, this will remove some of the points from a line.
If you are trying to make a circular shape, draw the circle as best you can and then select three nodes and the function. It will help arrange your points in a circle.
This function will align a series of points into a straight line. With long lines it is best to select sections of the line to straighten. Be careful as this does have the tendency to shift the line a little.
This function is very useful for drawing regular shapes such as buildings. After you draw an area, this function will reshape it to have square corners. This feature is most useful for other regularly shaped features, such as tennis courts or landuse areas (Using the Building Plugin, which is explained below, might be easier for buildings).
This plugin is by far one of the most useful tools for editing (digitising). Install it as with any other plugin. It will appear as an icon on the left hand toolbar. The functionality of this tool is explained here:
The Building tool allows you to create shapes with 90 degree corners with just three clicks. First, trace the edge of the building and then drag out the line to make it a polygon.
You can also create more complicated buildings by using the merge option. Create your building outline, select all of the polygons (press SHIFT to highlight them all) and then press SHIFT + J to merge the objects.
Furthermore, you can also change the default settings (size of building and default tags) by going to Data ‣ Set building size.
This is useful if you are drawing many buildings of a known dimension (such as five by six metres). If you are mapping infrastructure which requires tags other than building=yes, you can set the desired default tags by going to Advanced....
The plugin utilsplugin2 has several features that are also useful for editing.
After you install this plugin, a new menu will appear called More Tools.
The following tools are some of the most useful:
This tool is helpful for adding missing nodes in intersections of selected ways. It is good practice that roads and rivers should always have common nodes where they intersect.
This tool simplifies adding a source tag. It remembers the source that was specified last and adds it as remembered source tag to your objects. You can insert the source with just one click. (2) just draw the object again (3) select the old and new object (4) press Replace Geometry to transfer all the information over.
Utilsplugin2 also provides a new selection menu that provides more tools:
These tools are some of the most useful:
This tool allows you to deselect nodes, which makes it useful for tagging the objects selected. This tool is necessary if you have mapped several polygon objects with similar attributes and would like to tag the objects without tagging the nodes. To do so, select all of the objects - polygons, ways and relations. Then unselect the nodes and tag appropriately.
In the first unit we learned that there are three types of objects that can be drawn in OSM - points (nodes), lines (ways) and polygons. Lines contain numerous points, and the line itself carries the attributes that define what it represents. Polygons are the same as lines, except that the the line must finish where it begins in order to form a shape.
In fact, there is one other type of object in OSM, and these are called relations. In the same way that a line consists of other points, a relation contains a group of other objects, be they points, lines or polygons. If you are looking to obtain advanced editing skills, then understanding and knowing how to properly edit relations is important.
For example, imagine that you want to map a building that has courtyards in the centre. You would need to draw a polygon around the outside of the building, and you would need other polygons around the courtyards to indicate that they are not part of the building. This is an example of a relation. The relation would contain several polygons - and the attributes of the building would be attached to the relation, not the polyg. In this section we will go over Multipolygons and Routes.
The multipolygon above contains a polygon for the outer limits of the building and two more to mark the inner courtyards. To create a relation from these three polygons we need to:
The polygons should automatically be created as a multi-polygon.
This opens the relation editor. Notice that in the lower-left corner is a list of the members of the relation. One has been automatically defined with the role of “outer” (the outer polygon), and the other carries the role of “inner.”
At the top are a list of the tags applied to this relation. Right now only one tag exists, type=multipolygon. This tag indicates what type of relation the object is.
The data behind the relation in our example is visible on OSM: You can see this multipolygon on OSM by going to. It will appear on OSM like this:
The river below is another example of a multipolygon. Effectively it is the same as the building example, but with a greater number of members and covering a much larger area. It can be viewed on OSM here:.
This river contains ten ways that are connected like a long polygon.
Relations are also very useful for creating, labeling and editing large linestrings; for example, bus routes, hiking trails, bicycle paths, etc. These differ from multipolygons because they are relations with members, as supposed to complex areas. A linestring could simply be one line with multiple members. Additional features, such as bus stops represented by separate nodes can also be tagged as relation members.
To create a linestring relation:
Relations are difficult to understand and do not have to be used often, but they are necessary to know about. As you get more developed with your OSM skills and want to create more complex buildings, rivers and routes, relations will be useful. | http://docs.inasafe.org/en/training/old-training/intermediate/osm/301-advanced-editing.html | 2017-04-23T05:26:27 | CC-MAIN-2017-17 | 1492917118477.15 | [] | docs.inasafe.org |
The integration framework includes an integration layer with an API. This allows you to:
- plug in an eCommerce system and pull product data into AEM
- build AEM components for commerce capabilities independent of the specific eCommerce engine
The integration framework includes an integration layer with an API. This allows you to:.
By submitting your feedback, you accept the Adobe Terms of Use.
Thank you for submitting your feedback.
Any questions? | https://docs.adobe.com/docs/en/aem/6-2/develop/ecommerce.html | 2017-04-23T05:30:32 | CC-MAIN-2017-17 | 1492917118477.15 | [] | docs.adobe.com |
Delegating Permissions to Administer IAM Users, Groups, and Credentials
If you are signed in with AWS account (root) credentials, you have no restrictions on administering IAM users or groups or on managing their credentials. However, IAM users must explicitly be given permissions to administer users or credentials for themselves or for other IAM users. This topic describes IAM policies that let IAM users manage other users and user credentials.
Topics
Overview
In general, the permissions that are required in order to administer users, groups, and
credentials correspond to the API actions for the task. For example, in order to create users,
a user must have the
iam:CreateUser permission (API command:
CreateUser). To allow a user to
create other IAM users, you could attach a policy like the following one to that user:
Copy
{ "Version": "2012-10-17", "Statement": { "Effect": "Allow", "Action": "iam:CreateUser", "Resource": "*" } }
In a policy, the value of the
Resource element depends on the action and what
resources the action can affect. In the preceding example, the policy allows a user to create
any user (
* is a wildcard that matches all strings). In contrast, a policy that
allows users to change only their own access keys (API actions
CreateAccessKey and
UpdateAccessKey) typically
has a
Resource element where the ARN includes a variable that resolves to the
current user's name, as in the following example (replace
ACCOUNT-ID-WITHOUT-HYPHENS with your AWS account ID):
Copy
{ "Version": "2012-10-17", "Statement": { "Effect": "Allow", "Action": [ "iam:CreateAccessKey", "iam:UpdateAccessKey" ], "Resource": "arn:aws:iam::
accountid:user/${aws:username}" } }
In the previous example,
${aws:username} is a variable that resolves to the
user name of the current user. For more information about policy variables, see IAM Policy Variables Overview.
Using a wildcard character (
*) in the action name often makes it easier to
grant permissions for all the actions related to a specific task. For example, to allow users
to perform any IAM action, you can use
iam:* for the action. To allow users to
perform any action related just to access keys, you can use
iam:*AccessKey* in
the
Action element of a policy statement. This gives the user permission to
perform the
CreateAccessKey,
DeleteAccessKey,
GetAccessKeyLastUsed,
ListAccessKeys, and
UpdateAccessKey
actions. (If an action is added to IAM in the future that has "AccessKey" in the name, using
iam:*AccessKey* for the
Action element will also give the user
permission to that new action.) The following example shows a policy that allows users to
perform all actions pertaining to their own access keys (replace
ACCOUNT-ID-WITHOUT-HYPHENS with your AWS account ID):
Copy
{ "Version": "2012-10-17", "Statement": { "Effect": "Allow", "Action": "iam:*AccessKey*", "Resource": "arn:aws:iam::ACCOUNT-ID-WITHOUT-HYPHENS:user/${aws:username}" } }
Some tasks, such as deleting a group, involve multiple actions: You must first remove users from the group, then detach or delete the group's policies, and then actually delete the group. If you want a user to be able to delete a group, you must be sure to give the user permissions to perform all of the related actions.
Permissions for Working in the AWS Management Console
The preceding examples show policies that allow a user to perform the actions with the AWS CLI or the AWS SDKs. If users want to use the AWS Management Console to administer users, groups, and permissions, they need additional permissions. As users work with the console, the console issues requests to IAM to list users and groups, get the policies associated with a user or group, get AWS account information, and so on.
For example, if user Bob wants to use the console to change his own access keys, he goes
to the IAM console and chooses Users. This action causes the console to
make a
ListUsers request. If
Bob doesn't have permission for the
iam:ListUsers action, the console is denied
access when it tries to list users. As a result, Bob can't get to his own name and to his own
access keys, even if he has permissions for the
CreateAccessKey and
UpdateAccessKey
actions.
If you want to give users permissions to administer users, groups, and credentials with the AWS Management Console, you need to include permissions for the actions that the console performs. For some examples of policies that you can use to grant a user for these permissions, see Example Policies for Administering IAM Resources. | http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_delegate-permissions.html | 2017-04-23T05:24:48 | CC-MAIN-2017-17 | 1492917118477.15 | [] | docs.aws.amazon.com |
Flask-FlatPages¶
Flask-FlatPages provides a collection of pages to your Flask application. Pages are built from “flat” text files as opposed to a relational database.
- Works on Python 2.6, 2.7 and 3.3+
- BSD licensed
- Latest documentation on Read the Docs
- Source, issues and pull requests on Github
- Releases on PyPI
Installation¶
Install the extension with pip:
$ pip install Flask-FlatPages
or you can get the source code from github.
Configuration¶
To get started all you need to do is to instantiate a
FlatPages object
after configuring the application:
from flask import Flask from flask.
FLATPAGES_ROOT
- Path to the directory where to look for page files. If relative, interpreted as relative to the application root, next to the
staticand
templatesdirectories. Defaults to
pages.
FLATPAGES_EXTENSION
Filename extension for pages. Files in the
FLATPAGES_ROOTdirectory without this suffix are ignored. Defaults to
.html.
Changed in version 0.6: Support multiple file extensions via sequences, e.g.:
['.htm', '.html']or via comma-separated strings:
.htm,.html.
FLATPAGES_ENCODING
- Encoding of the pages files. Defaults to
utf8.
FLATPAGES_HTML_RENDERER
Callable or import string for a callable that takes at least the unicode body of a page, and return its HTML rendering as a unicode string. Defaults to
pygmented_markdown().
Changed in version 0.5: Support for passing the
FlatPagesinstance as second argument.
Changed in version 0.6: Support for passing the
Pageinstance as third argument.
Renderer functions need to have at least one argument, the unicode body. The use of either
FlatPagesas second argument or
FlatPagesand
Pageas second respective third argument is optional, and allows for more advanced renderers.
FLATPAGES_MARKDOWN_EXTENSIONS
New in version 0.4.
List of Markdown extensions to use with default HTML renderer. Defaults to
['codehilite'].
For passing additional arguments to Markdown extension, e.g. in case of using footnotes extension, use next syntax:
['footnotes(UNIQUE_IDS=True)'].
To disable line numbers in CodeHilite extension, which are enabled by default, use:
['codehilite(linenums=False)']
FLATPAGES_AUTO_RELOAD
- Wether to reload pages at each request. See Laziness and caching for more details. The default is to reload in
DEBUGmode only.
Please note that multiple FlatPages instances can be configured by using a name for the FlatPages instance at initializaton time:
flatpages = FlatPages(name="blog")
To configure this instance, you must use modified configuration keys, by adding
the uppercase name to the configuration variable names:
FLATPAGES_BLOG_*
How it works¶
When first needed (see Laziness and caching for more about this),
the extension loads all pages from the filesystem: a
Page object is
created for all files in
FLATPAGES_ROOT whose name ends with
FLATPAGES_EXTENSION.
Each of these objects is associated to a path:
the slash-separated (whatever the OS) name of the file it was loaded from,
relative to the pages root, and excluding the extension. For example, for
an app in
C:\myapp with the default configuration, the path for the
C:\myapp\pages\lorem\ipsum.html is
lorem/ipsum.
Each file is made of a YAML mapping of metadata, a blank line, and the page body:
title: Hello published: 2010-12-22 Hello, *World*! Lorem ipsum dolor sit amet, …
The body format defaults to Markdown with Pygments baked in if available,
but depends on the
FLATPAGES_HTML_RENDERER configuration value.
To use Pygments, you need to include the style declarations separately.
You can get them with
pygments_style_defs():
@app.route('/pygments.css') def pygments_css(): return pygments_style_defs('tango'), 200, {'Content-Type': 'text/css'}
and in templates:
<link rel="stylesheet" href="{{ url_for('pygments_css') }}">
Using custom Markdown extensions¶ = []
Using custom HTML renderers¶
As pointed above, by default Flask-FlatPages expects that flatpage body
contains Markdown markup, so uses
markdown.markdown function to render
its content. But due to
FLATPAGES_HTML_RENDERER setting you can specify
different approach for rendering flatpage body.
The most common necessity of using custom HTML renderer is modifyings default Markdown approach (e.g. by pre-rendering Markdown flatpages with Jinja), or using different markup for rendering flatpage body (e.g. ReStructuredText). Examples below introduce how to use custom renderers for those needs.
Pre-rendering Markdown flatpages with Jinja¶
from flask import Flask, render_template_string from flask_flatpages import FlatPages from flask_flatpages.utils import pygmented_markdown def my_renderer(text): prerendered_body = render_template_string(text) return pygmented_markdown(prerendered_body) app = Flask(__name__) app.config['FLATPAGES_HTML_RENDERER'] = my_renderer pages = FlatPages(app)
ReStructuredText flatpages¶
from docuitls.core import publish_parts from flask import Flask from flask_flatpages import FlatPages def rst_renderer(text): parts = publish_parts(source=text, writer_name='html') return parts['fragment'] app = Flask(__name__) app.config['FLATPAGES_HTML_RENDERER'] = rst_renderer pages = FlatPages(app)
Laziness and caching¶
FlatPages does not hit the filesystem until needed but when it does,
it reads all pages from the disk at once.
Then, pages are not loaded again unless you explicitly ask for it with
FlatPages.reload(), or on new requests depending on the configuration.
(See
FLATPAGES_AUTO_RELOAD.)
This design was decided with Frozen-Flask in mind but should work even if you don’t use it: you already restart your production server on code changes, you just have to do it on page content change too. This can make sense if the pages are deployed alongside the code in version control.
If you have many pages and loading takes a long time, you can force it at initialization time so that it’s done by the time the first request is served:
pages = FlatPages(app) pages.get('foo') # Force loading now. foo.html may not even exist.
Loading everything every time may seem wasteful, but the impact is mitigated
by caching: if a file’s modification time hasn’t changed, it is not read again
and the previous
Page object is re-used.
Likewise, the YAML and Markdown parsing is both lazy and cached: not done until needed, and not done again if the file did not change.
Changelog¶
Version 0.6¶
Released on 2015-02-09
- Python 3 support.
- Allow multiple file extensions for FlatPages.
- The renderer function now optionally takes a third argument, namely the
Pageinstance.
- It is now possible to instantiate multiple instances of
FlatPageswith different configurations. This is done by specifying an additional parameter
nameto the initializer and adding the same name in uppercase to the respective Flask configuration settings.
Version 0.5¶
Released on 2013-04-02
- Change behavior of passing
FLATPAGES_MARKDOWN_EXTENSIONSto renderer function, now the
FlatPagesinstance is optionally passed as second argument. This allows more robust renderer functions.
Version 0.4¶
Released on 2013-04-01
- Add
FLATPAGES_MARKDOWN_EXTENSIONSconfig to setup list of Markdown extensions to use with default HTML renderer.
- Fix a bug with non-ASCII filenames.
Version 0.3¶
Released on 2012-07-03
- Add
FlatPages.init_app()
- Do not use namespace packages anymore: rename the package from
flaskext.flatpagesto
flask_flatpages
- Add configuration files for testing with tox and Travis.
Version 0.2¶
Released on 2011-06-02
Bugfix and cosmetic release. Tests are now installed alongside the code. | http://flask-flatpages.readthedocs.io/en/latest/ | 2017-04-23T05:19:38 | CC-MAIN-2017-17 | 1492917118477.15 | [] | flask-flatpages.readthedocs.io |
The AEM generic solution provides methods of managing the commerce information held within the repository (as opposed to using an external ecommerce engine). This includes:
Administering (generic).
Navigate to the Products console, via Commerce.
Using the Products console navigate to the required location.
Use the Import Products icon to open the wizard..
Select Next to import the products, a log of the actions taken will be shown.
Note
The products will be imported to, or relative to, the current location.
Note.
Note):
- Create Product
- Create Product Variation
The wizard will open. Use the Basic and Product Tabs to enter the product attributes for the new product or product variant.
Note
Title and SKU are the minimum required to create a product or variant.
Select Create to save the information..
Using the Products console (via Commerce) navigate to your product information.
- Product Page
- Edit Product Page.
Note
Everything related to multiple assets is done with the Touch-optimized UI.
Navigate to the Products console, via Commerce.
Using the Products console, navigate to the required product.
Note.
Note
The assets you can select are from Assets.
Tap/click Done icon..
Navigate to your product page.
Edit the product component.
Type the Image Category you chose (cat1 for example).
Tap/click Done. The page refreshes and the correct asset should be displayed.
Note..
Navigate to the page where you want to add the component.
Drag and drop the component in the page.
You can either:
- click the component and then click Edit icon
- make a slow double click
Click the fullscreen icon.
Click the Launch Map icon.
Click one of the shape icons.
Modify and move the shape as required.
Click the shape.
Clicking the browse icon opens the Asset Picker.
Note.
Note
To generate a Catalog:).
Note.
Note.
-
By submitting your feedback, you accept the Adobe Terms of Use.
Thank you for submitting your feedback.
Any questions? | https://docs.adobe.com/docs/en/aem/6-2/administer/ecommerce/generic.html | 2017-04-23T05:30:51 | CC-MAIN-2017-17 | 1492917118477.15 | [array(['/content/docs/en/aem/6-2/administer/ecommerce/generic/_jcr_content/contentbody/image.img.png/1400684343928.png',
'file'], dtype=object) ] | docs.adobe.com |
SolOS Event Reference
This section describes SolOS syslog messages related to the following router events.
Unless otherwise stated, these event events should apply to both Solace VMRs and appliances.
- system-wide events
- Message VPN events
- local publisher, subscriber, and client events
Click the link below to access the list of SolOS syslog events: | http://docs.solace.com/SolOS-Event-Reference/SolOS-Event-Reference.htm | 2017-04-23T05:21:47 | CC-MAIN-2017-17 | 1492917118477.15 | [] | docs.solace.com |
When working on a diagram, Umbrello UML Modeller will try to guide you by applying some simple rules as to which elements are valid in the different types of diagrams, as well as the relationships that can exist between them. If you are an UML expert you will probably not even notice it, but this will help UML novices create standard-conformant diagrams.
Once you have created your diagrams it is time to start editing them. Here you should notice the (for beginners subtle) difference between editing your diagram, and editing the model. As you already know, Diagrams are views of your model. For example, if you create a class by editing a Class Diagram, you are really editing both, your Diagram and your model. If you change the color or other display options of a Class in your Class Diagram, you are only editing the Diagram, but nothing is changed in your model.
One of the first things you will do when editing a new diagram is to insert elements into them (Classes, Actors, Use Cases, etc.) There is basically two ways of doing this:
Dragging existing elements in your model from the Tree View
Creating new elements in your model and adding them to your diagram at the same time, by using one of the edit Tools in the Work Toolbar
To insert elements that already exist in your model, just drag them from the Tree View and drop them where you want them to be in your diagram. You can always move elements around in your Diagram using the Select Tool
The second way of adding elements to your diagram is by using the Work Toolbar's edit tools (note that this will also add the elements to your model).
The Work Toolbar was by default located on the top of the window. The tools available on this toolbar (the buttons you see on it) change depending on the type of diagram you are currently working on. The button for the currently selected tool is activated in the toolbar. You can switch to the select tool by pressing the Esc key.
When you have selected an edit tool from the Work Toolbar (for example, the tool to insert classes) the mouse pointer changes to a cross, and you can insert the elements in your model by single clicking in your diagram. Note that elements in UML must have a Unique Name. So that if you have a class in one diagram whose name is “ClassA” and then you use the insert Class tool to insert a class into another diagram you cannot name this new class “ClassA” as well. If these two are supposed to be two different elements, you have to give them a unique name. If you are trying to add the same element to your diagram, then the Insert Class is not the right tool for that. You should drag and drop the class from the Tree View instead.
You can delete any element by selecting the option from its context menu.
Again, there is a big difference between removing an object from a diagram, and deleting an object from your model: If you delete an object from within a diagram, you are only removing the object from that particular diagram: the element will still be part of your model and if there are other diagrams using the same element they will not suffer any change. If, on the other hand, you delete the element from the Tree View, you are actually deleting the element from your model. Since the element no longer exist in your model, it will be automatically removed from all the diagrams it appears in.
You can edit most of the UML elements in your model and diagrams by opening its Properties dialog and selecting the appropriate options. To edit the properties of an object, select from its context menu ( mouse button click). Each element has a dialog consisting of several pages where you can configure the options corresponding to that element. For some elements, like actors you can only set a couple of options, like the object name and documentation, while for other elements, like classes, you can edit its attributes and operations, select what you want to be shown in the diagram (whole operation signature or just operation names, etc) and even the colors you want to use for the line and fill of the class' representation on the diagram.
For UML elements you can also open the properties dialog by double clicking on it if you are using the selection tool (arrow).
Note that you can also select the properties option from the context menu of the elements in the Tree View. This allows you to also edit the properties for the diagrams, like setting whether the grid should be shown or not.
Even though editing the properties of all objects was already covered in the previous section, classes deserve a special section because they are a bit more complicated and have more options than most of the other UML elements.
In the properties dialog for a class you can set everything, from the color it uses to the operations and attributes it has.
The General Settings page of the properties dialog is self-explanatory. Here you can change the class' name, visibility, documentation, etc. This page is always available.
In the Attributes Settings page you can add, edit, or delete attributes (variables) of the class. You can move attributes up and down the list by pressing the arrow button on the side. This page is always available.
Similar to the Attribute Settings Page, in the Operation Settings Page you can add, edit, or remove operations for your class. When adding or editing an operation, you enter the basic data in the Operation Properties dialog. If you want to add parameters to your operation you need to click the button, which will show the Parameter Properties dialog. This page is always available
This page allows you to add class templates which are unspecified classes or datatypes. In Java 1.5 these will be called Generics.
The Class Associations page shows all the associations of this class in the current diagram. Double clicking on an association shows its properties, and depending on the type of association you may modify some parameters here such as setting multiplicity and Role name. If the association does not allow such options to be modified, the Association Properties dialog is read-only and you can only modify the documentation associated with this association.
This page is only available if you open the Class Properties from within a diagram. If you select the class properties from the context menu in the Tree View this page is not available.
In the Display Options page, you can set what is to be shown in the diagram. A class can be shown as only one rectangle with the class name in it (useful if you have many classes in your diagram, or are for the moment not interested in the details of each class) or as complete as showing packages, stereotypes, and attributes and operations with full signature and visibility
Depending on the amount of information you want to see you can select the corresponding options in this page. The changes you make here are only display options for the diagram. This means that “hiding” a class' operations only makes them not to be shown in the diagram, but the operation are still there as part of your model. This option is only available if you select the class properties from within a Diagram. If you open the class properties from the Tree View this page is missing since such Display Options do not make sense in that case
Associations relate two UML objects to each other. Normally associations are defined between two classes, but some types of associations can also exists between use cases and actors.
To create an association select the appropriate tool from the Work Toolbar (generic Association, Generalization, Aggregation, etc.) and single click on the first element participating in the association and then single click on the second item participating. Note that those are two clicks, one on each on the objects participating in the association, it is not a drag from one object to the other.
If you try to use an association in a way against the UML specification Umbrello UML Modeller will refuse to create the association and you will get an error message. This would be the case if, for example, a Generalization exists from class A to class B and then you try to create another Generalization from Class B to class A
option from this context menu. You can also select the option and, depending on the association type edit attributes such as roles and multiplicity.clicking on an association will show a context menu with the actions you can apply on it. If you need to delete an association simply select the
Associations are drawn, by default, as a straight line connecting the two objects in the diagram.
You can add anchor points to bend an association byclicking some where along the association line. This will insert an anchor point (displayed as a blue point when the association line is selected) which you can move around to give shape to the association
If you need to remove an anchor point,click on it again to remove it
Note that the only way to edit the properties of an association is through the context menu. If you try toclick on it as with other UML objects, this will only insert an anchor point.
Notes, Lines Of Text and Boxes are elements that can be present in any type of diagram and have no real semantic value, but are very helpful to add extra comments or explanations that can make your diagram easier to understand.
To add a Note or a Line Of Text, select the corresponding tool from the Work Toolbar and single click on the diagram where you want to put your comment. You can edit the text by opening the element through its context menu or in the case of notes byclicking on them as well.
Anchors are used to link a text note and another UML Element together. For example, you normally use a text note to explain or make some comment about a class or a particular association, in which case you can use the anchor to make it clear that the note “belongs” to that particular element.
To add an anchor between a note and another UML element, use the anchor tool from the work toolbar. You first need to click on the note and then click on the UML element you want the note to be linked to. | https://docs.kde.org/stable4/en/kdesdk/umbrello/edit-diagram.html | 2017-04-23T05:37:07 | CC-MAIN-2017-17 | 1492917118477.15 | [array(['/stable4/common/top-kde.jpg', None], dtype=object)] | docs.kde.org |
Note: Most user interface tasks can be performed in Edge Classic or the New Edge experience. For an overview, getting started topics, and release notes specific to the New Edge experience, see the docs.
Update a Config Set
PUT
Update a Config Set
Updates an Apigee Test Config Set by ID. (To get the IDs of your Config Sets, use List Config Sets.)
If your Config Set contains a lot of variables, consider backing it up before updating it. To back it up, Get a Config Set and save the response as a file.
In the payload, you must supply at least the name and one variable, as shown in the sample payload.
- Rename the Config Set: Enter an updated name in the payload.If you want to leave the existing variables as they are, be sure to include all existing variable keys and values in the payload.
- Replace the value of a variable: Enter the existing
keyname and enter a different
valuefor the variable.
- Change or delete variables: The variables you include in the payload become the variables in the Config Set. To keep existing variables as is, be sure to include them in the payload.
Resource URL /organizations/{org_name}/configsets/{config_set_id}
Header Parameters?) | http://docs.apigee.com/apigee-test/apis/put/organizations/%7Borg_name%7D/configsets/%7Bconfig_set_id%7D | 2017-04-23T05:42:13 | CC-MAIN-2017-17 | 1492917118477.15 | [] | docs.apigee.com |
To add individual Investments to your Investment Account, navigate to the view, select the tab, and choose the account where the investment is held from the Select Account drop-down box.
Right-click the mouse in the empty space in the view. This brings up the context menu. Choose from this menu. This launches the New Investment Wizard which you use to create your new Investment.. | https://docs.kde.org/stable4/en/extragear-office/kmymoney/details.investments.securities.html | 2017-04-23T05:33:00 | CC-MAIN-2017-17 | 1492917118477.15 | [array(['/stable4/common/top-kde.jpg', None], dtype=object)
array(['investments_summarytab.png', 'Investment View, Equities Tab'],
dtype=object) ] | docs.kde.org |
Steps
In the workflow graph, click the connector between the Start and End nodes, then click the + icon.
Click the Sqoop icon to add another Sqoop action node to the workflow.
This Sqoop action will be used to load the transformed data to a specified location.
Click the Sqoop node in the workflow graph and rename it using a descriptive name.
For example, name the node
sqoop-load.
This is necessary because there will be two Sqoop actions in this workflow, and each node in a workflow must have a unique name. Having descriptive node names is also helpful when identifying what a node is intended to do, especially in more complicated workflows.
Click the Sqoop node again and then click the Action Settings gear icon.
In the Sqoop action dialog box, select Command.
In the Command field, enter a command to extract data.
For example:
export --connect jdbc:mysql://wfmgr-5.openstacklocal/customer-data --username wfm --password-file /user/wfm/.password --table exported --input-fields-terminated-by "\001" --export-dir /usr/output/marketing/customer_id
The password for user wfm is called from a password file.
In the Advanced Properties section, browse to the directory that contains the Hive and Tez configuration files you copied into a
libdirectory and add those resources to the File fields.
For example:
/user/wfm/oozie/apps/lib/lib_$TIMESTAMP/hive/hive-conf.xml
/user/wfm/oozie/apps/lib/lib_$TIMESTAMP/tez/tez-conf.xml
In the Prepare section, select delete, and then browse for or type the path to be deleted.
Selecting delete ensures that if a job is interrupted prior to completion, any files that were created will be deleted prior to re-executing the job, otherwise the rerun cannot complete.
You can optionally include the
deleteoption in the Command field.
Use the default settings for the remaining fields and options.
Click Save and close the dialog box.
More Information | https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.2/bk_workflow-management/content/create_sqoop_load_action.html | 2018-04-19T11:39:48 | CC-MAIN-2018-17 | 1524125936914.5 | [] | docs.hortonworks.com |
As with all other Ontolica web parts, you can configure the Best Bets Best Bets Web Part configuration page
The various settings provided here are described in the sections below.
Feedback
Thanks for your feedback.
Post your comment on this topic. | http://docs.surfray.com/ontolica-search-preview/1/en/topic/finding-the-best-bets-configuration-settings | 2018-04-19T11:40:58 | CC-MAIN-2018-17 | 1524125936914.5 | [array(['http://manula.r.sizr.io/large/user/760/img/using-the-search-result-actions-toolbar-129.png',
None], dtype=object) ] | docs.surfray.com |
Specifying the alias (alternative name) for an environment or a separate node can greatly facilitate the process of their management. It clarifies which item you are working with, so you’ll never make a mistake while choosing the environment/node that needs to be adjusted.
This ability is especially useful while working with numerous nodes of the same type, possibly due to the multi nodes feature. Let’s consider this on the example of defining the master and slave nodes in a DB cluster.
1. Select the necessary environment with a set of same-type nodes and expand the instances list:
2. Choose the node you would like to add the label for and click the Set Alias pencil pictogram next to it (or simply double-click on the Node ID: xxx string). Whatever you enter into the appeared input field, the value will be automatically saved.
3. In the same way you can add a label for a whole environment (wherein the domain name will remain the same).
Such a custom name will define the corresponding item in all the appropriate lists:
at the dashboard
Also, these labels are visible for other users in collaboration and remain attached after environment’s cloning, transferring, etc.
Deleting the alias name anytime will return the default value. | https://docs.jelastic.com/ru/environment-aliases | 2018-04-19T11:51:18 | CC-MAIN-2018-17 | 1524125936914.5 | [array(['https://download.jelastic.com/index.php/apps/files_sharing/publicpreview?file=%2F%2Finstaces-list.png&x=1920&a=true&t=d6d7b4c404e4aa4399cd14c3a720711e&scalingup=0',
None], dtype=object)
array(['https://download.jelastic.com/index.php/apps/files_sharing/publicpreview?file=%2F%2Fset%20alias.png&x=1920&a=true&t=d6d7b4c404e4aa4399cd14c3a720711e&scalingup=0',
None], dtype=object)
array(['https://download.jelastic.com/index.php/apps/files_sharing/publicpreview?file=%2F%2Fenv%20label.png&x=1920&a=true&t=d6d7b4c404e4aa4399cd14c3a720711e&scalingup=0',
None], dtype=object)
array(['https://download.jelastic.com/index.php/apps/files_sharing/publicpreview?file=%2F%2Fdeploy%20to.png&x=1920&a=true&t=d6d7b4c404e4aa4399cd14c3a720711e&scalingup=0',
None], dtype=object)
array(['https://download.jelastic.com/index.php/apps/files_sharing/publicpreview?file=%2F%2Fssh%20env.png&x=1920&a=true&t=d6d7b4c404e4aa4399cd14c3a720711e&scalingup=0',
None], dtype=object)
array(['https://download.jelastic.com/index.php/apps/files_sharing/publicpreview?file=%2F%2Fssh%20nodes.png&x=1920&a=true&t=d6d7b4c404e4aa4399cd14c3a720711e&scalingup=0',
None], dtype=object) ] | docs.jelastic.com |
Protocol: SC.DragSourceProtocol
The
SC.DragSourceProtocol protocol defines the properties and methods that you may implement in
your drag source objects in order to access additional functionality of SproutCore's drag support.
If you implement the
SC.DragSourceProtocol protocol on your drag's source, it will receive a
series of callbacks throughout the course of the drag, and be consulted about what operations to
allow on a particular candidate drop target. Note that when you initiate a drag you must also
provide an object implementing
SC.DragDataSourceProtocol, which includes some required
methods. A single object may serve as both the drag's source and its data source.
Note: Do not mix
SC.DragSourceProtocol into your classes. As a protocol, it exists only for
reference sake. You only need define any of the properties or methods listed below in order to use
this protocol.*
Defined in: drag_source_protocol.js
Field Summary
- SC.DragSourceProtocol.ignoreModifierKeysWhileDragging
Class Methods
- dragDidBegin(drag, loc)
- dragDidCancel(drag, loc, op)
- dragDidEnd(drag, loc, op)
- dragDidMove(drag, loc)
- dragDidSucceed(drag, loc, op)
- dragSlideBackDidEnd(drag)
- dragSourceOperationMaskFor(drag, dropTarget)
Field DetailSC.DragSourceProtocol.ignoreModifierKeysWhileDragging Boolean
If this property is set to
NO or is not implemented, then the user may
modify the drag operation by changing the modifier keys they have
pressed.
- Default value:
- NO
Class Method Detail
This method is called when the drag begins. You can use this to do any visual highlighting to indicate that the receiver is the source of the drag.
This method is called if the drag ends without being handled, or if a drop
target handles it but returns
SC.DRAG_NONE.
This method is called when the drag ended, regardless of whether it succeeded or not. You can use this to do any cleanup.
This method is called whenever the drag image is moved. This is
similar to the
dragUpdated() method called on drop targets.
This method is called if the drag ends and is successfully handled by a
drop target (i.e. the drop target returns any operation other than
SC.DRAG_NONE).
If a drag is canceled or not handled, and has its
slideBack property set
to
YES, then the drag's ghost view will slide back to its initial location.
dragDidEnd is called immediately upon
mouseUp;
dragSlideBackDidEnd is called
after the slide-back animation completes.
Return a bitwise OR'd mask of the drag operations allowed on the specified target. If you don't care about the target, just return a constant value. If a drag's source does not implement this method, it will assume that any drag operation (SC.DRAG_ANY) is allowed. | http://docs.sproutcore.com/symbols/SC.DragSourceProtocol.html | 2018-04-19T11:52:38 | CC-MAIN-2018-17 | 1524125936914.5 | [] | docs.sproutcore.com |
An Act to renumber 100.335 (2) and 100.335 (3); to renumber and amend 100.335 (1); to amend 100.335 (title), 100.335 (4) (b), 100.335 (4) (c), 100.335 (4) (d), 100.335 (5) and 100.335 (6); and to create 100.335 (1) (b), 100.335 (2) (title), 100.335 (3m), 100.335 (4) (title) and 100.335 (7) (title) of the statutes; Relating to: manufacture and sale of food and beverage containers that contain bisphenol A and providing penalties. (FE) | https://docs.legis.wisconsin.gov/2013/proposals/ab607 | 2016-02-06T03:03:40 | CC-MAIN-2016-07 | 1454701145751.1 | [] | docs.legis.wisconsin.gov |
API.
When an app makes a request to your API, the app must supply a valid key. At runtime, the Verify API Key policy checks that the supplied API key:
- Is valid
- Hasn't been revoked
- this first.
-", called the Path, along with the HTTP verb, GET, used to access the API proxy. While you define the conditional flow in this step, you add the processing steps specific to the conditional flow later in the tutorial.
- In the main menu of the management UI, click APIs to display the API Proxies page. If the API Platform page is not open, click here.
- Click weatherapikey in the API Proxies table.
- Click the Develop tab in the upper right of the API proxy page.
- Click the "+" sign to the right of default under Proxy Endpoints to add a new conditional flow.
- In the New Conditional Flow dialog box:
- Enter forecast for the Flow Name.
- Enter weather conditional flow for the Description.
- Select Path and Verb for the Condition Type.
- Enter
/forecastrssas the Path.
- Choose GET for the Verb.
- Leave the Optional Target URL area blank.
- Click Add.
- Click Save in the upper-left corner to save your changes to the API proxy. Your conditional flow is now added to the.
- Key-path}?: Send Docs Feedback
(Incorrect? Unclear? Broken link? Typo?) | http://docs.apigee.com/api-services/tutorials/secure-calls-your-api-through-api-key-validation?rate=PLjCPr3NHV_7FtdqZ0hmkI_7Z-gk7zrrt4MO0KO7oUQ | 2016-02-06T03:22:10 | CC-MAIN-2016-07 | 1454701145751.1 | [] | docs.apigee.com |
Changes related to "J1.5:Developing a MVC Component/Creating an Administrator Interface"
← J1.5:Developing a MVC Component/Creating an Administrator Interface
This is a list of changes made recently to pages linked from a specified page (or to members of a specified category). Pages on your watchlist are bold.
No changes during the given period matching these criteria. | https://docs.joomla.org/index.php?title=Special:RecentChangesLinked&from=20130923123446&target=J1.5%3ADeveloping_a_MVC_Component%2FCreating_an_Administrator_Interface | 2016-02-06T04:13:47 | CC-MAIN-2016-07 | 1454701145751.1 | [] | docs.joomla.org |
This.
Construct a new AES key schedule using the specified key data and the given initialization vector. The initialization vector is not used with ECB mode but is important for CBC mode. See MODES OF OPERATION for details about cipher modes.
Use a prepared key acquired by calling Init to encrypt the provided data. The data argument should be a binary array that is a multiple of the AES block size of 16. This is the default mode of operation for this module.
%
"Advanced Encryption Standard", Federal Information Processing Standards Publication 197, 2001 ()
This document, and the package it describes, will undoubtedly contain bugs and other problems. Please report such in the category aes of the Tcllib Trackers. Please also report any ideas for enhancements you may have for either package and/or documentation.
<[email protected]> | http://docs.activestate.com/activetcl/8.6/tcllib/aes/aes.html | 2016-02-06T02:44:28 | CC-MAIN-2016-07 | 1454701145751.1 | [] | docs.activestate.com |
JDatabaseQueryMySQLi::unlock
From Joomla! Documentation
Revision as of::unlock
Description
Method to unlock the database table for writing.
Description:JDatabaseQueryMySQLi::unlock [Edit Descripton]
public function unlock (&$db)
- Returns boolean True on success.
- Defined on line 512 of libraries/joomla/database/database/mysqliquery.php
- Since
See also
JDatabaseQueryMySQLi::unlock source code on BitBucket
Class JDatabaseQueryMySQLi
Subpackage Database
- Other versions of JDatabaseQueryMySQLi::unlock
SeeAlso:JDatabaseQueryMySQLi::unlock [Edit See Also]
User contributed notes
<CodeExamplesForm /> | https://docs.joomla.org/index.php?title=API17:JDatabaseQueryMySQLi::unlock&direction=next&oldid=56457 | 2016-02-06T03:22:33 | CC-MAIN-2016-07 | 1454701145751.1 | [] | docs.joomla.org |
Difference between revisions of "Marketing Working Group"
From Joomla! Documentation
Revision as of 23:57, 15 May 2014
Connect with your local JUG
Depending on where you live, there may be a good chance that there are people who love Joomla and are already active nearby. Check the events.joomla.org to get in contact with your local JUG..
Want to make a bigger impact?
Join the Marketing Team. | https://docs.joomla.org/index.php?title=Marketing_Working_Group&diff=prev&oldid=118202 | 2016-02-06T04:13:02 | CC-MAIN-2016-07 | 1454701145751.1 | [] | docs.joomla.org |
, you can define your own counters. Some of the things that you might track with a user-defined counter are:
- How many times people click on the help button in your application.
- How many times your game is played each day.
- How many times your banner ads are clicked each day.
Help or comments?
- If something's not working: Ask the Apigee Community or see Apigee Support.
- If something's wrong with the docs: Send Docs Feedback
(Incorrect? Unclear? Broken link? Typo?) | http://docs.apigee.com/app-services/content/events-and-counters | 2016-02-06T02:51:51 | CC-MAIN-2016-07 | 1454701145751.1 | [] | docs.apigee.com |
Revision history of "JAuthentication:: construct/11.1"
View logs for this page
There is no edit history for this page.
This page has been deleted. The deletion and move log for the page are provided below for reference.
- 13:44, 10 May 2013 JoomlaWikiBot (Talk | contribs) moved page JAuthentication:: construct/11.1 to API17:JAuthentication:: construct without leaving a redirect (Robot: Moved page) | https://docs.joomla.org/index.php?title=JAuthentication::_construct/11.1&action=history | 2016-02-06T03:22:04 | CC-MAIN-2016-07 | 1454701145751.1 | [] | docs.joomla.org |
Revision history of "Developing a MVC Component/Adding categories"
View logs for this page
There is no edit history for this page.
This page has been deleted. The deletion and move log for the page are provided below for reference.
- 06:30, 10 July 2015 FuzzyBot (Talk | contribs) moved page J3.x:Developing a MVC Component/Adding categories to J3.x:Developing an MVC Component/Adding categories without leaving a redirect (Part of translatable page "J3.x:Developing a MVC Component".) | https://docs.joomla.org/index.php?title=J3.2:Developing_a_MVC_Component/Adding_categories&action=history | 2016-02-06T04:04:38 | CC-MAIN-2016-07 | 1454701145751.1 | [] | docs.joomla.org |
Tuple Types
A tuple is a well-defined group of values of specific types that can be handled together as a single grouped value, and also be taken apart into their individual values easily. Tuples provide a more lightweight way to group related values without the need of declaring, for example, an explicit Record type.
A tuple type is expressed with the
tuple of keywords, followed by a list of two or more types (since a tuple of just one value makes very little sense).
method ExtractValues(s: String): tuple of (String, Integer);
The method declared above would return a tuple consisting of a String and an Integer.
A tuple value can be constructed simply by providing a matching set of values surrounded by parentheses. The following
result assignment would work for the above method.
result := ("A String", 5);
Tuple values can be assigned in whole or as their individual parts, both when assigning from a tuple or to one:
var t: tuple of (String, Int); var s: String := "Hello" var i: Integer := 5; t := (s, i); // assigning individual values to a tuple var u := t; // assigning one tuple to another (s, i) := ExtractValues("Test"); // assigning a tuple back to individual elements
Extracting a tuple back to individual items can even be combined with a
var Statement, to declare new variables for the items:
var t := ExtractValues("Test"); var (a, b) := ExtractValues("Test"); // assigning a tuple back to individual elements
Here, three new variables are declared. For the first call,
t is declared as new tuple variable, so far so unusual. For the second call though, two new variables
a and
b are declared, and the tuple is automatically taken apart, so that
a would hold the String value and
b the Integer.
Tuples and Discardable
Tuple extraction can also be combined with a [Discardable] Expression(../Expressions/Discardable). If only some of the values of a tuple are of interest, the
nil keyword can be provided in place of the items that are not of interest, and the will be discarded.
var (FirstName, nil, Age) := GetFirstNameLastNameAndAge();
Here, assuming that
GetFirstNameLastNameAndAge returns a tuple of three values of information about a person, but only two variables are declared, for the
FirstName and
Age, the middle value of the tuple is simply discarded.
Accessing Individual Tuple Items
Instead of extracting the whole tuple, individual values inside a tuple can also be accessed directly, with the Indexer Expression:
var Info := GetFirstNameLastNameAndAge(); writeLn($"{Info[0]} is {Info[2]" years old".)
While in syntax this access looks like an array access, the access to to each item of the tuple is strongly typed, so
Info[0] is treated as a String, and
Info[2] as an Integer, for this example. For this reason, a tuple can only be indexed with a constant index.
Named Tuples
Tuples can optionally be defined to provide names for their values. Either all or none of the values need to have a name, a tuple cannot be "partially named". A named tuple can be initialized with a tuple literal with or without names.
var Person: tuple of (Name: String, Age: Integer); Person := (Name := "Peter", Age := 25); Person := ("Paul", 37);
In a named tuple, individual items can be accessed both via index as outlined above, and via name:
writeLn($"{Person.Name} is {Person[1]" years old".)
Named and unnamed tuples (and tuples with mismatched names) are assignment compatible, as long as the types of the tuple items matches.
var Person: tuple of (Name: String, Age: Integer); var Person2: tuple of (String, Integer); Person := Person2; Person2 := Person;
See Also
- Discardable Expression | https://docs.elementscompiler.com/Oxygene/Types/Tuples/ | 2022-09-24T21:52:14 | CC-MAIN-2022-40 | 1664030333541.98 | [] | docs.elementscompiler.com |
Range Slider Fieldtype
The Range Slider fieldtype allows the user to select numerical value. It is rendered as
range HTML input type with some additional styling, allowing users to precisely select the value.
The Range Slider fieldtype shows two sliders on the same scale, allowing to select a range of numbers (from…to).
> Range Slider can be rendered as a single template tag as well as tag pair.
Single tag
{my_range_slider_field}
The output would be similar to
12 - 43
Single Tag Parameters:
decimal_place="2"
The number of decimal digits to show after the number
prefix="yes"
Include prefix before the value, as specified in field settings
suffix="yes"
Single Tag Modifiers
{my_range_slider_field:min}
Field minimal possible value, as specified in settings.
{my_range_slider_field:max}
Field maximal possible value, as specified in settings.
{my_range_slider_field:prefix}
Field prefix, as specified in settings.
{my_range_slider_field:suffix}
Field suffix, as specified in settings.
{my_range_slider_field:from}
First range slider value.
{my_range_slider_field:to}
Second range slider value.
Tag pair
{my_range_slider_field} between {from} and {to} {/my_range_slider_field}
The output would be similar to
between 12 and 43
Tag Pair Parameters:
decimal_place="2"
The number of decimal digits to show after the number
prefix="yes"
Include prefix before the value, as specified in field settings
suffix="yes"
Tag Pair Variables:
{from}
First range slider value.
{to}
Second range slider value. | https://docs.expressionengine.com/latest/fieldtypes/range-slider.html | 2022-09-24T21:48:49 | CC-MAIN-2022-40 | 1664030333541.98 | [array(['../_images/field_slider.png', 'slider field'], dtype=object)] | docs.expressionengine.com |
# Configurator tutorial
This tutorial aims to be a simple copy&paste tutorial. It's not intended for production code (browser compatibility etc. is not considered). This tutorial should just show you how the building blocks fit together and how you can use them. Adding complicated build pipelines, writing very generic code, browser fallbacks etc. would only distract the reader. So keep it simple and please adapt the code snippets to your real-world needs.
To get a better big picture and overview of the Roomle ecosystem, we recommend reading the getting started section. Nevertheless, it's possible to read only the tutorial to implement your first app which uses the Roomle Web SDK.
# Planning your app
First, it's important to get an idea of where and how you want to use the parts of our SDK in your app. You can combine our SDK with every framework and project setup you want, so you are not limited to something specific. The only thing we recommend is the use of TypeScript, but since our package is distributed as an ES6 module it should also work with plain JavaScript. A small disclaimer here: we really believe in the benefits of TypeScript, so we do not extensively test the JavaScript-only variant.
In this tutorial we will create a web app which shows a piece of furniture on the right side; on the left side we will see the parameters, addons and a button to show the part list including a perspective image of the current configuration. The mock-up of the app we are going to build looks like:
# Mockups
This is how the page could look on initial load. The big image placeholder in the middle will be replaced by the canvas in which the 3D scene takes place.
When the user clicks on checkout we want to give her or him an overview of the current configuration. For the sake of simplicity we just do this in a modal.
# Example repo
You can follow the progress of the tutorial in the example repo here. To give you a working example we use some build tools. For simplicity we use Rollup.js with rollup-plugin-typescript for transpiling our TypeScript code and rollup-plugin-node-resolve to include libraries from the node_modules folder. We also need rollup-plugin-copy to copy the Roomle assets into a place where they are accessible via the browser (see assets). We do not use any tslint or transpiling magic because it's not the aim of the example repo to provide production code. We want to highlight the building blocks you need and not draw your attention to certain code quality tools. This does not mean you shouldn't use them in your production app. On the contrary, we strongly recommend including such quality assurance tools in your development workflow!
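To sketch how those pieces fit together, a minimal rollup.config.js could look roughly like the one below. The entry point, output path and the asset source path are assumptions and not taken from the example repo, so adjust them to your project layout:

import typescript from 'rollup-plugin-typescript';
import resolve from 'rollup-plugin-node-resolve';
import copy from 'rollup-plugin-copy';

export default {
  input: 'src/index.ts',                 // entry point (assumed)
  output: { file: 'dist/bundle.js', format: 'es' },
  plugins: [
    resolve(),                           // pull dependencies from node_modules
    typescript(),                        // transpile the TypeScript sources
    copy({
      // copy the Roomle assets next to the bundle so the browser can load them (path assumed)
      targets: [{ src: 'node_modules/@roomle/web-sdk/**/assets/*', dest: 'dist/assets' }]
    })
  ]
};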
So let's get started with:
git clone [email protected]:Roomle/web-sdk-configurator-example.git roomle-web-sdk-configurator-example cd roomle-web-sdk-configurator-example git checkout chapters/create-your-first-app npm install # how to see results from chapter one see on the next page ;-) | https://docs.roomle.com/web/guides/tutorial/configurator/ | 2022-09-24T23:40:23 | CC-MAIN-2022-40 | 1664030333541.98 | [] | docs.roomle.com |
An abbreviation for Structured Query Language. SQLstream uses a slightly modified version of SQL called streaming SQL. The type of SQL that runs in s-Server is called streaming SQL. s-Server’s streaming SQL is described in the SQLstream Streaming SQL Reference Guide.
SQLstream’s main enhancement to the SQL standard concerns the STREAM object. The process of creating streams in streaming SQL is similar to the process of creating tables in a database system like PostGres or Oracle. Like database tables, streams have columns with column types. Once you create a stream, you can query it with a SELECT statement, or insert into it using an INSERT statement. | https://docs.sqlstream.com/glossary/sql/ | 2022-09-24T23:06:18 | CC-MAIN-2022-40 | 1664030333541.98 | [] | docs.sqlstream.com |
Audit Event Logs
last edited on: Sep. 14, 2022
Audit events track important activity within the system including file events, permission changes and configuration information. Events may be initiated by users, or they may be generated from system events including background tasks and synchronization events.
The Audit logs are available when logged in as the tenant Admin from the Audit Reports section of the Admin options.
Audit logs can be filtered, archived, and/or exported.
Audit logs capture information that is specific to a tenant user, as well as file sharing information such as the remote IP address of users accessing file shares. System tasks can also be captured by the audit event logs, depending on the granularity that has been set. Audit events that have an IP address of 1.1.1.1 are system-generated events, which may or may not be based on user interaction.
If you want to enable the Audit logs to be accessible from the base OS then you can configure the logs to be output to syslog and they will be available in both places.
To enable audit logs see step 4.
To view and export audit logs see step 5.
Writing Audit Event Logs to syslog
Step 1 - Appliance Admin Setting
syslog is a standard for message logging. It allows separation of the software that generates messages and is often used from a software perspective for security audit logging. Such messages can subsequently be integrated into log aggregation tools such as Splunk.
Splunk is widely used among enterprise security teams for breach investigations. Enabling syslog provides the ability to feed audit events into Splunk, enabling companies to evaluate potential data breaches through the same means they use to investigate issues with other internally used applications and/or services.
The syslog functionality can be enabled by logging in as appladmin, going to Site Functionality and setting “Enable write audit events to syslog” to yes.
Step 2 - Organization Admin Setting
Login as org admin to your account and from the Organization Menu go to Policies > Security and set “Write Audit Events to syslog file:” to yes
The audit logs now will be written to
/var/log/messages in the appliance
Sending syslog Entries to rsyslog Service
Appliance:
SSH in as smeconfiguser and then su to root. Edit /etc/rsyslog.conf and at the bottom of the file add the line:
*.* @IP_OF_REMOTE_SYSLOG
Restart the syslog service:
systemctl restart rsyslog
The logs will be sent using UDP protocol and by default port 514 is used.
Install rsyslog:
If you have not already done so, you will need to install and configure rsyslog on a separate machine; for installation instructions, please see the rsyslog documentation.
Appliance Versions Support Policy
Last Updated Sept. 29, 2020
As features are added or issues resolved the appliance version number changes. This document explains how we number appliance versions, what versions are eligible for fixes and improvements under our standard support agreement, and how customers may be able to receive improvements for older versions.
See also File Fabric Versions Support Status.
Version Numbers
- Each release has a major version number and a minor version number separated by a period, for example: 1803.03
- The first release of a major version usually has a minor number of “00”, in which case we may refer to the version without its minor number, for example: 1803
New Version Packaging
- The first release of a major version is provided as VM images for a variety of hypervisors, and usually as an update that can be applied to older versions.
- In some cases, however, to upgrade to a new major version it may be necessary to deploy a new VM and migrate from the old version to the new version.
- Minor versions for a major version that has already been released are provided only as updates that can be applied to any previous minor version of that major version.
- We sometimes refer to updates from one minor version of a major version to another minor version of the same major version as “service packs”.
Patches
- Patches are sometimes created between versions for specific customers.
- In some cases they are numbered with an identifier that is separated from the minor version number by a dash, for example: 1803.03-SME-2345
- In other cases patches are identified by a name referring to the customer for which the patch was created and/or the nature of the patch, for example: 1803.03 CloudBusters
- Every patch that has been issued since a version was released is incorporated into the next release unless timing prohibits the inclusion of a patch, in which case it will be included in the following release.
Fixes and Enhancements Under Standard Support
For customers who have purchased our standard support agreement, we provide software improvements as follows:
- Standard support provides support for the last two major releases i.e. the current major release (and minor releases therein) and the one preceding it (and minor releases therein).
- Patches are provided for what are deemed critical customer issues.
- Where an issue is not deemed critical it may not be addressed until a future minor or major release.
- Enhancements are driven by our product roadmap and often include changes requested by customers.
- Enhancements may be released in major or minor versions or occasionally in customer-specific patches.
Upgrading
A self-service procedure is provided for most upgrades, with notification being provided if this is not the case. The procedure for upgrading to each new version is certified to work for the two major versions that were under support just prior to the new version's release. For example, if versions 1906 and 2006 were the supported versions when version 2101 was released, then the upgrade to version 2101 would be certified for versions 1906 and 2006 (including their minor versions). Upgrades from older versions are not certified and may require paid professional services from SME.
Support for Older Versions
SME may be able to provide issue resolutions and/or functional enhancements for versions that fall outside of the policy described in the previous section on a professional services basis. Customers interested in such an arrangement should contact their SME sales representative. | https://docs.storagemadeeasy.com/cloudappliance/versionsupport | 2022-09-24T22:59:54 | CC-MAIN-2022-40 | 1664030333541.98 | [] | docs.storagemadeeasy.com |
LVM Operation
LVM is currently the standard volume management product included with all of the major Linux distributions. LVM allows multiple physical disks and/or disk partitions to be grouped together into entities known as volume groups. Volume groups may then be divided or partitioned into logical volumes. Logical volumes are accessed as regular block devices and as such may be used by file systems or any application that can operate directly with a block device.
Logical volume managers are principally used to simplify storage management. Logical volumes can be resized dynamically as storage requirements change, and volume groups and logical volumes can be sensibly named with identifiers chosen by the administrator rather than physical disk or partition names such as sda or sdc1.
The following diagram shows the relationship of the LVM entities. File systems or applications use logical volumes. Logical volumes are created by partitioning volume groups. Volume groups consist of the aggregation of one or more physical disk partitions or disks.
Figure 1: Logical Volume Manager Entity Relationships
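For example, the standard LVM command-line tools create these entities from the bottom up; the device names and sizes below are purely illustrative:

# aggregate two disk partitions into a volume group
pvcreate /dev/sdb1 /dev/sdc1
vgcreate datavg /dev/sdb1 /dev/sdc1

# partition the volume group into a logical volume and put a file system on it
lvcreate -L 20G -n datalv datavg
mkfs.xfs /dev/datavg/datalv

# resize the logical volume later as storage requirements change
lvextend -L +10G /dev/datavg/datalv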
SPS for Linux LVM Recovery Kit
The SPS LVM Recovery Kit provides the support needed to allow other SPS recovery kits to operate properly on top of Linux logical volumes. To accomplish this support, the LVM Recovery Kit installs two new resource types: lvmlv and lvmvg which correspond to logical volumes and volume groups respectively. The lvmlv and lvmvg resources exist solely for internal use so that other SPS resources can operate.
As shown in Figure 1, each volume group has one or more logical volumes that depend on it. Conversely, each logical volume must have a volume group on which it depends. A typical SPS hierarchy containing these two LVM resources looks much like the relationships shown in Figure 1. Refer to Figure 2 in the SPS LVM Hierarchy Creation and Administration section for an example of an actual SPS hierarchy.
The LVM Recovery Kit uses the commands provided by the lvm package to manage the volume group and logical volume resources in an SPS hierarchy. Volume groups and logical volumes are configured (or activated) when a hierarchy is being brought in service during a failover or switchover operation and are unconfigured when a hierarchy is being taken out of service.
miniOrange Single Sign-On APIs allow you to integrate SSO quickly and secure access to your applications.
OpenID Connect allows clients of all types, including Web-based, mobile, and JavaScript clients, to request and receive information about authenticated sessions and end-users.
miniOrange Multi-factor Authentication Service provides various types of authentication methods which can be easily configured and used for authentications.
miniOrange User APIs can be used to create, update, get users.
Adaptive Multi-factor uses both device fingerprints and behavioral data to come up with a risk score, based on which you either permit or deny access.
miniOrange Groups APIs can be used to create, update, and get groups.
Providing developers and users access to specific environments is an important part of the application development lifecycle. Environments that support active development, testing, User Acceptance Testing (UAT), and production provide a full range of functionality.
Development environments can be created on servers with and without physical NVDIMMs installed. For servers without physical NVDIMMs, NVDIMM functionality can be emulated using volatile DDR memory. Several methods exist to create environments with emulated NVDIMMs. These are described in the following sections.
Application development using the PMDK can be done using traditional memory-mapped files without emulating NVDIMMs. Such files can exist on any storage media. However, the data consistency assurance embedded within PMDK requires frequent synchronization of data that is being modified. Depending on platform capabilities, and the underlying device where the files are, a different set of commands is used to facilitate synchronization. It might be msync(2) for regular hard drives, or a combination of cache flushing instructions followed by a memory fence instruction for real persistent memory. Calling msync or fsync frequently can cause significant IO performance issues. For this reason, it is not recommended to use this approach for persistent memory application development.
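For reference, the traditional memory-mapped-file approach described above looks roughly like this minimal POSIX sketch (the file path is arbitrary and error handling is omitted for brevity):

#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/mnt/data/file", O_RDWR);                 /* regular file on any storage media */
    char *addr = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    strcpy(addr, "hello");                                    /* modify the mapped data */
    msync(addr, 4096, MS_SYNC);                               /* frequent msync calls hurt I/O performance */

    munmap(addr, 4096);
    close(fd);
    return 0;
}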
The following sections describe how to create development environments using a variety of technologies for different operating systems:
Linux
Windows
Virtualization | https://docs.pmem.io/persistent-memory/getting-started-guide/creating-development-environments | 2020-05-25T00:39:26 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.pmem.io |
The torch touch signal is used to find the top of the workpiece before starting a cut. Generally a switch or sensor is mounted on the Z axis floating head; this input is then used internally by MASSO to automatically offset the Z axis gap from the switch/sensor. This input is used with the G38.2 command.
As the floating head gap between the switch/sensor and the torch tip is different on each machine, you can enter the distance in the F1-Setup screen under the Torch Height Control settings.
By setting this value MASSO will internally apply the offset to automatically position the torch at the touch-off position; this also saves time and avoids confusion when setting the offset values in CAM software.
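As a rough, hypothetical illustration (the distance and feed rate are arbitrary and not MASSO-specific; consult the MASSO documentation for exact usage), a touch-off move generated by CAM might look like:

G38.2 Z-50 F200  ( probe down until the torch touch input triggers )
G0 Z5            ( retract before piercing )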
Below is a simple wiring example showing how to wire a switch. A 5 to 24 VDC signal can be used. | https://docs.masso.com.au/wiring-and-setup/plasma-torch-height-control/torch-touch-floating-head-signal | 2020-05-25T00:54:55 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.masso.com.au |
In order to take full advantage of the Unity animation system and retargeting, you need to have a rigged and skinned humanoid type mesh.

A character Model (a 3D model representation of an object, such as a character, a building, or a piece of furniture) needs a skeletal hierarchy of joints (a rig) and skinning information before it can be animated. You can model, rig, and skin your own character from scratch using a 3D modeling application.

The character should be set up in a T-pose (the pose in which the character has their arms straight out to the sides, forming a "T"), the required pose for the character to be in, in order to make an Avatar (an interface for retargeting animation from one rig to another).

A typical humanoid joint hierarchy:
- HIPS - spine - chest - shoulders - arm - forearm - hand
- HIPS - spine - chest - neck - head
- HIPS - UpLeg - Leg - foot - toe - toe_end
Delete a distribution.
See also: AWS API Documentation
See 'aws help' for descriptions of global parameters.
delete-distribution --id <value> [--if-match <value>] [--cli-input-json <value>] [--generate-cli-skeleton <value>]
--id (string)
The distribution ID.
--if-match (string)
The value of the ETag header that you received when you disabled the distribution. For example: E2QWRUHAPOMQZL .
Example: To delete a CloudFront distribution
The following example deletes the CloudFront distribution with the ID EDFDVBD6EXAMPLE. Before you can delete a distribution, you must disable it. To disable a distribution, use the update-distribution command. For more information, see the update-distribution examples.
When a distribution is disabled, you can delete it. To delete a distribution, you must use the --if-match option to provide the distribution's ETag. To get the ETag, use the get-distribution or get-distribution-config command.
aws cloudfront delete-distribution \ --id EDFDVBD6EXAMPLE \ --if-match E2QWRUHEXAMPLE
When successful, this command has no output. | https://docs.aws.amazon.com/cli/latest/reference/cloudfront/delete-distribution.html | 2020-05-25T01:33:15 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.aws.amazon.com |
Alfresco Content Services allows more than one workflow engine.
The following figure shows the high‐level architecture for workflow.
You can design workflow definitions using a graphical workflow designer that supports BPMN 2.0 or write the XML BPMN 2.0 process definition directly using an XML editor. Many workflow editors support BPMN 2.0 but might not understand some of the features of Alfresco Content Services workflow. We recommend the use of the Activiti eclipse designer plug‐in for Eclipse that is Alfresco Content Services-aware.
You can deploy a workflow using the Alfresco Content Services Workflow Console, or by using a Spring Bean.
Alfresco Content Services allows you to access your own Java Classes through the delegate handler to support automation in your workflows. The following diagram shows these features:
[Figures: Alfresco workflow high-level architecture (wf-arch.jpg) and a detailed diagram of the Alfresco workflow architecture (wf-arch-2.jpg)]
Retrieves a list that describes one or more specified images, if the image names or image ARNs are provided. Otherwise, all images in the account are described.
describe-images [--names <value>] [--arns <value>] [--type <value>] [--cli-input-json <value>] [--starting-token <value>] [--page-size <value>] [--max-items <value>] [--generate-cli-skeleton <value>]
--names (list)
The names of the public or private images to describe.
Syntax:
"string" "string" ...
--arns (list)
The ARNs of the public, private, and shared images to describe.
Syntax:
"string" "string" ...
--type (string)
The type of image (public, private, or shared) to describe.
Possible values:
- PUBLIC
- PRIVATE
-.
Images -> (list)
Information about the images.
(structure)
Describes an image.
Name -> (string)The name of the image.
Arn -> (string)The ARN of the image.
BaseImageArn -> (string)The ARN of the image from which this image was created.
DisplayName -> (string)The image name to display.
State -> (string)The image starts in the PENDING state. If image creation succeeds, the state is AVAILABLE . If image creation fails, the state is FAILED .
Visibility -> (string)Indicates whether the image is public or private.
ImageBuilderSupported -> (boolean)Indicates whether an image builder can be launched from this image.
ImageBuilderName -> (string)The name of the image builder that was used to create the private image. If the image is shared, this value is null.
Platform -> (string)The operating system platform of the image.
Description -> (string)The description to display.
StateChangeReason -> (structure)
The reason why the last state change occurred.
Code -> (string)The state change reason code.
Message -> (string)The state change reason message.
Applications -> (list)
The applications associated with the image.
(structure)
Describes an application in the application catalog.
Name -> (string)The name of the application.
DisplayName -> (string)The application name to display.
IconURL -> (string)The URL for the application icon. This URL might be time-limited.
LaunchPath -> (string)The path to the application executable in the instance.
LaunchParameters -> (string)The arguments that are passed to the application at launch.
Enabled -> (boolean)If there is a problem, the application can be disabled after image creation.
Metadata -> (map)
Additional attributes that describe the application.
key -> (string)
value -> (string)
CreatedTime -> (timestamp)The time the image was created.
PublicBaseImageReleasedDate -> (timestamp)The release date of the public base image. For private images, this date is the release date of the base image from which the image was created.
AppstreamAgentVersion -> (string)The version of the AppStream 2.0 agent to use for instances that are launched from this image.
ImagePermissions -> (structure)
The permissions to provide to the destination AWS account for the specified image.
allowFleet -> (boolean)Indicates whether the image can be used for a fleet.
allowImageBuilder -> (boolean)Indicates whether the image can be used for an image builder.
NextToken -> (string)
The pagination token to use to retrieve the next page of results for this operation. If there are no more pages, this value is null. | https://docs.aws.amazon.com/cli/latest/reference/appstream/describe-images.html | 2020-05-25T02:44:52 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.aws.amazon.com |
Lists up to 100 active Amazon Chime SDK meetings. For more information about the Amazon Chime SDK, see Using the Amazon Chime SDK in the Amazon Chime Developer Guide.
See also: AWS API Documentation
See 'aws help' for descriptions of global parameters.
list-meetings [--next-token <value>] [--max-results <value>] [--cli-input-json <value>] [--generate-cli-skeleton <value>]
--next-token (string)
The token to use to retrieve the next page of results.
--max-results (integer)
The maximum number of results to return in a single call.
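For example, an illustrative invocation that returns at most ten active meetings:

aws chime list-meetings --max-results 10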
Meetings -> (list)
The Amazon Chime SDK meeting information.
(structure)
A meeting created using the Amazon Chime SDK.
MeetingId -> (string)The Amazon Chime SDK meeting ID.
ExternalMeetingId -> (string)The external meeting ID.
MediaPlacement -> (structure)
The media placement for the meeting.
AudioHostUrl -> (string)The audio host URL.
AudioFallbackUrl -> (string)The audio fallback URL.
ScreenDataUrl -> (string)The screen data URL.
ScreenSharingUrl -> (string)The screen sharing URL.
ScreenViewingUrl -> (string)The screen viewing URL.
SignalingUrl -> (string)The signaling URL.
TurnControlUrl -> (string)The turn control URL.
MediaRegion -> (string)The Region in which to create the meeting. Available values: ap-northeast-1 , ap-southeast-1 , ap-southeast-2 , ca-central-1 , eu-central-1 , eu-north-1 , eu-west-1 , eu-west-2 , eu-west-3 , sa-east-1 , us-east-1 , us-east-2 , us-west-1 , us-west-2 .
NextToken -> (string)
The token to use to retrieve the next page of results. | https://docs.aws.amazon.com/cli/latest/reference/chime/list-meetings.html | 2020-05-25T01:22:46 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.aws.amazon.com |
Writes multiple data records into a delivery stream in a single call. The PutRecordBatch response includes a count of failed records, FailedPutCount , and an array of responses, RequestResponses . Even if the PutRecordBatch call succeeds, the value of FailedPutCount may be greater than 0, indicating that there are records for which the operation didn't succeed. Data records sent to Kinesis Data Firehose are stored for 24 hours from the time they are added to a delivery stream as it attempts to send the records to the destination. If the destination is unreachable for more than 24 hours, the data is no longer available.
Warning
Don't concatenate two or more base64 strings to form the data fields of your records. Instead, concatenate the raw data, then perform base64 encoding.
See also: AWS API Documentation
See 'aws help' for descriptions of global parameters.
put-record-batch --delivery-stream-name <value> --records <value> [--cli-input-json <value>] [--generate-cli-skeleton <value>]
--delivery-stream-name (string)
The name of the delivery stream.
--records (list)
One or more records.
Shorthand Syntax:
--records Data1 Data2 Data3
JSON Syntax:
[ { "Data":PutCount -> (integer)
The number of records that might have failed processing. This number might be greater than 0 even if the PutRecordBatch call succeeds. Check FailedPutCount to determine whether there are records that you need to resend.
Encrypted -> (boolean)
Indicates whether server-side encryption (SSE) was enabled during this operation.
RequestResponses -> (list)
The results array. For each record, the index of the response element is the same as the index used in the request array.
(structure)
Contains the result for an individual record from a PutRecordBatch request. If the record is successfully added to your delivery stream, it receives a record ID. If the record fails to be added to your delivery stream, the result includes an error code and an error message.
RecordId -> (string)The ID of the record.
ErrorCode -> (string)The error code for an individual record result.
ErrorMessage -> (string)The error message for an individual record result. | https://docs.aws.amazon.com/cli/latest/reference/firehose/put-record-batch.html | 2020-05-25T02:19:47 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.aws.amazon.com |
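For example, an illustrative invocation of this command; the delivery stream name is made up, and the Data values are base64-encoded payloads:

aws firehose put-record-batch \
    --delivery-stream-name my-delivery-stream \
    --records '[{"Data":"SGVsbG8gd29ybGQ="},{"Data":"Zm9vYmFy"}]'

Check FailedPutCount in the response to determine whether any of the records need to be resent.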
After installing this theme, the most important task is to install the required plugins.
Note: Make sure you activate the main theme, not the child theme; after installing the plugins you can switch to the child theme.
The Bright CPT and Shortcode plugin contains all the shortcodes and custom post types for this theme, and LearnPress is the LMS (Learning Management System) core that we are using for this theme.
Image: Plugin install notice
Image : Install required plugins | http://docs.wpbranch.com/docs/bright/installing-required-plugins/ | 2020-05-25T01:07:08 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.wpbranch.com |
Oh AOS why have you forbidden me
Sometimes when services are trying to authenticate to an AOS in Dynamics 365 for Finance and Operations, both in the Cloud version and the on-premises version, the calling application may receive the error message "forbidden" back from the AOS. This message is deliberately vague, because we don't want a calling application to be able to poke the AOS and learn about how to get in, but unfortunately that vagueness can make it difficult to figure out what is actually wrong, in this post we'll discuss what's happening in the background and how to approach troubleshooting.
Anything which is calling web services could receive this "Forbidden" error - for example an integrated 3rd party application, or Financial Reporting (formerly Management Reporter).
First let's talk about how authentication to Finance and Operations works, there are two major stages to it:
1. Authentication to AAD (in Cloud) or ADFS (in on-premises)- this is happening directly between the caller and AAD/ADFS - the AOS isn't a part of it.
2. Session creation on the AOS - here the caller is giving the token from AAD/ADFS to the AOS, then AOS attempts to create a session.
The "forbidden" error occurs during the 2nd part of the process - when the AOS is attempting to create a new session. The code within the AOS which does this has a few specific cases when it will raise this:
- Empty user SID
- Empty session key
- No such user
- User account disabled
- Cannot load user groups for user
For all of these reasons the AOS is looking at the internal setup of the user in USERINFO table - it's not looking at AAD/ADFS. In a SQL Server based environment (so Tier1 or on-premises) you can run SQL Profiler to capture the query it's running against the USERINFO table and see what it's looking for.
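For example, on a Tier 1 or on-premises environment you could inspect the relevant user row directly with a query along these lines; the column names come from the classic USERINFO schema and the user ID is only an example, so verify both against your own database:

SELECT ID, SID, NETWORKDOMAIN, NETWORKALIAS, ENABLE
FROM dbo.USERINFO
WHERE ID = 'FRServiceUser';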
Examples:
- Financial Reporting (Management reporter) might report "Forbidden" if the FRServiceUser is missing or incorrect in USERINFO. This user is created automatically, but could have been modified by an Administrator when trying to import users into the database.
- When integrating 3rd party applications if the record in "System administration > setup > Azure Active Directory applications" is missing | https://docs.microsoft.com/en-us/archive/blogs/axsa/oh-aos-why-have-you-forbidden-me | 2020-05-25T03:15:10 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.microsoft.com |
Dashboards
When using AEM you are able to manage a lot of content of different types (e.g. pages, assets). AEM Dashboards provide an easy-to-use and customizable way to define pages that display consolidated data.
AEM Dashboards are created on a per-user basis, so a user can only access their own dashboard.
However, Dashboard templates can be used to share common configuration and Dashboard layout.
Administering Dashboards
Creating A Dashboard
To create a new Dashboard, proceed as follows:
- In the Tools section, click Configuration Console .
- In the tree, Double-Click Dashboard .
- Click New Dashboard .
- Type the Title (e.g. My Dashboard) and the Name .
- Click Create .
Cloning A Dashboard
You may want to have multiple dashboards to quickly see information about your content from different views. To help you create a new Dashboard, AEM provides a clone feature that you can use to duplicate an existing Dashboard. To clone a Dashboard, proceed as follows:
- In the Tools section, click Configuration Console .
- In the tree, Click Dashboard .
- Click on the dashboard you want to clone.
- Click Clone .
- Type the Name of your new dashboard.
Removing A Dashboard
- In the Tools section, click Configuration Console .
- In the tree, Click Dashboard .
- Click on the dashboard you want to delete.
- Click Remove .
- Click Yes to confirm.
Dashboard Components
Overview
Dashboard components are nothing more than regular AEM components . This section describes reporting components shipped with AEM.
Web Analytics Reporting Components.
Basic configuration
The Basic tab provides access to the following configuration entries:
Title: The title displayed on the dashboard.
Request type: The way data are requested.
SiteCatalyst Configuration (optional): The configuration you want to use to connect to SiteCatalyst. If not provided, the configuration is assumed to be configured on the Dashboard page (via page properties).
Report Suite ID (optional): The SiteCatalyst report suite you want to use to generate the graph.
Report configuration
In order to display web statistics, you need to define the date range of the data you want to fetch. The Report tab provides two fields to define that range.
Setting a large date range can decrease the responsiveness of the dashboard.
Date From: Absolute or relative date from which the data is fetched.
Date To: Absolute or relative date to which the data is fetched.
Each component also defines specific settings.
Overtime Report
Date Granularity: Time unit of the X axis (e.g. day, hour).
Metrics: The list of events you want to display.
Elements: The list of elements that breaks down the metrics data in the graph.
Ranked List Report
Elements: The element that breaks down the metrics data in the graph.
Metrics: The event you want to display.
No. of top items: Number of items displayed by the report.
Ranked Report
Metrics: The event you want to display.
Elements: The element that breaks down the metrics data in the graph.
Top Site Section Report
This component displays a graph showing the more visited section of a website according to the following configuration.
No. of top items: Number of sections displayed in the report.
Trended Report
Date Granularity: Time unit of the X axis (e.g. day, hour).
Metrics: The event you want to display.
Elements: The element that breaks down the metrics data in the graph.
Extending Dashboard
Overview
Dashboards are normal pages (cq:Page); therefore, any components can be used to assemble Dashboards.
There is a default component group Dashboard containing analytics reporting components which are enabled on the template by default.
Creating A Dashboard Template.
Dashboard templates are shared between users.
Developing a Dashboard component. | https://docs.adobe.com/content/help/en/experience-manager-64/administering/operations/dashboards.html | 2020-05-25T02:59:16 | CC-MAIN-2020-24 | 1590347387155.10 | [array(['/content/dam/help/experience-manager-64.en/help/sites-administering/assets/chlimage_1-22.jpeg/_jcr_content/renditions/cq5dam.web.1280.1280.jpeg',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/sites-administering/assets/chlimage_1-26.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/sites-administering/assets/chlimage_1-27.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/sites-administering/assets/chlimage_1-28.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/sites-administering/assets/chlimage_1-29.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/sites-administering/assets/chlimage_1-30.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/sites-administering/assets/chlimage_1-31.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/sites-administering/assets/chlimage_1-32.png',
None], dtype=object) ] | docs.adobe.com |
dask.array.flatnonzero¶
- dask.array.flatnonzero(a)[source]¶
Return indices that are non-zero in the flattened version of a.
This docstring was copied from numpy.flatnonzero.
Some inconsistencies with the Dask version may exist.
This is equivalent to np.nonzero(np.ravel(a))[0].
- Parameters
- aarray_like
Input data.
- Returns
- resndarray
Output array, containing the indices of the elements of a.ravel() that are non-zero.
See also
Examples
>>> x = np.arange(-2, 3) >>> x array([-2, -1, 0, 1, 2]) >>> np.flatnonzero(x) array([0, 1, 3, 4])
Use the indices of the non-zero elements as an index array to extract these elements:
>>> x.ravel()[np.flatnonzero(x)] array([-2, -1, 1, 2]) | https://docs.dask.org/en/latest/generated/dask.array.flatnonzero.html | 2021-10-16T00:41:46 | CC-MAIN-2021-43 | 1634323583087.95 | [] | docs.dask.org |
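The same call works lazily on a Dask array, for example:

>>> import dask.array as da
>>> x = da.arange(-2, 3, chunks=2)
>>> da.flatnonzero(x).compute()
array([0, 1, 3, 4])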
Date: Fri, 23 Jan 2009 09:14:07 +0100 From: Polytropon <[email protected]> To: Gary Kline <[email protected]> Cc: Tim Judd <[email protected]>, FreeBSD Mailing List <[email protected]> Subject: Re: how to create a DVD backup filesystem? Message-ID: <[email protected]> In-Reply-To: <[email protected]> References: <[email protected]> <[email protected]> <[email protected]>
On Thu, 22 Jan 2009 23:45:16 -0800, Gary Kline <[email protected]> wrote:
> On Thu, Jan 22, 2009 at 10:26:22PM -0700, Tim Judd wrote:
> > You can always try to tar it up directly
> >
> > tar -czf /dev/acd0 ~kline/ ~devel/
> >
> > Good luck.
>
> I do tar ~kline --bzip'd-- and scp it around. 3 times/week. I
> want my most important stuff, ~/[DOT] files too, on a DVD.
> Y'never know when a meteor will destroy the Earth... .

Using tar onto acd may not work, but utilizing atapicam, it could eventually work with cd directly:

% tar cvjf /dev/cd0 ~/.* ~/devel ~/music ~/texts

But.

--
Polytropon
>From Magdeburg, Germany
Happy FreeBSD user since 4.0
Andra moi ennepe, Mousa, ...
Ens.Alerting.Utils
abstract class Ens.Alerting.Utils
Utility class that primarily serves as a wrapper for retrieving configuration data from the current production and supplying defaults otherwise.
Method Inventory (Including Private)
- ConfirmRuleExists()
- FindRecentManagedAlert()
- GetDefaultActionWindow()
- GetDefaultNotificationOperation()
- GetDefaultNotificationRecipients()
- GetItemAlertGroups()
- GetItemBusinessPartner()
- GetNotificationManager()
Parameters
parameter DEFAULTACTIONWINDOW = 60;
Methods
Utility method to ensure that a rule actually exists.
classmethod FindRecentManagedAlert(pAlertRequest As Ens.AlertRequest = "", pSeconds As %Integer = 300, pLogUpdate As %Boolean = 0) as %Integer [ Language = objectscript ]
Function to determine whether a Managed Alert with the same AlertText and SourceConfigName as the supplied pAlertRequest has been created within the previous pSeconds seconds. If such a Managed Alert does exist, the ID of the Managed Alert is returned. If pLogUpdate is true, then this function will assume that a new ManagedAlert will NOT be created and will log an update to the existing ManagedAlert to indicate that the alert has reoccurred. Note that the IsRecentManagedAlert() function in Ens.Alerting.Rule.FunctionSet is a thin wrapper around this method, so care should be taken to maintain compatibility.
Get the default number of minutes in which we expect users to take action.
Get the config name of the default Notification Operation in the current production.
Get a comma-separated list of default recipients for notifications in the current production.
Get the AlertGroups setting for a given config item in the current production.
classmethod GetItemBusinessPartner(pConfigName As %String = "") as %String [ Language = objectscript ]
Get the Business partner name for a given config item in the current production.
Get the config name of the Notification Manager component in the current production. | https://docs.intersystems.com/irislatest/csp/documatic/%25CSP.Documatic.cls?LIBRARY=ENSLIB&PRIVATE=1&CLASSNAME=Ens.Alerting.Utils | 2021-10-16T00:21:24 | CC-MAIN-2021-43 | 1634323583087.95 | [] | docs.intersystems.com |
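Assuming these are classmethods (as the signatures shown above suggest), a minimal ObjectScript usage sketch might look like the following; the return values depend entirely on the current production's settings:

 Set tMinutes = ##class(Ens.Alerting.Utils).GetDefaultActionWindow()
 Set tOperation = ##class(Ens.Alerting.Utils).GetDefaultNotificationOperation()
 Write "Default action window: ", tMinutes, " minutes; notify via: ", tOperation, !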
adapter-nso Change Logs
2021.1.1 Maintenance Release [2021-07-06]
Overview
- 2 Bug Fixes
- 1 Chores
- 3 Total Tickets
Bug Fixes
- adapter-nso:7.1.3-2021.1.2 [06-28-2021] - Fixed the getDeviceGroups error when a device group name has a space between characters.
- adapter-nso:7.1.3-2021.1.1 [06-14-2021] - Updated error handling to avoid crash on SSL error.
Chores
- adapter-nso:7.1.3-2021.1.0 [06-04-2021] - Updated and pinned dependencies for the next release.
2021.1 Feature Release [2021-05-28]
Overview
- 6 New Features
- 1 Improvements
- 15 Bug Fixes
- 1 Security Fixes
- 2 Chores
- 25 Total Tickets
New Features
- adapter-nso:7.1.0 [05-21-2021] - Added commit flag support for testInstances and saveInstances.
- adapter-nso:6.39.0 [03-29-2021] - Added capability to add device support for live status dynamically.
- adapter-nso:6.38.0 [03-12-2021] - Adapter can now store Yang modules in a database; also provided an API to manually refresh data.
- adapter-nso:6.37.1 [03-08-2021] - Added brokers to the pronghorn.json file. When the adapter is created the brokers will be assigned to the service configuration automatically.
- adapter-nso:6.37.0 [03-08-2021] - Added an API for converting Yang modules to JSON schemas.
- adapter-nso:6.36.0 [02-24-2021] - Added live-status support for Radware VX NED.
Improvements
- adapter-nso:7.0.0 [04-20-2021] - Modified return type to string of getOutOfSyncConfig() when device is in-sync.
Bug Fixes
- adapter-nso:7.1.3 [06-01-2021] - Improved error handling to return an error message when loadConfig fails.
- adapter-nso:7.1.2 [06-01-2021] - Added more details to the netconf error message that displays when there is no connection between IAP and NSO.
- adapter-nso:7.1.1 [05-28-2021] - Running config remediation on a Junos device will now return dry run results from adapter-nso.
- adapter-nso:7.0.2 [05-03-2021] - Fixed default properties of adapter-nso.
- adapter-nso:7.0.1 [04-22-2021] - Fixed issue where the incorrect config in loadConfig would crash the platform.
- adapter-nso:6.39.3 [04-16-2021] - Fixed getConfig and setConfig to allow Junos config remediation.
- adapter-nso:6.39.2 [04-07-2021] - Fixed undefined errors when parsing error response and parsing Yang models.
- adapter-nso:6.38.1 [03-25-2021] - Fixed an error that occurred when saving service instances with plan data.
- adapter-nso:6.36.1 [03-04-2021] - Fixed incorrect XML parsing for '<' symbol.
- adapter-nso:6.35.15 [02-22-2021] - Fixed a null object error when parsing service instances.
- adapter-nso:6.35.14 [02-22-2021] - Missing properties added to the NSO adapter properties schema.
- adapter-nso:6.35.13 [02-19-2021] - Fixed a bug that prevented adding an empty YANG Presence-Container to the configuration when creating a new service instance.
- adapter-nso:6.35.11 [01-25-2021] - Fixed undefined iterate error when doing NETCONF queries.
- adapter-nso:6.35.10 [01-19-2021] - Improved service models request time when an NSO adapter is down. Service Management page will now load with minimal delay.
- adapter-nso:6.35.9 [01-05-2021] - Added detailed error message for checkSyncDevices.
Security Fixes
- adapter-nso:7.0.4 [05-21-2021] - Updated database and any related dependencies to fix dependency security vulnerabilities.
Chores
- adapter-nso:7.0.3 [05-13-2021] - Added documentation for live status device support feature.
- adapter-nso:6.39.1 [04-06-2021] - Moved project to master pipeline.
2020.2.0 Feature Release [2021-01-05]
Overview
- 1 New Features
- 10 Improvements
- 27 Bug Fixes
- 1 Security Fixes
- 3 Chores
- 42 Total Tickets
New Features
- adapter-nso:6.30.0 [04-30-2020] - Added caching functionalities for retrieving device info.
Improvements
-.
Bug Fixes
-.
Security Fixes
- adapter-nso:6.35.1 [11-13-2020] - Removed single quote from Xpath params to avoid an injection vulnerability.
Chores
-..0.0 Feature Release [2019-04-02]
Overview
- 4 New Features
- 5 Improvements
- 20 Bug Fixes
- 1 Security Fixes
- 30 Total Tickets
Security Fixes
- adapter-nso:6.5.5 [03-26-2019] - Upgraded nyc package dependency. | https://docs.itential.com/2021.1/changelog/adapter-nso/ | 2021-10-15T23:25:40 | CC-MAIN-2021-43 | 1634323583087.95 | [] | docs.itential.com |
Tutorial: Create and configure an Azure Active Directory Domain Services managed domain
Azure Active Directory Domain Services (Azure AD DS) provides managed domain services such as domain join, group policy, LDAP, Kerberos/NTLM authentication that is fully compatible with Windows Server Active Directory. You consume these domain services without deploying, managing, and patching domain controllers yourself. Azure AD DS integrates with your existing Azure AD tenant. This integration lets users sign in using their corporate credentials, and you can use existing groups and user accounts to secure access to resources.
You can create a managed domain using default configuration options for networking and synchronization, or manually define these settings. This tutorial shows you how to use default options to create and configure an Azure AD DS managed domain using the Azure portal.
In this tutorial, you learn how to:
- Understand DNS requirements for a managed domain
- Create a managed domain
- Enable password hash synchronization
If you don't have an Azure subscription, create an account before you begin.
Prerequisites
To complete this tutorial, you need the following resources and privileges:
- An active Azure subscription.
- If you don't have an Azure subscription, create an account.
- An Azure Active Directory tenant associated with your subscription, either synchronized with an on-premises directory or a cloud-only directory.
- You need global administrator privileges in your Azure AD tenant to enable Azure AD DS.
- You need Contributor privileges in your Azure subscription to create the required Azure AD DS resources.
- A virtual network with DNS servers that can query necessary infrastructure such as storage. DNS servers that can't perform general internet queries might prevent the ability to create a managed domain.
Although not required for Azure AD DS, it's recommended to configure self-service password reset (SSPR) for the Azure AD tenant. Users can change their password without SSPR, but SSPR helps if they forget their password and need to reset it.
Important
You can't move the managed domain to a different subscription, resource group, region, virtual network, or subnet after you create it. Take care to select the most appropriate subscription, resource group, region, virtual network, and subnet when you deploy the managed domain.
In this tutorial, you create and configure the managed domain using the Azure portal. To get started, first sign in to the Azure portal.
Create a managed domain
To launch the Enable Azure AD Domain Services wizard, complete the following steps:
- On the Azure portal menu or from the Home page, select Create a resource.
- Enter Domain Services into the search bar, then choose Azure AD Domain Services from the search suggestions.
- On the Azure AD Domain Services page, select Create. The Enable Azure AD Domain Services wizard is launched.
- Select the Azure Subscription in which you would like to create the managed domain.
- Select the Resource group to which the managed domain should belong. Choose to Create new or select an existing resource group.
When you create a managed domain, you specify a DNS name. There are some considerations when you choose this DNS name:
- Built-in domain name: By default, the built-in domain name of the directory is used (a .onmicrosoft.com suffix). If you wish to enable secure LDAP access to the managed domain over the internet, you can't create a digital certificate to secure the connection with this default domain. Microsoft owns the .onmicrosoft.com domain, so a Certificate Authority (CA) won't issue a certificate.
- Custom domain names: The most common approach is to specify a custom domain name, typically one that you already own and is routable. When you use a routable, custom domain, traffic can correctly flow as needed to support your applications.
- Non-routable domain suffixes: We generally recommend that you avoid a non-routable domain name suffix, such as contoso.local. The .local suffix isn't routable and can cause issues with DNS resolution.
Tip
If you create a custom domain name, take care with existing DNS namespaces. It's recommended to use a domain name separate from any existing Azure or on-premises DNS name space.
For example, if you have an existing DNS name space of contoso.com, create a managed domain with the custom domain name of aaddscontoso.com. If you need to use secure LDAP, you must register and own this custom domain name to generate the required certificates.
You may need to create some additional DNS records for other services in your environment, or conditional DNS forwarders between existing DNS name spaces in your environment. For example, if you run a webserver that hosts a site using the root DNS name, there can be naming conflicts that require additional DNS entries.
In these tutorials and how-to articles, the custom domain of aaddscontoso.com is used as a short example. In all commands, specify your own domain name.
The following DNS name restrictions also apply:
- Domain prefix restrictions: You can't create a managed domain with a prefix longer than 15 characters. The prefix of your specified domain name (such as aaddscontoso in the aaddscontoso.com domain name) must contain 15 or fewer characters.
- Network name conflicts: The DNS domain name for your managed domain shouldn't already exist in the virtual network. Specifically, check for the following scenarios that would lead to a name conflict:
- If you already have an Active Directory domain with the same DNS domain name on the Azure virtual network.
- If the virtual network where you plan to enable the managed domain has a VPN connection with your on-premises network. In this scenario, ensure you don't have a domain with the same DNS domain name on your on-premises network.
- If you have an existing Azure cloud service with that name on the Azure virtual network.
Complete the fields in the Basics window of the Azure portal to create a managed domain:
Enter a DNS domain name for your managed domain, taking into consideration the previous points.
Choose the Azure Location in which the managed domain should be created. If you choose a region that supports Azure Availability Zones, the Azure AD DS resources are distributed across zones for additional redundancy.
Tip
Availability Zones are unique physical locations within an Azure region. Each zone is made up of one or more datacenters equipped with independent power, cooling, and networking. To ensure resiliency, there's a minimum of three separate zones in all enabled regions.
There's nothing for you to configure for Azure AD DS to be distributed across zones. The Azure platform automatically handles the zone distribution of resources. For more information and to see region availability, see What are Availability Zones in Azure?
The SKU determines the performance and backup frequency. You can change the SKU after the managed domain has been created if your business demands or requirements change. For more information, see Azure AD DS SKU concepts.
For this tutorial, select the Standard SKU.
A forest is a logical construct used by Active Directory Domain Services to group one or more domains. By default, a managed domain is created as a User forest. This type of forest synchronizes all objects from Azure AD, including any user accounts created in an on-premises AD DS environment.
A Resource forest only synchronizes users and groups created directly in Azure AD. For more information on Resource forests, including why you may use one and how to create forest trusts with on-premises AD DS domains, see Azure AD DS resource forests overview.
For this tutorial, choose to create a User forest.
To quickly create a managed domain, you can select Review + create to accept additional default configuration options. The following defaults are configured when you choose this create option:
- Creates a virtual network named aadds-vnet that uses the IP address range of 10.0.2.0/24.
- Creates a subnet named aadds-subnet using the IP address range of 10.0.2.0/24.
- Synchronizes All users from Azure AD into the managed domain.
Select Review + create to accept these default configuration options.
Deploy the managed domain
On the Summary page of the wizard, review the configuration settings for your managed domain. You can go back to any step of the wizard to make changes. To redeploy a managed domain to a different Azure AD tenant in a consistent way using these configuration options, you can also Download a template for automation.
To create the managed domain, select Create. A note is displayed that certain configuration options such as DNS name or virtual network can't be changed once the Azure AD DS managed has been created. To continue, select OK.
The process of provisioning your managed domain can take up to an hour. A notification is displayed in the portal that shows the progress of your Azure AD DS deployment. Select the notification to see detailed progress for the deployment.
The page will load with updates on the deployment process, including the creation of new resources in your directory.
Select your resource group, such as myResourceGroup, then choose your managed domain from the list of Azure resources, such as aaddscontoso.com. The Overview tab shows that the managed domain is currently Deploying. You can't configure the managed domain until it's fully provisioned.
When the managed domain is fully provisioned, the Overview tab shows the domain status as Running.
Important
The managed domain is associated with your Azure AD tenant. During the provisioning process, Azure AD DS creates two Enterprise Applications named Domain Controller Services and AzureActiveDirectoryDomainControllerServices in the Azure AD tenant. These Enterprise Applications are needed to service your managed domain. Don't delete these applications.
Update DNS settings for the Azure virtual network
With Azure AD DS successfully deployed, now configure the virtual network to allow other connected VMs and applications to use the managed domain. To provide this connectivity, update the DNS server settings for your virtual network to point to the two IP addresses where the managed domain is deployed.
The Overview tab for your managed domain shows some Required configuration steps. The first configuration step is to update DNS server settings for your virtual network. Once the DNS settings are correctly configured, this step is no longer shown.
The addresses listed are the domain controllers for use in the virtual network. In this example, those addresses are 10.0.2.4 and 10.0.2.5. You can later find these IP addresses on the Properties tab.
To update the DNS server settings for the virtual network, select the Configure button. The DNS settings are automatically configured for your virtual network.
Tip
If you selected an existing virtual network in the previous steps, any VMs connected to the network only get the new DNS settings after a restart. You can restart VMs using the Azure portal, Azure PowerShell, or the Azure CLI.
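For example, with the Azure CLI you could set the DNS servers listed earlier on the virtual network and then restart a VM so it picks up the change. The resource group and virtual network names below match the defaults used in this tutorial, while the VM name is a placeholder; adjust all of them for your environment:

az network vnet update \
    --resource-group myResourceGroup \
    --name aadds-vnet \
    --dns-servers 10.0.2.4 10.0.2.5

az vm restart --resource-group myResourceGroup --name myVM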
Enable user accounts for Azure AD DS
To authenticate users on the managed domain, Azure AD DS needs password hashes in a format that's suitable for NT LAN Manager (NTLM) and Kerberos authentication. Azure AD doesn't generate or store password hashes in the format that's required for NTLM or Kerberos authentication until you enable Azure AD DS for your tenant. For security reasons, Azure AD also doesn't store any password credentials in clear-text form. Therefore, Azure AD can't automatically generate these NTLM or Kerberos password hashes based on users' existing credentials.
Note
Once appropriately configured, the usable password hashes are stored in the managed domain. If you delete the managed domain, any password hashes stored at that point are also deleted.
Synchronized credential information in Azure AD can't be re-used if you later create a managed domain - you must reconfigure the password hash synchronization to store the password hashes again. Previously domain-joined VMs or users won't be able to immediately authenticate - Azure AD needs to generate and store the password hashes in the new managed domain.
Azure AD Connect Cloud Sync is not supported with Azure AD DS. On-premises users need to be synced using Azure AD Connect in order to be able to access domain-joined VMs. For more information, see Password hash sync process for Azure AD DS and Azure AD Connect.
In this tutorial, let's work with a basic cloud-only user account. For more information on the additional steps required to use Azure AD Connect, see Synchronize password hashes for user accounts synced from your on-premises AD to your managed domain.
Tip
If your Azure AD tenant has a combination of cloud-only users and users from your on-premises AD, you need to complete both sets of steps.
For cloud-only user accounts, users must change their passwords before they can use Azure AD DS. This password change process causes the password hashes for Kerberos and NTLM authentication to be generated and stored in Azure AD. The account isn't synchronized from Azure AD to Azure AD DS until the password is changed. Either expire the passwords for all cloud users in the tenant who need to use Azure AD DS, which forces a password change on next sign-in, or instruct cloud users to manually change their passwords. For this tutorial, let's manually change a user password.
Before a user can reset their password, the Azure AD tenant must be configured for self-service password reset.
To change the password for a cloud-only user, the user must complete the following steps:
Go to the Azure AD Access Panel page at.
In the top-right corner, select your name, then choose Profile from the drop-down menu.
On the Profile page, select Change password.
On the Change password page, enter your existing (old) password, then enter and confirm a new password.
Select Submit.
It takes a few minutes after you've changed your password for the new password to be usable in Azure AD DS and to successfully sign in to computers joined to the managed domain.
Next steps
In this tutorial, you learned how to:
- Understand DNS requirements for a managed domain
- Create a managed domain
- Add administrative users to domain management
- Enable user accounts for Azure AD DS and generate password hashes
Before you domain-join VMs and deploy applications that use the managed domain, configure an Azure virtual network for application workloads. | https://docs.microsoft.com/en-us/azure/active-directory-domain-services/tutorial-create-instance | 2021-10-16T01:19:51 | CC-MAIN-2021-43 | 1634323583087.95 | [] | docs.microsoft.com |
This section shows how to set up the database service for this use case. These instructions are included solely for the purpose of taking you through the implementation of this specific use case. For setup and conceptual information on the service, refer to
Amazon Relational Database Service
documentation.
For the topology and solution details, see
Use Case: Deploy the VM-Series Firewalls to Secure Highly Available Internet-Facing Applications in AWS
and
Solution Overview—Secure Highly Available Internet-Facing Applications. | https://docs.paloaltonetworks.com/vm-series/7-1/vm-series-deployment/set-up-the-vm-series-firewall-in-aws/set-up-the-amazon-relational-database-service-rds.html | 2021-10-15T23:24:46 | CC-MAIN-2021-43 | 1634323583087.95 | [] | docs.paloaltonetworks.com |
-V-V security tag that corresponds to the NSX payload format you chose in 3. Each of the predefined NSX-V payload formats corresponds to an NSX-V security tag. To view the NSX-V security tags in NSX-V, select.Networking & SecurityNSX ManagersNSX Manager IPManageSecurity TagsIn this example,NSX Config-sync underand reboot the PA-VM to resolve this issue.PanoramaVMwareNSX-V. | https://docs.paloaltonetworks.com/vm-series/9-0/vm-series-deployment/set-up-the-vm-series-firewall-on-nsx/set-up-the-vm-series-firewall-on-vmware-nsx/dynamically-quarantine-infected-guests.html | 2021-10-15T23:58:57 | CC-MAIN-2021-43 | 1634323583087.95 | [] | docs.paloaltonetworks.com |
Office365
If your organization uses Outlook via Microsoft Office365 to manage email, you can use the Office365 plugin to access Outlook mailboxes. Use this plugin to monitor incoming messages to find suspicious email, trigger other actions, forward email for further analysis, and many other email actions. Find the full list of available triggers or actions in the plugin documentation.
When setting up an Office 365 connection for the first time, you must register a new application in Microsoft Azure Active Directory.
New Azure Experience
If you are using Azure after November 2018, you may have access to two App Registration experiences. The instructions below are split into notes on the legacy experience and the new "Preview" experience. Please read carefully to make sure the instructions match the experience you are using.
Collect Configuration Information
Before you configure an Office365 connection, you will need to collect these parameters from the Microsoft Azure Active Directory:
- Directory ID
- Application ID
- Secret Key
We recommend copying and pasting these values into a temporary document while you collect them, as you will need to enter them into InsightConnect later.
To find parameter information in the Azure Portal:
- Log into your Azure Portal at.
- In the side navigation of the Azure Portal, click Azure Active Directory, then Properties in the secondary navigation menu.
- Copy and save the Directory ID on the Properties page.
- In the secondary navigation menu, click App Registrations, then + New Application Registration. If you are using Microsoft Azure's new App Registrations experience, this button may be called + New Registration.
- Complete the form with
InsightConnectfor the name, "Web app/API" for the application type, and the sign-on URL. Then click Create.
- Save the application registration to Azure, then copy and save the Application ID.
- Click Settings for the newly registered application, then click Keys in the menu that appears. If you are using Microsoft Azure's new App Registrations experience, this tab may be called Certificates & Secrets instead.
- Create a new key using the steps below.
Create a New Key
Azure uses asymmetric keys to authenticate and secure communications with other applications. You will need to create a new key in Azure to use for configuring InsightConnect connections.
To create a new key in the legacy Azure experience:
- Navigate to the "Keys" page in the Azure Portal using the steps in the Collect Configuration Information section.
- In the "Passwords" section, enter a key description or name for the key you will create. You can name the key "InsightConnect" or follow any naming schemes you use.
- Choose a duration for the life of the key. It will expire when the duration you set ends. Then click Save at the top of the page.
- Azure will generate a value for your new key. Copy and save this string now, as you will not be able to retrieve it after you leave this page in the Azure Portal.
To create a new key in the new Azure experience (11/2018-onward):
- Navigate to the "Certificates & Secrets" page in the Azure Portal using the steps in (doc:office365#collect-configuration-information-in-azure).
- In the "Client Secrets" section, click + New Client Secret to create a new key.
- Give the key a description -- this also serves as your key's name.
- Choose a duration for the life of the key. It will expire when the duration you set ends. Then click Add.
- Azure will generate a value for your new key. Copy and save this string now, as you will not be able to retrieve it after you leave this page in the Azure Portal.
Secret Key
The secret key will only be displayed once! Make sure to copy and paste it now and keep it with the Directory ID and Application ID you gathered earlier. You will need all of these to configure Office365 in InsightConnect.
Configure Application Permissions
After registering InsightConnect in Azure, you will need to configure every permission needed for Office365 to successfully provide data to InsightConnect. You won’t need all the permissions available in Azure for accessing Outlook.
The permissions needed to monitor and send email are:
- Read and write user mailbox settings
- Read User mailbox settings
- Read user mail
- Read and write access to user mail
- Send Mail as a user
- Read and write mail in all mailboxes
- Read all user mailbox settings
- Read Mail in all mailboxes
- Read and Write all user You may also see these settings in a format like
Mail.Reador
Mail.ReadWrite. Make sure to select all permissions matching the list above.
To configure application permissions in the legacy Azure experience:
- Click on Required Permissions in the Settings panel for your app registration.
- Click + Add at the top of the page.
- Click Select an API, then Microsoft Graph. Click Select to continue.
- Select any and all permissions that you need to provision to InsightConnect in order to use the Office365 plugin actions. You should have the permissions above selected.
- Click Select and then Done once you are satisfied with your permissions.
- You should now be back on the "Required Permissions" page. Click Grant Permissions at the top of the page, then Yes.
- When the success notification appears in the top right, Office365 is ready for use with InsightConnect.
To configure application permissions in the new Azure experience (11/2018-onward):
- Click on the API Permissions tab for your app registration.
- Click + Add a Permission.
- Choose Microsoft Graph. It will most likely be the first and largest button on the "Select an API" page.
- Click Application Permissions.
- Select any and all permissions that you need to provision to InsightConnect in order to use the Office365 plugin actions. You will most likely need to select permissions from the "Directory," "MailboxSettings," "Mail," and "User" categories, so thoroughly check that the permissions you grant to InsightConnect correspond to the plugin actions listed in the Overview of Office365 plugin documentation. You should have the permissions above selected.
- When the "Admin Consent Required" column for Microsoft Graph says "Granted for test," Office365 is ready for use with InsightConnect. Contact Microsoft support if you have issues with admin consent.
Conflicting Permissions
If you're still having trouble getting Office 365 set up, make sure you do not have Mail.ReadBasic or Mail.ReadBasic.All set in your application permissions. These will override some of the other permissions and prevent the plugin from getting attachments or reading from mail subfolders.
Configure a New Office365 Connection in InsightConnect
After you collect the information above in your Azure Portal, you can configure connections to Office365 in InsightConnect. Configure the connection name, orchestrator, and credentials as you normally would.
To configure the parameters for an Office365 connection:
- Collect all parameter information you obtained from the Azure Portal.
- Choose a credential to use with Office365. The first time you create a credential for Office365, you will be prompted to name the credential and add a Secret Key. Paste the Azure Private Key Value here. Otherwise, if you choose an existing credential, the Secret Key field will automatically populate with the information from that credential.
- Paste your Directory ID into the "Tenant ID" field.
- Paste your App ID into the "App ID" field.
- Click Continue to configure the rest of the workflow trigger or step.
- InsightConnect will automatically run a test for the connection. Learn more here.
Troubleshoot the Office365 Plugin
If you are having problems configuring your Office365 plugin, find solutions to common problems here.
Office365 will not authorize
If your Office365 plugin fails to authorize, check the plugin’s error logs. If you see anything like the following, you did not receive an authorization token after configuring an Office365 connection.
1Updating auth token…2Auth request: <Response [400]>3{"error":"invalid_request","error_description":" …
It is likely that the connection settings are invalid. Verify that the App ID, Tenant ID, and Secret Key are correct in your Office365 connections.
Office365 trigger does not work
If a correctly configured Office365 trigger fails to run a workflow when you know it should, it is likely that the permissions in Office365 are incorrect. Verify that your application has the
Mail.Read and
Mail.ReadWrite permissions selected in Azure Active Directory. Also, verify that the administrator of your tenant consented to the correct application permissions. More information can be found at these Microsoft resources:
Known Limitations
A maximum of 10 items per mailbox can be removed at one time. Because the capability to search for and remove messages is intended to be an incident-response tool, this limit helps ensure that messages are quickly removed from mailboxes.
The maximum number of mailboxes in a Content Search that you can delete items in by doing a search and purge action is 50,000. If the Content Search has more than 50,000 source mailboxes, the delete action will fail. Visit for more information. | https://docs.rapid7.com/insightconnect/office365/ | 2021-10-15T23:14:18 | CC-MAIN-2021-43 | 1634323583087.95 | [array(['/api/docs/file/product-documentation__master/insightconnect/images/Screen Shot 2019-08-14 at 2.49.54 PM.png',
None], dtype=object) ] | docs.rapid7.com |
Displays the data types of the columns defined by a particular hash index.
ANSI Compliance
This statement is a Teradata extension to the ANSI SQL:2011 standard.
Required Privileges
You must either own the table on which the hash index is defined or have at least one privilege on that table.
Use the SHOW privilege to enable a user to perform HELP or SHOW requests only for a specified hash index. | https://docs.teradata.com/r/76g1CuvvQlYBjb2WPIuk3g/h6HVqSy9KPITveREUtngUQ | 2021-10-16T00:17:03 | CC-MAIN-2021-43 | 1634323583087.95 | [] | docs.teradata.com |
In Files
- openssl/ossl.c
Namespace
- CLASS OpenSSL::OCSP::BasicResponse
- CLASS OpenSSL::OCSP::CertificateId
- CLASS OpenSSL::OCSP::OCSPError
- CLASS OpenSSL::OCSP::Request
- CLASS OpenSSL::OCSP::Response::OCSP
OpenS:] from the basic response. (You can check multiple certificates in a request, but for this example we only submitted one.)
response_certificate_id, status, reason, revocation_time, this_update, next_update, extensions = basic_response.status
Then check the various fields.
unless response_certificate_id == certificate_id then raise 'certificate id mismatch' end now = Time.now if this_update > now then raise 'update date is in the future' end if now > next_update then raise 'next update time has passed' end
Constants
- NOCASIGN
(This flag is not used by OpenSSL 1.0.1g)
- NOCERTS
Do not include certificates in the response
- NOCHAIN
Do not verify the certificate chain on the response
- NOCHECKS
Do not make additional signing certificate checks
- NODELEGATED
(This flag is not used by OpenSSL 1 has. | http://docs.activestate.com/activeruby/beta/ruby/stdlib/libdoc/openssl/rdoc/OpenSSL/OCSP.html | 2019-05-19T08:45:37 | CC-MAIN-2019-22 | 1558232254731.5 | [array(['../images/find.png', 'show/hide quicksearch [+]'], dtype=object)] | docs.activestate.com |
Maps API¶
The Maps API provides a node.js based API that allows you to generate maps based on data hosted in your CartoDB account by applying custom SQL and CartoCSS to the data
Like the other components of CartoDB is Open Source and you can find the source code at CartoDB/Windshaft-cartodb
You can find usage documentation at
Although you can chechout any branch of the repository most of them are usually work in progress that is not guaranteed to work. In order to run a production ready Maps API service you need to use master branch. | https://cartodb.readthedocs.io/en/v4.11.131/components/maps-api.html | 2019-05-19T09:43:35 | CC-MAIN-2019-22 | 1558232254731.5 | [] | cartodb.readthedocs.io |
HTTP Header Authentication¶
With web servers such as NGINX or others you can perform SSO by making the web server add a trusted, safe header to every request sent to CartoDB. Example:
User browser –
GET –> NGINX (adds
'sso-user-email': '[email protected]' header) –> CartoDB server
You can enable HTTP Header Authentication at CartoDB by adding the following to
app_conf.yml (taken from
app_conf.yml.sample):
http_header_authentication: header: # name of the trusted, safe header that your server adds to the request field: # 'email' / 'username' / 'id' / 'auto' (autodetection) autocreation: # true / false (true requires field to be email)
Configuration for the previous example:
http_header_authentication: header: 'sso-user-email' field: 'email' autocreation: false
Autocreation¶
Even more, if you want not only authentication (authenticating existing users) but also user creation you can turn
autocreation on by setting
autocreation: true. If you do so, when a user with the trusted header performs his first request his user will be created automatically. This feature requires that
field is set to
[email protected]).
username: user of the email (
alice).
password: random. He can change it in his account page.
organization: taken from the subdomain (
myorg). | https://cartodb.readthedocs.io/en/v4.11.131/operations/http_headers_authentication.html | 2019-05-19T09:45:44 | CC-MAIN-2019-22 | 1558232254731.5 | [] | cartodb.readthedocs.io |
AirPlay Permissions
AirPlay Permissions allow you to map one or more mobile devices to an AirPlay destination, such as an Apple TV, so that those mapped mobile devices can be automatically paired with the AirPlay destination. When a mobile device is mapped to an AirPlay destination via AirPlay Permissions, you can also choose to automatically give the mobile device the password for the AirPlay destination, or to make only the permitted AirPlay destinations available to that device.
Mobile Device Inventory Field Mapping
When configuring AirPlay Permissions, you must choose a mobile device inventory field to use to map devices to permitted AirPlay destinations. The inventory field you choose is automatically mapped to an AirPlay destination when the value in that field is the same for both the mobile device and the AirPlay destination device.
Requirements
To use AirPlay Permissions, you need:
Mobile devices with iOS 8 or later
Apple TV devices enrolled with Jamf Pro
Creating an AirPlay Permission
Log in to Jamf Pro.
In the top-right corner of the page, click Settings
.
Click Global Management.
Click AirPlay Permissions
.
Click New
.
Enter a display name for the AirPlay Permission.
Select the inventory field from the Mapping Field pop-up menu.
(Optional) Enable settings for restricting AirPlay destinations and automating passwords, as needed.
Click Save.
Repeat this process for each new AirPlay Permission you want to create.
The mobile devices and AirPlay destinations that share the selected inventory field are mapped immediately. | https://docs.jamf.com/10.10.0/jamf-pro/administrator-guide/AirPlay_Permissions.html | 2019-05-19T09:18:08 | CC-MAIN-2019-22 | 1558232254731.5 | [] | docs.jamf.com |
UI.
Kentico consists of modules. Modules contain UI elements. A UI element can represent one of the following:
- application
- tab
- menu item
- group of controls on a page
For each of the UI elements, you can say whether you want users in a particular role to see the UI element or not.
UI personalization and administrators
UI personalization does not apply to users who have the Administrator or Global administrator privilege level. Administrators always have access to all UI elements, regardless of the system's UI personalization settings.
UI personalization vs. Permissions.
Learn more about permissions and UI personalization
Enabling UI personalization
UI personalization is disabled by default. To start using UI personalization:
- Open the Settings application.
- Search for "UI personalization" or click the Security & Membership category.
- Select the Enable UI personalization checkbox.
- Save the settings.
The system enables UI personalization for the selected site. Users on the selected site see the administration interface according to the configured restrictions.
If you want all Kentico users to have full access to the administration interface without worrying about the UI personalization settings, keep the Enable UI personalization setting disabled.
Configuring visibility of UI elements
Kentico allows you to show or hide UI elements based on user roles.
- Open the UI personalization application.
- On the Administration tab, select a site and a role. Selecting a module is optional.
- Browse the UI element tree and select or clear the check boxes that represent the parts of the UI that you want to show or hide.
The system automatically saves the settings as you select or clear check boxes in the UI element tree. The system hides the parts of the UI that have their check box cleared from users in the selected role. If a user tries to access such UI element, the system displays an access denied message.
If a user is a member of multiple roles, they're allowed to see UI elements from all their roles combined..
Was this page helpful? | https://docs.kentico.com/k12/managing-users/ui-personalization | 2019-05-19T09:51:46 | CC-MAIN-2019-22 | 1558232254731.5 | [] | docs.kentico.com |
QOS structure
The QOS structure provides the means by which QOS-enabled applications can specify quality of service parameters for sent and received traffic on a particular flow.
Syntax
typedef struct _QualityOfService { FLOWSPEC SendingFlowspec; FLOWSPEC ReceivingFlowspec; WSABUF ProviderSpecific; } QOS, *LPQOS;
SendingFlowspec
Specifies QOS parameters for the sending direction of a particular flow. SendingFlowspec is sent in the form of a FLOWSPEC structure.
ReceivingFlowspec
Specifies QOS parameters for the receiving direction of a particular flow. ReceivingFlowspec is sent in the form of a FLOWSPEC structure.
ProviderSpecific
Pointer to a structure of type WSABUF that can provide additional provider-specific quality of service parameters to the RSVP SP for a given flow.
Remarks
Most applications can fulfill their quality of service requirements without using the ProviderSpecific buffer. However, if the application must provide information not available with standard Windows 2000 QOS parameters, the ProviderSpecific buffer allows the application to provide additional parameters for RSVP and/or traffic control. | https://docs.microsoft.com/en-us/windows/desktop/api/winsock2/ns-winsock2-_qualityofservice | 2019-05-19T08:42:23 | CC-MAIN-2019-22 | 1558232254731.5 | [] | docs.microsoft.com |
Depending on your level of familiarity with the Shared Research Computing Clusters, you may prefer to submit your MATLAB jobs in one of several different ways.
Easy: Run from your desktop's MATLAB environment
This is the best way to get started with minimal learning curve with respect to the cluster environment.
- Apply for and be approved for a cluster account (choose SUGAR, STIC, and DAVinCI for your clusters)
- Check your Parallel Computing Toolbox: Click on the "Parallel" menu and select "Manage Cluster Profiles..." You should see only "local" under the Cluster Profile list (If you've previously imported other profiles, just make sure none of them are labeled DAVinCI, STIC, or SUGAR.)
- Run the following code in MATLAB (copy and paste into the Command Window)
Now you should have a set of cluster profiles under your Parallel menu for each of the clusters. On first use, MATLAB will prompt you for credentials to access). | https://docs.rice.edu/confluence/pages/viewpage.action?pageId=45220068 | 2019-05-19T09:42:00 | CC-MAIN-2019-22 | 1558232254731.5 | [] | docs.rice.edu |
Troubleshooting PGs¶
Placement Groups Never Get Clean¶
When you create a cluster and your cluster remains in
active,
active+remapped or
active+degraded status and never achieve an
active+clean status, you likely have a problem with your configuration.
You may need to review settings in the Pool, PG and CRUSH Config Reference and make appropriate adjustments.
As a general rule, you should run your cluster with more than one OSD and a pool size greater than 1 object replica.
One Node Cluster¶
Ceph no longer provides documentation for operating on a single node, because you would never deploy a system designed for distributed computing on a single node. Additionally, mounting client kernel modules on a single node containing a Ceph daemon may cause a deadlock due to issues with the Linux kernel itself (unless you use VMs for the clients). You can experiment with Ceph in a 1-node configuration, in spite of the limitations as described herein.
If you are trying to create a cluster on a single node, you must will try to peer the PGs of one OSD with the PGs of another OSD on
another node, chassis, rack, row, or even datacenter depending on the setting.
Tip-deploy osd create --data {disk} {host}
Fewer OSDs than Replicas¶
If you have brought up two OSDs to an
up and
in state, but you still
don.
Note
You can make the changes at runtime. If you make the changes in your Ceph configuration file, you may need to restart your cluster.
Pool Size = 1¶
If you have the
osd pool default size set to
1, you will only have
one copy of the object. OSDs rely on other OSDs to tell them which objects
they should have. If a first osd force-create-pg <pgid>
Stuck Placement Groups¶
It is normal for placement groups to enter states like “degraded” or “peering” following a failure. Normally, we check for:
inactive- The placement group has not been
activefor too long (i.e., it hasn’t been able to service read/write requests).
unclean- The placement group has not been
cleanfor too long (i.e., it hasn’t been able to completely recover from a previous failure).
stale- The placement group status has not been updated by a
ceph-osd, indicating that all nodes storing this placement group may be
down.
You can explicitly list stuck placement groups with one of:
ceph pg dump_stuck stale ceph pg dump_stuck inactive ceph pg dump_stuck unclean
For stuck
stale placement groups, it is normally a matter of getting the
right
ceph-osd daemons running again. For stuck
inactive placement
groups, it is usually a peering problem (see Placement Group Down - Peering Failure). For
stuck
unclean placement groups, there is usually something preventing
recovery from completing, like unfound objects (see
Unfound Objects);
Placement Group Down - Peering Failure¶
In certain cases, the
ceph-osd Peering process can run into
problems, preventing a PG from becoming active and usable. For
example,
ceph health might report:
We can query the cluster to determine exactly why the PG is marked
down with:"} ] }
The
recovery_state section tells us that peering is blocked due to
down
ceph-osd daemons, specifically
osd.1. In this case, we can start that
ceph-osd
and things will recover.
Alternatively, if there is a catastrophic failure of
osd.1 (e.g., disk
failure), we can tell the cluster that it is
lost and to cope as
best it can.
Important
This is dangerous in that the cluster cannot guarantee that the other copies of the data are consistent and up to date.
To instruct Ceph to continue anyway:
ceph osd lost 1
Recovery will proceed.
Unfound Objects¶
Under certain combinations of failures Ceph may complain about
unfound objects:
ceph health detail HEALTH_WARN 1 pgs degraded; 78/3778 unfound (2.065%) pg 2.4 is active+degraded, 78 unfound
This means that the storage cluster knows that some objects (or newer copies of existing objects) exist, but it hasn.
Now 1 knows that these object exist, but there is no live
ceph-osd who
has a copy. In this case, IO to those objects will block, and the
cluster will hope that the failed node comes back soon; this is
assumed to be preferable to returning an IO error to the user.
First, you can identify which objects are unfound with: will be true and you can query for more. (Eventually the
command line tool will hide this from you, but not yet.)
Second, you can identify which OSDs have been probed or might contain data: simply won’t consider the long-departed ceph-osd as a potential location to consider. (This scenario, however, is unlikely.) pg 2.5 mark_unfound_lost revert|delete
This the final argument specifies how the cluster should deal with lost objects.
The “delete” option will forget about them entirely.
The “revert” option (not available for erasure coded pools) will either roll back to a previous version of the object or (if it was a new object) forget about it entirely. Use this with caution, as it may confuse applications that expected the object to exist.
Homeless Placement Groups¶
It is possible for all OSDs that had copies of a given placement groups to fail.
If that’s the case, that subset of the object store is unavailable, and the
monitor will receive no status updates for those placement groups. To detect
this situation, the monitor marks any placement group whose primary OSD has
failed as
stale. For example:
ceph health HEALTH_WARN 24 pgs stale; 3/300 in osds are down
You can identify which placement groups are
stale, and what the last OSDs to
store them were, with:
If we want to get placement group 2.5 back online, for example, this tells us that
it was last managed by
osd.0 and
osd.2. Restarting those
ceph-osd
daemons will allow the cluster to recover that placement group (and, presumably,
many others).
Only a Few OSDs Receive Data¶
If you have many nodes in your cluster and only a few of them receive data, check the number of placement groups in your pool. Since placement groups get mapped to OSDs, a small number of placement groups will not distribute across your cluster. Try creating a pool with a placement group count that is a multiple of the number of OSDs. See Placement Groups for details. The default placement group count for pools is not useful, but you can change it here.
Can’t Write Data¶
If your cluster is up, but some OSDs are down and you cannot write data,
check to ensure that you have the minimum number of OSDs running for the
placement group. If you don’t have the minimum number of OSDs running,
Ceph will not allow you to write data because there is no guarantee
that Ceph can replicate your data. See
osd pool default min size
in the Pool, PG and CRUSH Config Reference for details.
PGs Inconsistent¶
If you receive an
active + clean + inconsistent state, this may happen
due to an error during scrubbing. As always, we can identify the inconsistent
placement group(s) with:
$ ceph health detail HEALTH_ERR 1 pgs inconsistent; 2 scrub errors pg 0.6 is active+clean+inconsistent, acting [0,1,2] 2 scrub errors
Or if you prefer inspecting the output in a programmatic way:
$ rados list-inconsistent-pg rbd ["0.6"]
There is only one consistent state, but in the worst case, we could have
different inconsistencies in multiple perspectives found in more than one
objects. If an object named
foo in PG
0.6 is truncated, we will have:
$:
- The only inconsistent object is named
foo, and it is its head that has inconsistencies.
- The inconsistencies fall into two categories:
errors: these errors indicate inconsistencies between shards without a determination of which shard(s) are bad. Check for the
errorsin
shardsarray. The
errorsare set for the given shard that has the problem. They include errors like
read_error. The
errorsending in
oiindicate a comparison with
selected_object_info. Look at the
shardsarray.
You can repair the inconsistent placement group by executing:
ceph pg repair {placement-group-ID}
Which. So, please. See The Network Time Protocol and Ceph
Clock Settings for additional details.
Erasure Coded PGs are not active+clean¶
When CRUSH fails to find enough OSDs to map to a PG, it will show as a
2147483647 which is ITEM_NONE or
no OSD found. For instance:
[2,1,6,0,5,8,2147483647,7,4]
Not enough OSDs¶
If the Ceph cluster only has 8 OSDs and the erasure coded pool needs 9, that is what it will show. You can either create another erasure coded pool that requires less OSDs:
ceph osd erasure-code-profile set myprofile k=5 m=3 ceph osd pool create erasurepool 16 16 erasure myprofile
or add a new OSDs and the PG will automatically use them.
CRUSH constraints cannot be satisfied¶ (“dumping”) the rule:
$"}]}
You can resolve the problem by creating a new pool in which PGs are allowed to have OSDs residing on the same host with:
ceph osd erasure-code-profile set myprofile crush-failure-domain=osd ceph osd pool create erasurepool 16 16 erasure myprofile
CRUSH gives up too soon¶_triesto a value greater than the default.
You should first verify the problem with
crushtool after
extracting the crushmap from the cluster so your experiments do not
modify the Ceph cluster and only work on a local files:
$ will try mapping
one million values (i.e. the range defined by
[--min-x,--max-x])
and must display at least one bad mapping. If it outputs nothing it
means all mappings are successful and you can stop right there: the
problem is elsewhere.
The CRUSH rule can be edited by decompiling the crush map:
$ crushtool --decompile crush.map > crush.txt
and adding:
$ crushtool --compile crush.txt -o better-crush.map
When all mappings succeed, an histogram of the number of tries that
were necessary to find all of them can be displayed with the
--show-choose-tries option of
crushtool:
$ took). | http://docs.ceph.com/docs/master/rados/troubleshooting/troubleshooting-pg/ | 2019-05-19T09:41:00 | CC-MAIN-2019-22 | 1558232254731.5 | [] | docs.ceph.com |
High Fidelity is an open-source software where you can create and share virtual reality (VR) experiences. You can create and host your own VR world, explore other worlds, meet and connect with other users, attend or host live VR events and much more.
The High Fidelity metaverse provides built-in social features, including avatar interactions, spatialized audio and interactive physics. Additionally, you have the ability to import any 3D object into your virtual environment. No matter where you go in High Fidelity, you will always be able to interact with your environment, engage with your friends, and listen to conversations just like you would in real life.
What can I do?¶
You have the power to shape your VR experience in High Fidelity.
- EXPLORE by hopping between domains in the metaverse, shop the Marketplace, attend events and check out what others are up to!
- CREATE personal experiences by building avatars, domains, tablet apps, and more for you and others to enjoy.
- SCRIPT and express your creativity by applying advanced scripting concepts to entities and avatars in the metaverse.
- HOST and make immersive experiences to educate, entertain, and connect with your audience.
- SELL your creations to others and make money in the metaverse using the High Fidelity Marketplace.
- CONTRIBUTE to our endeavor by browsing our source code on GitHub. | http://docs.highfidelity.com/en/rc81/ | 2019-05-19T08:30:43 | CC-MAIN-2019-22 | 1558232254731.5 | [] | docs.highfidelity.com |
Matlab coding style guide¶
Introduction¶
This document gives coding conventions for Matlab code as part of the Two!Ears Auditory Model.
Code is read much more often than it is written. The guidelines provided here are intended to improve the readability of code and make it consistent across the whole Two!Ears Auditory Model. Both points are, besides a good documentation, also a big part of the impression our software gives to other users.
These guidelines are introduced at a stage where we have already written code and you are of course not forced to rewrite the existing code. For your existing code the following points should be considered:
- Make sure your old code includes a function/class header for documentation
- If you have to read code that you have not written yourself and which does not comply to the guidelines presented here, you could create an issue if you are not able to understand the code
Documentation.
Naming
Layout ];
Credits¶
This document was inspired by MATLAB Style Guidelines 2.0 and PEP 8.
ildValue = 10; % ild value | http://docs.twoears.eu/en/1.3/dev/coding-style-guide/ | 2019-05-19T08:57:03 | CC-MAIN-2019-22 | 1558232254731.5 | [] | docs.twoears.eu |
Configuration¶
In this section you can find some helpful configuration examples related with Basemaps, Domainles Urls and Common-data.
Basemaps¶
The way to add/change the basemaps available in CartoDB is chaging the config/app_config.yml. Basically you need to add a new entry called basemaps, that entry can have different sections and each section one or more basemaps.
Each section corresponds to row in CartoDB basemap dialog. If the basemaps entry is not present a set of default basemaps will be used (CartoDB and Stamen ones, check the default basemaps file)
Also, it’s always necessary to have a default basemap among all the confifured ones in the app_config.yml. The way to set a basemap as default a “default” attribute needs to be added to the basemap. There can be several basemaps in the config with the attribute default set, however, only the first one found in the same order than in the app_config will be used as default.
Here is an example config.yml:
basemaps: CartoDB: positron_rainbow: default: true # Ident with spaces not with tab>' dark_matter_rainbow: url: 'http://{s}.basemaps.cartocdn.com/dark_all/{z}/{x}/{y}.png' subdomains: 'abcd' minZoom: '0' maxZoom: '18' name: 'Dark matter' className: 'dark_matter_rainbow' attribution: '© <a href="">OpenStreetMap</a> contributors © <a href="">CARTO</a>' positron_lite_rainbow: url: 'http://{s}.basemaps.cartocdn.com/light_nolabels/{z}/{x}/{y}.png' subdomains: 'abcd' minZoom: '0' maxZoom: '18' name: 'Positron (lite)' className: 'positron_lite_rainbow' attribution: '© <a href="">OpenStreetMap</a> contributors © <a href="">CARTO</a>' stamen: toner_stamen: url: '-{s}.a.ssl.fastly.net/toner/{z}/{x}/{y}.png' subdomains: 'abcd' minZoom: '0' maxZoom: '18' name: 'Toner' className: 'toner_stamen' attribution: 'Map tiles by <a href="">Stamen Design</a>, under <a href="">CC BY 3.0</a>. Data by <a href="">OpenStreetMap</a>, under <a href="">ODbL</a>.'
Basemaps with a layer of labels¶
Basemaps can optionally add a layer with labels on top of other layers. To do so, you should add the labels key to the basemap config, as follows:
positron_rainbow: default: true>' labels: url: 'http://{s}.basemaps.cartocdn.com/light_only_labels/{z}/{x}/{y}.png'
Domainless URLs¶
Historically, CartoDB URLs were based on a
username.carto.com/PATH schema.
When Multiuser accounts were introduced, an alternate schema
organizationname.carto.com/u/username/PATH was built alongside the “classic” one.
Both schemas introduce some problems for opensource and/or custom installs of the platform,
as they require DNS changes each time a new user or organization is added.
Subdomainless urls are the answer to this problems. Modifying some configuration settings,
any CartoDB installation can be setup to work with a new schema,
carto.com/user/username/PATH.
The following sections details the steps to make it work and the limitations it has.
Configuration changes for Domainless URLs¶
For a default installation, app_config.yml contains this relevant values:
session_domain: '.localhost.lan' subdomainless_urls: false
To activate subdomainless urls, change to (notice the removed starting dot from session_domain:
session_domain: 'localhost.lan' subdomainless_urls: true
Non-default HTTP and HTTPs ports can also be configured here for REST API calls, with the following app_config.yml attributes:
# nil|integer. HTTP port to use when building urls. # Leave empty to use default (80) http_port: # nil|integer. HTTPS port to use when building urls. # Leave empty to use default (443) https_port:
Remember that as with other configuration changes, Rails application must be restarted to apply them.
Limitations¶
If you leave the dot at
session_domain having subdomainless urls, you will be forced
to always have a subdomain. Any will do, but must be present. If you remove the dot it
will work as intended without any subdomain.
When subdomainless urls are used, organizations will be ignored from the urls. In fact,
typing
whatever.carto.com/user/user1 and
carto.com/user/user1 is the same. The platform
will replicate the sent subdomain fragment to avoid CORS errors but no existing organization
checks will be performed. You should be able to use them, assign quota to the organization users, etc.
Common Data¶
This service uses the visualizations API to retrieve all the public datasets from a defined user and serve them as importable datasets to all the users of the platform through the data library options.
All can be configured through the
common_data settings section. If the
base_url
option is set, this will be the base url the service is going to use to build the URL to retrieve datasets.
For example:
common_data: protocol: 'https' username: 'common-data' base_url: '' format: 'shp'
Use as the base url to retrieve all the public datasets from that user.
This is the default behaviour in CartoDB, but if you want to use your own system and user for this purpose you
have to define the
username property pointing to the user that will provide the datasets in your own instance.
The URL in this case is going to be built using your instance base url. For example if your instance base url is and the config is:
common_data: protocol: 'https' username: 'common-data-user' format: 'shp'
the system populates the data library with the public datasets from...
The
format option is used to define the format of the file generated when you are importing one datasets from
the data library. When you import a dataset it uses a stored URL to download that dataset as a file, in the format
defined in the config, and import as your own dataset.
Separate folders¶
Default installation keeps logs, configuration files and assets under the standard Rails folder structure:
/log,
/config and
/public at Rails root (your installation directory). Some installations might be interested in
moving those directories outside Rails root in order to separate code and data. You can accomplish that with symbolic
links. Nevertheless, there are three environment variables that you can use instead:
RAILS_LOG_BASE_PATH: for example, setting it to
/var/cartowill use that as a base folder for log files, which will be stored at
/var/carto/log. Defaults to
Rails.root.
RAILS_CONFIG_BASE_PATH: for example, setting it to
/etc/cartowill make Rails open the application and database configuration files at
/etc/carto/conf/app_config.ymland
/etc/carto/conf/database.yml. Defaults to
Rails.root.
RAILS_PUBLIC_UPLOADS_PATH: sets assets base path, both static and dynamic. For example, setting it to
/var/carto/assetswill upload files (markers, avatars and so on) to
/var/carto/assets/uploads, but it also makes Rails server to load public assets (CSSs, JS…) from there. Defaults to
app_config[:importer]["uploads_path"]or
Rails.rootif it’s not present (due to backwards compatibility). If you use this variable you’ll need to do one onf the following:
- Use nginx to load the assets (recommended): making
/publicthe nginx default root will make nginx use the proper folders for assets, without requesting them to the Rails server:
root /opt/carto/builder/embedded/cartodb/public;.
- Copy or link assets (from
/<RAILS ROOT>/public) to public upload path folder. | https://cartodb.readthedocs.io/en/v4.11.41/configuration.html | 2019-05-19T09:39:07 | CC-MAIN-2019-22 | 1558232254731.5 | [] | cartodb.readthedocs.io |
Editing object code externally
Kentico provides a way to store the code of virtual objects in the file system in addition to the database. Having code files on a local disk allows you to edit code in external editors or manage it using a source control system.
Note: This feature only manages code. Other object data and settings remain only in the database and are NOT represented in the file system. The continuous integration solution provides a more complete solution if you wish to synchronize development objects using a source control system.
To store object code in the file system, open the System application and select the Virtual objects tab. The options in the Source control section allow you to select which objects are stored in the file system:
- To store object code in the file system, select the boxes next to the required object types and click Apply changes. The file are saved in the ~/CMSVirtualFiles folder.
- To move object code back into the database, uncheck the corresponding boxes and click Apply changes. Checked objects stay in the file system and unchecked objects are moved back into the database.
- Click Synchronize changes to database to copy the code from the files on the disk into the matching objects in the database.
Source control in Deployment mode
If Deployment mode is ON, you cannot configure the source control options for objects that require compilation (only for Web part containers and CSS stylesheets).
When using source control mode, you can still edit the code of objects through the Kentico administration interface. If you edit an object, the system displays the code from the corresponding file. Saving the code in the UI writes the data into both the file system and the database.
Limitations
- Do not apply hotfixes while using source control mode. Before you start the hotfix procedure, return files to the database. You can re-enable source control mode once the hotfix is applied.
- The Staging feature has limited support for synchronizing object code when using source control mode:
- On source servers, staging tasks are generated only if you edit code in the Kentico UI or after you synchronize changes from files into the database.
- On target servers, source control mode must be disabled if you wish to use incoming staging tasks to update object code.
Using source control on web application projects
When you enable source control on web application installations, the system cannot automatically integrate the created files into the Visual Studio project. If you wish to edit the code of objects directly within your web application project, perform the following steps:
Open the project in Visual Studio.
- Click Show all files at the top of the Solution Explorer.
- Right-click the CMSVirtualFiles folder and select Include in Project.
- Build the CMSApp project.
You can now edit the code files of objects in Visual Studio inside the CMSVirtualFiles folder. In source control mode, the system generates ascx files without code behind files, so you do not need to convert the files into the web application format.
Was this page helpful? | https://docs.kentico.com/k11/developing-websites/preparing-your-environment-for-team-development/editing-object-code-externally | 2019-05-19T09:52:21 | CC-MAIN-2019-22 | 1558232254731.5 | [] | docs.kentico.com |
1. Close your Department Maintenance form and switch to the Visual LANSA editor. Add a BEGIN_LOOP/END_LOOP at the beginning of the UPDATE.Click event handling routine.
The loop should be performed 10 times using STD_COUNT as the loop count and display a message 'Message number is nn' containing STD_COUNT. Your code should look like the following:
Evtroutine Handling(#UPDATE.Click)
Begin_Loop Using(#STD_COUNT) To(10)
Message Msgtxt('Message number is ' + #std_count.asstring)
End_Loop
Update Fields(#FORMDATA) In_File(DEPTAB) Val_Error(*NEXT)
. . . . .
This provides a line of code that executes a number of times, each time the UPDATE.Click event routine runs.
2. Compile your form.
3. Run the form in debug using the toolbar button:
When the program stops at the Create_Instance routine, scroll down to the UPDATE.Click event handling routine and clear the breakpoint on the UPDATE statement.
There are a number of ways you can do this. For example:
a. Select this line of code and press F9 or use the Toggle Breakpoint button on the Debug ribbon.
b. Select this line of code and use the right mouse menu option, Remove Breakpoint.
c. Display the Breakpoints tab, select this breakpoint and use the Remove Breakpoint toolbar button.
4. Select the MESSAGE command inside the loop and press F9 to set this as a breakpoint.
5. Use the right mouse menu option while selecting the MESSAGE command to show the Breakpoint Properties dialog.
6. Set a Pass count of 3. In debug mode, during an update, the loop will now execute twice and break on every third execution of the MESSAGE command.
7. Execute the form in debug mode, Fetch a department and press the Update button. Debug will initially break on the MESSAGE command on the 3rd execution. Press F5 and the break will then occur on the 6th execution. Press F5 again to break on the 9th execution. Press F5 again and the Update routine will run and perform the table update.
8. Close your form.
9. Remove the breakpoint from the MESSAGE command. | https://docs.lansa.com/14/en/lansa095/content/lansa/frmeng01_0245.htm | 2019-05-19T09:30:00 | CC-MAIN-2019-22 | 1558232254731.5 | [] | docs.lansa.com |
Overview of business continuity with Azure SQL Database
Business continuity in Azure SQL Database refers to the mechanisms, policies, and procedures that enable your business to continue operating in the face of disruption, particularly to its computing infrastructure. In the most of the cases, Azure SQL Database will handle the disruptive events that might happen in the cloud environment and keep your applications and business processes running. However, there are some disruptive events that cannot be handled by SQL Database such as:
- User accidentally deleted or updated a row in a table.
- Malicious attacker succeeded to delete data or drop a database.
- Earthquake caused a power outage and temporary disabled data-center.
These cases cannot be controlled by Azure SQL Database, so you would need to use the business continuity features in SQL Database that enables you to recover your data and keep your applications running.
This overview describes the capabilities that Azure SQL Database provides.
SQL Database provides several business continuity features, including automated backups and optional database replication that can mitigate these scenarios. First, you need to understand how SQL Database high availability architecture provides 99.99% availability and resiliency to some disruptive events that might affect your business process. Then, you can learn about the additional mechanisms that you can use to recover from the disruptive events that cannot be handled by SQL Database high availability architecture, such as:
- Temporal tables enable you to restore row versions from any point in time.
- Built-in automated backups and Point in Time Restore enables you to restore complete database to some point in time within the last 35 days.
- You can restore a deleted database to the point at which it was deleted if the SQL Database server has not been deleted.
- Long-term backup retention enables you to keep the backups up to 10 years.
- Active geo-replication enables you to create readable replicas and manually failover to any replica in case of a data center outage or application upgrade.
- Auto-failover group allows the application to automatically recovery in case of a data center outage.. The time required for application to fully recover is known as recovery time objective (RTO). You also need to understand the maximum period of recent data updates (time interval) the application can tolerate losing when recovering after the disruptive event. The time period of updates that you might afford to lose is known as recovery point objective (RPO).
The following table compares the ERT and RPO for each service tier for the most common scenarios.
Note
Manual database failover refers to failover of a single database to its geo-replicated secondary using the unplanned mode.
Recover a database to the existing server
SQL Database automatically performs a combination of full database backups weekly, differential database backups generally taken every 12 hours, and transaction log backups every 5 - 10 minutes to protect your business from data loss. The backups are stored in RA-GRS storage for 35 days for all service tiers except Basic DTU service tiers where the backups are stored for 7 days. For more information, see automatic database backups. You can restore an existing database form the automated backups to an earlier point in time as a new database on the same SQL Database server using the Azure portal, PowerShell, or the REST API. For more information, see Point-in-time restore.
If the maximum supported point-in-time restore (PITR) retention period is not sufficient for your application, you can extend it by configuring a long-term retention (LTR) policy for the database(s). For more information, see Long-term backup retention.
You can use these automatic database backups to recover a database from various disruptive events, both within your data center and to another data center. The recovery time is usually less than 12 hours. It may take longer to recover a very large or active database. Using automatic database backups, the estimated time of recovery depends on several factors including the total number of databases recovering in the same region at the same time, the database size, the transaction log size, and network bandwidth. For more information about recovery time, see database recovery time. When recovering to another data region, the potential data loss is limited to 1 hour with use of geo-redundant backups.
Use automated backups and point-in-time restore as your business continuity and recovery mechanism if your application:
- Is not considered mission critical.
- Doesn't have a binding SLA - a downtime of 24 hours or longer does not result in financial liability.
- Has a low rate of data change (low transactions per hour) and losing up to an hour of change is an acceptable data loss.
- Is cost sensitive.
If you need faster recovery, use active geo-replication or auto-failover groups. If you need to be able to recover data from a period older than 35 days, use Long-term retention.
Recover a database to another region. See the table earlier in this article for details of the auto-failover RTO and RPO.
Important
To use active geo-replication and auto-failover groups, you must either be the subscription owner or have administrative permissions in SQL Server. You can configure and fail over using the Azure portal, PowerShell, or the REST API using Azure subscription permissions or using Transact-SQL with SQL Server permissions..
When you take action, how long it takes you to recover, and how much data loss you incur depends upon how you decide to use these business continuity features following sections data center data center lstener, stand-alone databases and for elastic pools, see Design an application for cloud disaster recovery and Elastic pool disaster recovery strategies.
Feedback
Send feedback about: | https://docs.microsoft.com/en-us/azure/sql-database/sql-database-business-continuity | 2019-05-19T09:19:05 | CC-MAIN-2019-22 | 1558232254731.5 | [] | docs.microsoft.com |
System.
Net.
Namespace
Http
The System.Net.Http namespace provides a programming interface for modern HTTP applications.
Classes
Enums
Remarks
The System.Net.Http namespace is designed to provide the following:
HTTP client components that allow users to consume modern web services over HTTP.
HTTP components that can be used by both clients and servers (HTTP headers and messages, for example). This provides a consistent programming model on both the client and the server side for modern web services over HTTP.
The System.Net.Http namespace and the related System.Net.Http.Headers namespace provide the following set of components:
HttpClient - the primary class used to send and receive requests over HTTP.
HttpRequestMessage and HttpResponseMessage - HTTP messages as defined in RFC 2616 by the IETF.
HttpHeaders - HTTP headers as defined in RFC 2616 by the IETF.
HttpClientHandler - HTTP handlers responsible for producing HTTP response messages.
There are various HTTP message handles that can be used. These include the following.
DelegatingHandler - A class used to plug a handler into a handler chain.
HttpMessageHandler - A simple to class to derive from that supports the most common requirements for most applications.
HttpClientHandler - A class that operates at the bottom of the handler chain that actually handles the HTTP transport operations.
WebRequestHandler - A specialty class that operates at the bottom of the handler chain class that handles HTTP transport operations with options that are specific to the System.Net.HttpWebRequest object.
The contents of an HTTP message corresponds to the entity body defined in RFC 2616.
A number of classes can be used for HTTP content. These include the following.
ByteArrayContent - HTTP content based on a byte array.
FormUrlEncodedContent - HTTP content of name/value tuples encoded using application/x-www-form-urlencoded MIME type.
MultipartContent - HTTP content that gets serialized using the multipart/* content type specification.
MultipartFormDataContent - HTTP content encoded using the multipart/form-data MIME type.
StreamContent - HTTP content based on a stream.
StringContent - HTTP content based on a string.
If an app using the System.Net.Http and System.Net.Http.Headers namespaces intends to download large amounts of data (50 megabytes or more), then the app should stream those downloads and not use the default buffering. If the default buffering is used the client memory usage will get very large, potentially resulting in substantially reduced performance.
Classes in the System.Net.Http and System.Net.Http.Headers namespaces can be used to develop Windows Store apps or desktop apps. When used in a Windows Store app, classes in the System.Net.Http and System.Net.Http.Headers namespaces are affected by the network isolation feature, part of the application security model used by Windows 8. The appropriate network capabilities must be enabled in the app manifest for a Windows Store app for the system to allow network access by that app. For more information, see Network Isolation for Windows Store Apps.
Why Does a Connection Pool Overflow?
This article may contain URLs that were valid when originally published, but now link to sites or pages that no longer exist. To maintain the flow of the article, we've left these URLs in the text, but disabled the links.
The .NET Connection Pool Lifeguard
Prevent pool overflows that can drown your applications
William Vaughn
Most ADO.NET data providers use connection pooling to improve the performance of applications built around Microsoft's disconnected .NET architecture. An application opens a connection (or gets a connection handle from the pool), runs one or more queries, processes the rowset, and releases the connection back to the pool. Without connection pooling, these applications would spend a lot of additional time opening and closing connections.
When you use ADO.NET connection pooling to manage connections for your Web-based applications and client/server Web service applications, your customers will usually get faster connections and better overall performance. But what happens when your application or Web site is suddenly flooded with customers who all want to connect at the same time? Will your applications sink or swim? Like a lifeguard, you need to monitor your connection pools carefully to maintain good performance and to prevent your pools from overflowing. Let's explore the reasons a connection pool might overflow, then see how you can write code or use Windows Performance Monitor to monitor your pools.
As I discussed in "Swimming in the .NET Connection Pool," May 2003, InstantDoc ID 38356, you need to know about many scalability and performance details when you use connection pooling. Remember that you need to monitor and manage two essential factors: the number of connections each pool manages and the number of connection pools. In an efficient production system, typically the number of pools is low (1 to 10) and the total number of connections in use is also low (fewer than 12). An efficient query takes less than a second to complete and disconnect. So even if hundreds of customers are accessing your Web site simultaneously, relatively few connections can often handle the entire load. To make your applications run efficiently, you must keep connection resources under control and monitor your pools' status so that you'll have some warning before they overflow and your customers start to complain—or go elsewhere.
Email discussion group participants often complain about how applications seem to work in testing but fail in production. Sometimes they report that their applications stop or bog down when about 100 clients get connected. Remember that the default number of connections in a pool is 100. If you try to open more than 100 connections from a pool, ADO.NET queues your application's connection request until a connection is free. The application (and its users) sees this as a delay in getting to the Web page or as an application lock-up. Let's look at how this problem arises.
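The pool size itself is governed by connection-string keywords rather than code. A typical SqlClient connection string (server, database, and values here are placeholders) can declare the limits explicitly:

using System.Data.SqlClient;

class PoolSettings
{
    // Placeholder connection string; Max Pool Size defaults to 100 when omitted.
    static readonly string ConnectionString =
        "Data Source=myServer;Initial Catalog=myDatabase;Integrated Security=SSPI;" +
        "Min Pool Size=0;Max Pool Size=100;Pooling=true";

    static SqlConnection CreateConnection()
    {
        // Connections built from identical strings share the same pool.
        return new SqlConnection(ConnectionString);
    }
}

Because pooling is keyed on the exact connection string, keeping that string identical throughout the application is what lets a single pool serve it.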
In ADO.NET, the SqlClient .NET Data Provider gives you two techniques for opening and managing connections. First, when you need to manage the connection manually, you can use the DataReader object. With this method, your code constructs a SqlConnection object, sets the ConnectionString property, and uses the Open method to open a connection. After the code is finished with the DataReader, you close the SqlConnection before the SqlConnection object falls out of scope. To process the rowset, you can pass the DataReader to another routine in your application, but you still need to make sure that the DataReader and its connection are closed. If you don't close the SqlConnection, your code "leaks" a connection with each operation, so the pool accumulates connections and eventually overflows. Unlike in ADO and Visual Basic (VB) 6.0, the .NET garbage collector won't close the SqlConnection and clean up for you. Listing 1, which I walk through later, shows how I opened a connection and generated a DataReader to return the rowset from a simple query to stress the connection pool.
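In outline, that open-query-close pattern looks something like the following C# sketch (the connection string and query are placeholders, and the original listing may differ in detail); the essential point is that the SqlConnection is closed, here via using blocks, before it falls out of scope:

using System;
using System.Data.SqlClient;

class ReaderExample
{
    static void PrintRows(string connectionString)
    {
        // The using blocks guarantee Close/Dispose even when an exception is thrown,
        // so the connection is always returned to the pool instead of being orphaned.
        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand("SELECT Name FROM Items", connection))
        {
            connection.Open();
            using (SqlDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine(reader.GetString(0));
                }
            }
        } // Dispose closes the connection here, returning it to the pool.
    }
}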
You can also run into problems when you use the DataAdapter object. The DataAdapter Fill and Update methods automatically open the DataAdapter object's connection and close it after the data I/O operation is complete. However, if the connection is already open when the Fill or Update method is executed, ADO.NET doesn't close the SqlConnection after the method completes. This is another opportunity to leak a connection.
In addition, you can also use COM-based ADO to create a connection from a .NET application. ADO pools these connections in the same way that ADO.NET does but doesn't give you a way to monitor the pool from your application as you can when you use the SqlClient ADO.NET Data Provider.
Indicting the DataReader
Orphaned connections and overflowing pools are serious problems, and judging by the number of newsgroup discussions about them, they're fairly common. The most likely culprit is the DataReader. To test the behavior of the DataReader, I wrote a sample Windows Forms (WinForms) application concentrating on the CommandBehavior.CloseConnection option. (You can download this application by entering InstantDoc ID 39031 at.) You can set this option when you use the SqlCommand object's ExecuteReader method to execute the query and return a DataReader. My test application shows that even when you use this option, if you don't explicitly close the DataReader (or SqlConnection), the pool overflows. The application then throws an exception when the code requests more connections than the pool will hold.
Some developers insist that if you set the CommandBehavior.CloseConnection option, the DataReader and its associated connection close automatically when the DataReader finishes reading the data. Those developers are partially right only if you've set the CommandBehavior.CloseConnection option.
If you execute a query by using another Execute method (e.g., ExecuteScalar, ExecuteNonQuery, ExecuteXMLReader), you are responsible for opening the SqlConnection object and, more importantly, closing it when the query finishes. If you miss a close, orphaned connections quickly accumulate.
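A minimal sketch of that responsibility with ExecuteScalar might look like this (placeholder connection string and query); skip the close, and the connection is orphaned exactly as described:

using System.Data.SqlClient;

class ScalarExample
{
    static int CountRows(string connectionString)
    {
        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand("SELECT COUNT(*) FROM Items", connection))
        {
            // The caller is responsible for opening the connection...
            connection.Open();

            // ...and the using block closes it when the query finishes,
            // returning the connection to the pool.
            return (int)command.ExecuteScalar();
        }
    }
}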
Monitoring the Number of Connections
To test for orphaned connections and overflowing connection pools, I wrote a sample Web-form application. This application uses the same techniques you would typically use to return data from a query. (You can download a WinForms version of this code at.).
Option for how to apply a force using Rigidbody2D.AddForce.
Use this to apply a certain type of force to a 2D RigidBody. There are two types of forces to apply: Force mode and Impulse Mode. For a 3D Rigidbody see ForceMode.
//This script adds force to a Rigidbody. The kind of force is determined by which buttons you click.

//Create a Sprite and attach a Rigidbody2D component to it
//Attach this script to the Sprite

using UnityEngine;
using UnityEngine.EventSystems;

public class AddingForce : MonoBehaviour
{
    //Use to switch between Force Modes
    enum ModeSwitching { Start, Impulse, Force };
    ModeSwitching m_ModeSwitching;

    //Use this to change the different kinds of force
    ForceMode2D m_ForceMode;
    //Start position of the RigidBody, use to reset
    Vector2 m_StartPosition;

    //Use to apply force to RigidBody
    Vector2 m_NewForce;

    //Use to manipulate the RigidBody of a GameObject
    Rigidbody2D m_Rigidbody;

    void Start()
    {
        //Fetch the RigidBody component attached to the GameObject
        m_Rigidbody = GetComponent<Rigidbody2D>();
        //Start at first mode (nothing happening yet)
        m_ModeSwitching = ModeSwitching.Start;

        //Initialising the force to use on the RigidBody in various ways
        m_NewForce = new Vector2(-5.0f, 1.0f);

        //This is the RigidBody's starting position
        m_StartPosition = m_Rigidbody.transform.position;
    }

    void Update()
    {
        //Switching modes depending on button presses
        switch (m_ModeSwitching)
        {
            //This is the starting mode which resets the GameObject
            case ModeSwitching.Start:
                //Reset to starting position of RigidBody
                m_Rigidbody.transform.position = m_StartPosition;
                //Reset the velocity of the RigidBody
                m_Rigidbody.velocity = new Vector2(0f, 0f);
                break;

            //This is the Force Mode
            case ModeSwitching.Force:
                //Make the GameObject travel upwards
                m_NewForce = new Vector2(0, 1.0f);
                //Use Force mode as force on the RigidBody
                m_Rigidbody.AddForce(m_NewForce, ForceMode2D.Force);
                break;

            //This is the Impulse Mode
            case ModeSwitching.Impulse:
                //Make the GameObject travel upwards
                m_NewForce = new Vector2(0f, 1.0f);
                //Use Impulse mode as a force on the RigidBody
                m_Rigidbody.AddForce(m_NewForce, ForceMode2D.Impulse);
                break;
        }
    }

    //These are the Buttons for telling what Force to apply as well as resetting
    void OnGUI()
    {
        //If reset button pressed
        if (GUI.Button(new Rect(100, 0, 150, 30), "Reset"))
        {
            //Switch to start/reset case
            m_ModeSwitching = ModeSwitching.Start;
        }

        //Impulse button pressed
        if (GUI.Button(new Rect(100, 60, 150, 30), "Apply Impulse"))
        {
            //Switch to Impulse mode (apply impulse forces to GameObject)
            m_ModeSwitching = ModeSwitching.Impulse;
        }

        //Force Button Pressed
        if (GUI.Button(new Rect(100, 90, 150, 30), "Apply Force"))
        {
            //Switch to Force mode (apply force to GameObject)
            m_ModeSwitching = ModeSwitching.Force;
        }
    }
}
Pretenders: Configurable Fake Servers
Pretenders are Mocks for network applications. They are mainly designed to be used in system integration tests or manual tests where there is a need to simulate the behaviour of third party software that is not necessarily under your control.
Installation
Simply type:
$ pip install pretenders
If you want to run the UI, install like this, so that extra dependencies for the frontend are included:
$ pip install pretenders[ui]
Example usage
Start the server to listen on all network interfaces:
$ python -m pretenders.server.server --host 0.0.0.0 --port 8000
If you prefer, you can run the pretenders server in docker:
docker run -d --name pretenders -p 8000:8000 pretenders/pretenders:1.4
HTTP mock in a test case
Sample HTTP mocking test case:
from pretenders.client.http import HTTPMock
from pretenders.common.constants import FOREVER

# Assume a running server
# Initialise the mock client and clear all responses
mock = HTTPMock('localhost', 8000)

# For GET requests to /hello reply with a body of 'Hello'
mock.when('GET /hello').reply('Hello', times=FOREVER)

# For the next three POST or PUT to /somewhere, simulate a BAD REQUEST status code
mock.when('(POST|PUT) /somewhere').reply(status=400, times=3)

# For the next request (any method, any URL) respond with some JSON data
mock.reply('{"temperature": 23}', headers={'Content-Type': 'application/json'})

# For the next GET request to /long take 100 seconds to respond.
mock.when('GET /long').reply('', after=100)

# If you need to reply different data depending on request body
# Regular expression to match certain body could be provided
mock.when('POST /body_requests', body='1.*').reply('First answer', times=FOREVER)
mock.when('POST /body_requests', body='2.*').reply('Second answer', times=FOREVER)

# Your code is exercised here, after setting up the mock URL
myapp.settings.FOO_ROOT_URL = mock.pretend_url
...

# Verify requests your code made
r = mock.get_request(0)
assert_equal(r.method, 'GET')
assert_equal(r.url, '/weather?city=barcelona')
HTTP mocking for remote application
Sometimes it is not possible to alter the settings of a running remote application on the fly. In such circumstances you need to have a predetermined url to reach the http mock on so that you can configure correctly ahead of time.
Let’s pretend we have a web app that on a page refresh gets data from an external site. We might write some tests like:
from pretenders.client.http import HTTPMock
from pretenders.constants import FOREVER

mock = HTTPMock('my.local.server', 9000, timeout=20, name="third_party")


def setup_normal():
    mock.reset()
    mock.when("GET /important_data").reply(
        '{"account": "10000", "outstanding": "10.00"}',
        status=200, times=FOREVER)


def setup_error():
    mock.reset()
    mock.when("GET /important_data").reply('ERROR', status=500, times=FOREVER)


@with_setup(setup_normal)
def test_shows_account_information_correctly():
    # Get the webpage
    ...
    # Check that the page shows things correctly as we expect.
    ...


@with_setup(setup_error)
def test_application_handles_error_from_service():
    # Get the webpage
    ...
    # Check that the page gracefully handles the error that has happened
    # in the background.
    ...
If you have a test set like the one above, you know in advance that your app needs to be configured to point to http://my.local.server:9000/mockhttp/third_party instead of the actual third party's website.
SMTP mock in a test case
Sample SMTP mocking test case:
# Create a mock smtp service
smtp_mock = SMTPMock('localhost', 8000)

# Get the port number that this is faking on and
# assign as appropriate to config files that the system being tested uses
settings.SMTP_HOST = "localhost:{0}".format(smtp_mock.pretend_port)

# ...run functionality that should cause an email to be sent

# Check that an email was sent
email_message = smtp_mock.get_email(0)
assert_equals(email_message['Subject'], "Thank you for your order")
assert_equals(email_message['From'], "[email protected]")
assert_equals(email_message['To'], "[email protected]")
assert_true("Your order will be with you" in email_message.content)
Source code
Sources can be found at https://github.com/pretenders/pretenders
Contributions are welcome. | https://pretenders.readthedocs.io/en/latest/ | 2019-05-19T09:14:38 | CC-MAIN-2019-22 | 1558232254731.5 | [] | pretenders.readthedocs.io |
Installing and Upgrading
Hardware from the pfSense Store is pre-loaded with pfSense software. To reinstall pfSense software or to install it to other hardware, download an installer image as described in this chapter.
Warning
Hardware pre-loaded with pfSense software from commercial vendors other than the pfSense Store or authorized partners must not be trusted. Third parties may have made unauthorized, unknown alterations or additions to the software. Selling pre-loaded copies of pfSense software is a violation of the Trademark Usage Guidelines.
If pfSense software was pre-loaded on third party hardware by a vendor, wipe the system and reinstall it with a genuine copy.
If something goes wrong during the installation process, see Installation Troubleshooting.
This chapter also covers upgrading pfSense software installations (Upgrading an Existing Installation) which keeps them up-to-date with the latest security, bug fixes, and new features. | http://docs.netgate.com/pfsense/en/latest/book/install/index.html | 2019-05-19T08:25:40 | CC-MAIN-2019-22 | 1558232254731.5 | [] | docs.netgate.com |
Collecting system level diagnostics
Steps to collect system-wide performance information about a cluster using DSE.
dse.yaml: The location of the dse.yaml file depends on the type of installation.
Procedure
To collect system level data:
- Edit the dse.yaml file.
- In the dse.yaml file, set the enabled option for cql_system_info_options to true.
App services are deprecated and marked for removal. Use a published web service or a published REST service instead.
1 Introduction
In this how-to, you will create an app in which you keep track of the inventory of a shop. This app makes the inventory available for use in other Mendix apps via an app service.
2 Creating the Domain Model
The domain model defines the products that you want to save and how you want to expose them.
To create the domain model, follow these steps:
- In the domain model, create a persistable entity called Product with two attributes: Name (String) and Stock (Integer).
Create a non-persistable entity called PublishedProduct with the same attributes.
4 Creating Pages
To create pages that allows users to create, edit, and delete products, follow these steps:
- Add a new page called EditUser.
- Add a data view.
- From the Connector, drag the Product entity onto the yellow [Unknown] bar.
On the dialog box that appears, click OK.
Open the Homepage and add a data grid.
- From the Connector, drag the Product entity onto the yellow [Unknown] bar.
On the dialog box that appears, click OK.
From the Project Explorer, drag the EditUser page onto the New button.
From the Project Explorer, drag the EditUser page onto the Edit [default] button.
5 Creating a Microflow
We will now create a microflow that retrieves all the products from the database and converts them to published products.
To create this microflow, follow these steps:
- Add a microflow called PublishProducts.
- From the Toolbox, drag a Retrieve activity onto the microflow.
- Double-click the Retrieve activity and select From database.
- For Entity, select Product, then click OK.
- From the Toolbox, drag a Create list activity onto the microflow.
- Double-click the Create list activity.
- For Entity, select PublishedProduct, then click OK.
- From the Toolbox (under Other), drag a Loop activity onto the microflow.
- Double-click the loop and for Iterate over, select ProductList, then click OK.
- From the Toolbox, drag a Create activity onto the loop.
- Double-click the Create activity and for Entity, select PublishedProduct.
- Click New, and for Member, select Name.
- For Value, enter $IteratorProduct/Name, then click OK.
- Click New, and for Member, choose Stock.
- For Value, enter $IteratorProduct/Stock, then click OK.
- From the Toolbox, drag a Change list activity onto the loop.
- Connect the Create PublishedProduct activity to the Add to list activity.
- Double-click the Add to list activity and for Variable name, select PublishedProductList.
- For Value, enter $NewPublishedProduct, then click OK.
- Double-click the red end event and for Type, select List.
- For Entity, select PublishedProduct.
For Return value, enter $PublishedProductList, then click OK.
6 Creating an App Service
You will use the microflow to create an app service that exposes the products to other apps. To accomplish this, follow these steps:
- In Project Explorer, right-click a module and choose Add > Published services > Published app service.
- For Name, enter Shop.
- Click Create version.
- Go to the Actions tab and click New.
- For Name, enter Products.
- For Microflow, select PublishProducts, then click OK.
- Go to the Settings tab and for Authentication, select Username and password.
- Go to the General tab and for Status, select Consumable, then click OK.
- A dialog box will ask whether you want to make this version available. Click OK.
7 Securing the App
Before you publish our app, you need to make sure it is protected with a username and password. To accomplish this, follow these steps:
- In Project explorer, double-click Project > Security.
- For Security level, select Prototype/demo.
- Click Edit module security.
- Go to the Page access tab and check all the check boxes.
- Go to the Microflow access tab and check all the check boxes, then click OK.
- Go to the Administrator tab.
- Type a password, and then click OK.
8 Publishing the App Service
You can now go ahead and deploy the app. This will publish your app service. | https://docs.mendix.com/howto/integration/publish-data-to-other-mendix-apps-using-an-app-service | 2019-05-19T08:24:53 | CC-MAIN-2019-22 | 1558232254731.5 | [] | docs.mendix.com |
Restoring from Backups
Backups are not useful without a means to restore them, and by extension, test them. pfSense offers several means for restoring configurations. Some are more involved than others, but each will have the same end result: a running system identical to when the backup was made.
Restoring with the WebGUI
The easiest way for most users to restore a configuration is by using the WebGUI:
- Navigate to Diagnostics > Backup & Restore
- Locate the Restore configuration section (Figure WebGUI Restore).
- Select the area to restore (typically ALL)
- Click Browse
- Locate the backup file on the local PC
- Click Restore Configuration
The configuration will be applied, and the firewall will reboot with the settings obtained from the backup file.
WebGUI Restore.
Restoring from the Config History
For minor problems, using one of the internal backups on the pfSense firewall is the easiest way to back out a change. On full installations, the previous 30 configurations are stored in the Configuration History, along with the current running configuration. On NanoBSD, 5 configurations are stored. Each row shows the date that the configuration file was made, the configuration version, the user and IP address of a person making a change in the GUI, the page that made the change, and in some cases, a brief description of the change that was made. The action buttons to the right of each row will show a description of what they do when the mouse pointer is hovered over the button.
To restore a configuration from the history:
- Navigate to Diagnostics > Backup & Restore
- Click the Config History tab (Figure Configuration History).
- Locate the desired backup in the list
- Click the restore icon to restore that configuration file
Configuration History
The configuration will be restored, but a reboot is not automatic where required. Minor changes do not require a reboot, though reverting some major changes will.
If a change was only made in one specific section, such as firewall rules, trigger a refresh in that area of the GUI to enable the changes. For firewall rules, a filter reload would be sufficient. For OpenVPN, editing and saving the VPN instance would be enough. The necessary actions to take depend on what changed in the config, but the best way to ensure that the full configuration is active would be a reboot. If necessary, reboot the firewall with the new configuration by going to Diagnostics > Reboot System and clicking Yes.
Previously saved configurations may be deleted by clicking the delete icon, but do not delete them by hand to save space; the old configuration backups are automatically deleted when new ones are created. It is desirable to remove a backup from a known-bad configuration change to ensure that it is not accidentally restored.
A copy of the previous configuration may be downloaded by clicking the download icon.
Config History Settings
The amount of backups stored in the configuration history may be changed if needed.
- Navigate to Diagnostics > Backup & Restore
- Click the Config History tab
- Click the icon at the right end of the Saved Configurations bar to expand the settings.
- Enter the new number of configurations to retain
- Click Save
Along with the configuration count, the amount of space consumed by the current backups is also displayed.
Config History Diff
The differences between any two configuration files may be viewed in the Config History tab. To the left of the configuration file list there are two columns of radio buttons. Use the leftmost column to select the older of the two configuration files, and then use the right column to select the newer of the two files. Once both files have been selected, click Diff at either the top or bottom of the column.
Console Configuration History
The configuration history is also available from the console menu as option 15, Restore Recent Configuration. The menu selection will list recent configuration files and allow them to be restored. This is useful if a recent change has locked administrators out of the GUI or taken the system off the network.
Restoring by Mounting the Disk
This method is popular with embedded users. When the CF or disk from the pfSense firewall is attached to a computer running FreeBSD, the drive may be mounted and a new configuration may be copied directly onto the installed system, or a config from a failed system may be copied off.
Note
This can also be performed on a separate pfSense firewall in place of a computer running FreeBSD, but do not use an active production firewall for this purpose. Instead, use a spare or test firewall.
The config.xml file is kept in /cf/conf/ for both NanoBSD and full installs, but the difference is in the location where this directory resides. For NanoBSD installs, this is on a separate slice, such as ad0s3 if the drive is ad0. Thanks to GEOM (modular storage framework) labels on recent versions of FreeBSD and in use on NanoBSD-based embedded filesystems, this slice may also be accessed regardless of the device name by using the label /dev/ufs/cf. For full installs, it is part of the root slice (typically ad0s1a). The drive names will vary depending on type and position in the system.
NanoBSD Example
First, connect the CF to a USB card reader on a FreeBSD system or another inactive pfSense system (see the note in the previous section). For most, it will show up as da0. Console messages will also be printed reflecting the device name, and the newly available GEOM labels.
Now mount the config partition:
# mount -t ufs /dev/ufs/cf /mnt
If for some reason the GEOM labels are not usable, use the device directly such as /dev/da0s3.
Now, copy a config onto the card:
# cp /usr/backups/pfSense/config-alix.example.com-20090606185703.xml /mnt/conf/config.xml
Then be sure to unmount the config partition:
# umount /mnt
Unplug the card, reinsert it into the firewall, and turn it on again. The firewall will now be running with the previous configuration.
To copy the configuration from the card, the process is the same but the arguments to the cp command are reversed.