Texture Tools
The following buttons are available from the SU tab on the Designer Menu panel.
Smoothing Group
Used for assigning numbers to faces. Faces with the same numbers and connected by an edge are rendered smoothly. A seam will be displayed between two faces with different smoothing group IDs. The following functions are available:
Parameters
UV Mapping
Materials can be assigned to each face differently and you can manipulate the UV coordinates using this tool.
Parameters
Sets the type of argument values.
Sets the minimum distance between two neighboring major ticks in pixels.
Specifies the order of categories on an axis of the discrete type.
Specifies whether ticks and grid lines should cross axis labels or lie between them. Applies only to the axes of the "discrete" type.
Specifies whether grid lines are visible.
Sets a value that specifies chart elements to be highlighted when a user points to an axis label.
Inverts the axis.
Provides access to the axis label settings.
Specifies the value to be raised to a power when generating ticks for an axis of the logarithmic type.
Coupled with the BootstrapChartArgumentAxisBuilder.MinValue method, focuses the widget on a specific chart segment. Applies only to the axes of the "continuous" and "logarithmic" type.
Coupled with the BootstrapChartArgumentAxisBuilder.MaxValue option, focuses the widget on a specific chart segment. Applies only to the axes of the "continuous" and "logarithmic" type.
Specifies the position in which the argument axis is displayed.
Specifies whether the axis line is visible.
SWbemSink object
The SWbemSink object is implemented by client applications to receive the results of asynchronous operations and event notifications. To make an asynchronous call, you must create an instance of an SWbemSink object and pass it as the ObjWbemSink parameter. The events in your implementation of SWbemSink are triggered when status or results are returned, or when the call is complete. The VBScript CreateObject call creates this object.
The SWbemSink object has these types of members:
Methods
The SWbemSink object has these methods.
Remarks
An asynchronous callback allows a non-authenticated user to provide data to the sink. This poses security risks to your scripts and applications. To eliminate the risks, use either semisynchronous communication or synchronous communication. For more information, see Calling a Method.
Events
You can implement subroutines to be called when events are triggered. For example, if you want to process each object that is returned by an asynchronous query call such as SWbemServices.ExecQueryAsync, create a subroutine using the sink that is specified in the asynchronous call, as shown in the following example.
Sub SinkName_OnObjectReady(objObject, objAsyncContext)
Use the following table as a reference to identify events and trigger descriptions.
Asynchronously Retrieving Event Log Statistics
WMI supports both asynchronous and semi-synchronous scripts. When retrieving events from the event logs, asynchronous scripts often retrieve this data much faster.
In an asynchronous script, a query is issued and control is immediately returned to the script. The query continues to process on a separate thread while the script begins to immediately act on the information that is returned. Asynchronous scripts are event driven: each time an event record is retrieved, the OnObjectReady event is fired. When the query has completed, the OnCompleted event will fire, and the script can continue based on the fact that all the available records have been returned.
In a semi-synchronous script, by contrast, a query is issued and the script then queues a large amount of retrieved information before acting upon it. For many objects, semi-synchronous processing is adequate; for example, when querying a disk drive for its properties, there might be only a split second between the time the query is issued and the time the information is returned and acted upon. This is due in large part to the fact that the amount of information returned is relatively small.
When querying an event log, however, the interval between the time the query is issued and the time that a semi-synchronous script can finish returning and acting on the information can take hours. On top of that, the script might run out of memory and fail on its own before completing.
For event logs with a large number of records, the difference in processing time can be considerable. On a Windows 2000-based test computer with 2,000 records in the event log, a semi-synchronous query that retrieved all the events and displayed them in a command window took 10 minutes 45 seconds. An asynchronous query that performed the same operation took one minute 54 seconds.
Examples
The following VBScript asynchronously queries the event logs for all records.
Const POPUP_DURATION = 10
Const OK_BUTTON = 0
Set objWSHShell = Wscript.CreateObject("Wscript.Shell")
strComputer = "."
Set objWMIService = GetObject("winmgmts:" _
    & "{impersonationLevel=impersonate}!\\" & strComputer & "\root\cimv2")
Set objSink = WScript.CreateObject("WbemScripting.SWbemSink","SINK_")
objWMIService.InstancesOfAsync objSink, "Win32_NTLogEvent"
errReturn = objWshShell.Popup("Retrieving events", POPUP_DURATION, _
    "Event Retrieval", OK_BUTTON)

Sub SINK_OnCompleted(iHResult, objErrorObject, objAsyncContext)
    WScript.Echo "Asynchronous operation is done."
End Sub

Sub SINK_OnObjectReady(objEvent, objAsyncContext)
End Sub
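The same asynchronous pattern can be driven from any COM-capable language. Below is a minimal sketch in Python using the pywin32 package; it is an illustration only, and it queries the small Win32_Process class as a placeholder rather than the event log used above. The sink event names (OnObjectReady, OnCompleted) are the SWbemSink events described earlier; everything else (the flag variable, the property printed) is arbitrary.

# Minimal sketch: receiving SWbemSink events from Python (requires pywin32; Windows only).
import pythoncom
import win32com.client

finished = False

class SinkEvents:
    # Method names correspond to the SWbemSink events.
    def OnObjectReady(self, objObject, objAsyncContext):
        # Fired once per object returned by the asynchronous call.
        print(objObject.Name)

    def OnCompleted(self, hResult, objErrorObject, objAsyncContext):
        # Fired when the asynchronous operation is done.
        global finished
        finished = True

services = win32com.client.GetObject(
    r"winmgmts:{impersonationLevel=impersonate}!\\.\root\cimv2")
sink = win32com.client.DispatchWithEvents("WbemScripting.SWbemSink", SinkEvents)
services.InstancesOfAsync(sink, "Win32_Process")

while not finished:
    pythoncom.PumpWaitingMessages()   # let COM deliver the sink events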
Android openHAB App
We provide a native Android app for openHAB. It uses the REST API of openHAB to render sitemaps of your openHAB installation. It also supports myopenhab.org including push notifications. The latest release version of the app is always available through Google Play.
Features:
- View openHAB sitemaps
- Control openHAB remotely
- Multiple themes available
- Push notifications
- Voice commands
- Thing discovery via app
- Support for Username/password or SSL client authentication
- Selection of a default sitemap
Screenshots:
Getting Started:
When first installed, the app is in “Demo Mode”. To connect it to your own openHAB server, first navigate to Settings and uncheck the “Demo Mode” option. Normally, after unchecking the Demo Mode, the app will be able to use multicast DNS to autodetect your openHAB server if it is on the same network.
You also have the option to manually set the server URL in the settings. Please enter the base URL to your openHAB server as you would enter it in the browser to reach the openHAB dashboard. The URL might look like one of the following examples.
- IP address:
- Local DNS name:(depending on your network)
Once the URL is set correctly, the display of the app will be determined by the sitemaps defined on your server.
The option to set a “Remote URL” allows the app to be used when you are away from home. There are a number of strategies available to provide secure remote access to your openHAB server.
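Because the app is only a client of the openHAB REST API, you can check from any machine what it will be able to render. The sketch below is a minimal illustration in Python using the requests library; the /rest/sitemaps endpoint is part of the standard openHAB REST API, while the base URL and credentials are placeholders for your own setup.

# Minimal sketch: list the sitemaps the app would render.
import requests

BASE_URL = "http://openhab.local:8080"   # your server URL or "Remote URL"
AUTH = ("username", "password")          # only needed if authentication is enabled

response = requests.get(BASE_URL + "/rest/sitemaps", auth=AUTH, timeout=10)
response.raise_for_status()

for sitemap in response.json():
    print(sitemap["name"], "->", sitemap["link"])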
Help and technical details:
Please refer to the openhab/android project on GitHub for more details.
Positioning an Element Using the Transform Tool.
—see Positioning an Element Using the Advanced Animation Tools.
The pivot point appears in the Camera view.
You can display a rotation handle on the bounding box when transforming a layer. In the Preferences dialog box, select the Camera tab and then select the Use Rotation Lever with Transformation Tools option. This preference is off by default.
Transform Tool Properties
The Lasso and Marquee options let you choose the type of selection the current tool will perform. The default selection mode is Marquee.
Hold down the Alt key to switch to the opposite mode of your selection.
When transforming or repositioning a layer using the Transform tool, you can enable different snap options to help you.
The Hide Manipulator Controls button lets you hide the bounding box and manipulator controls from the Camera view when an element is selected.
The Flip Horizontal and Flip Vertical buttons let you flip the selected element horizontally or vertically. You can also select Animation > Flip > Flip Horizontal and Flip Vertical from the top menu or press 4 or 5.
By default, when you draw a selection box in the Camera view, the Select tool will select only the drawing strokes of the current drawing. If you prefer the Select tool to select all the strokes on all layers, deselect the Works on Single Drawing button.
The Rotate 90 Degrees CW and Rotate 90 Degrees CCW operations rotate the current selection 90 degrees clockwise or counter-clockwise.
The Width and Height fields allow you to enter specific values for accurately resizing a selected layer.
Use the Offset X and Offset Y fields to enter specific values to reposition the selected layer accurately.
3.0 Release Notes
3.0.2
See the bugs fixed in 3.0.2.
3.0.1
See the bugs fixed in 3.0.1.
3.0.0
Manifest List V2 schema version 2 is now supported. It can be synced into Pulp from a Docker registry, published, and served by Crane as normal. As a result, a new unit type, ManifestList, was introduced in 3.0. Support for image manifest V2 schema version 1 and schema version 2 did not change.
The publish directory structure for manifests has been changed to manage manifest lists more effectively. Now each manifest schema version has its own directory: manifests/1, manifests/2, and manifests/list. This change will not affect already published content; it will take place with the next publish action.
A new redirect file format has been introduced to enable Crane to serve both schema versions.
The existing command docker repo tag now accepts the --digest option instead of --manifest-digest.
Data Protection and Recovery Agents
A single platform for automated global protection, retention, and recovery
SnapProtect products go beyond better backup and recovery to help you improve your workforce productivity and business efficiency.
A comprehensive data protection and management strategy offers seamless and efficient backup, archiving, storage, and recovery of data in your enterprise from any operating system, database, and application. Detailed information about SnapProtect data protection products is available in the following categories:
Custom branding of payment pages
GOV.UK Pay supports custom branding on payment pages.
You can contact the GOV.UK Pay team to request customisation for the following features:
- the logo displayed in the left-hand side of the top banner
- the background and border colour of the top banner
- your payment confirmation email from GOV.UK Notify
Banner logo
You should provide an image of your desired custom banner logo to GOV.UK Pay directly. Your image should be:
- in PNG or SVG format
- at least 2x the size of the image that will be displayed on-screen
- cropped to leave minimal whitespace around the logo
- compressed (optimised for web)
Banner background and border colour
You can make a request for custom colours to GOV.UK Pay directly.
Colours are customised using hexadecimal representations.
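For example, you might request a banner background of #29703c with a slightly darker border such as #1d5029; these particular values are only illustrations, not GOV.UK Pay defaults.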
Wwise Controls Pane
The controls in the Wwise Controls pane are middleware-specific.
To filter displayed controls
In the Audio Controls Editor, for Wwise Controls, enter your search term into the Search bar.
To hide controls that are already assigned
Select Hide Assigned. The unassigned controls appear in orange text.
To create connections between ATL controls and middleware-specific controls
In the Wwise Controls pane, select and drag a control to the Connected Controls area of the Inspector pane.
To create a control
In the Wwise Controls pane, select and drag a middleware control to the ATL Controls pane.
This creates a new control, which shares the same name as the middleware control. The middleware control and the ATL control are also automatically connected.
To preview the control, choose File, Save All.
New-NetIPAddress
Syntax
New-NetIPAddress [-IPAddress] <String> -InterfaceAlias <String> [-DefaultGateway <String>] [-AddressFamily <AddressFamily>] [-Type <Type>] [-PrefixLength <Byte>] [-ValidLifetime <TimeSpan>] [-PreferredLifetime <TimeSpan>] [-SkipAsSource <Boolean>] [-PolicyStore <String>] [-CimSession <CimSession[]>] [-ThrottleLimit <Int32>] [-AsJob] [-WhatIf] [-Confirm] [<CommonParameters>]
Description
The New-NetIPAddress cmdlet creates and configures an IP address. To create a specific IP address object, specify either an IPv4 address or an IPv6 address, and an interface index or interface alias. We recommend that you define the prefix length, also known as a subnet mask, and a default gateway.
If you run this cmdlet to add an IP address to an interface on which DHCP is already enabled, then DHCP is automatically disabled. If Duplicate Address Detection (DAD) is enabled on the interface, the new IP address is not usable until DAD successfully finishes, which confirms the uniqueness of the IP address on the link.
Examples
Example 1: Add an IPv4 address
PS C:\>New-NetIPAddress -InterfaceIndex 12 -IPAddress 192.168.0.1 -PrefixLength 24 -DefaultGateway 192.168.0.5
The second command removes the IPv4 address. To remove the IPv4 address, use the Remove-NetIPAddress cmdlet.
PS C:\>Remove-NetIPAddress -IPAddress 192.168.0.1 -DefaultGateway 192.168.0.5
The first command adds a new IPv4 address to the network interface at index 12. The PrefixLength parameter specifies the subnet mask for the IP address. In this example, the PrefixLength of 24 equals a subnet mask of 255.255.255.0. When you add an IPv4 address, the address specified for the Default Gateway must be in the same subnet as the IPv4 address that you add.
Required Parameters
Specifies the IPv4 or IPv6 address to create.
Specifies an alias of a network interface. The cmdlet creates an IP address for the alias.
Specifies an index of a network interface. The cmdlet creates an IP address for the index.
Optional Parameters
Specifies an IP address family. The cmdlet creates an IP address for the family. If you do not specify this parameter, the property is automatically generated. The acceptable values for this parameter are:
- IPv4
- IPv6
Specifies the IPv4 address or IPv6 address of the default gateway for the host. Default gateways provide a default route for TCP/IP hosts to use when communicating with other hosts on remote networks.
Specifies a preferred lifetime, as a TimeSpan object, for an IP address. To obtain a TimeSpan object, use the New-TimeSpan cmdlet.
Specifies a prefix length. This parameter defines the local subnet size, and is also known as a subnet mask.
Indicates whether an address is a primary IP address. This parameter identifies the primary IP address for outgoing traffic in a multiple IP address scenario. If this parameter is set to True, the addresses are not used for outgoing traffic and are not registered in DNS.
Specifies an IP address type. The acceptable values for this parameter are:
-- Unicast
-- Anycast
The default value is Unicast.
Specifies a valid lifetime value, as a TimeSpan object, for an IP address. To obtain a TimeSpan object, use the New-TimeSpan cmdlet.
Shows what would happen if the cmdlet runs. The cmdlet is not run.
Inputs
None
Outputs
Extending Aegir
This section shows how to modify Aegir to suit your unique use case.
Aegir is designed to be easily extendable by developers. As it is made with Drupal and Drush, it is made of the hooks and commands you know and love. If you are a user or admin looking to deploy contrib modules, you should perhaps start with the user documentation instead.
- Use the powers of Drupal to extend the content types and views.
Create Volume from Snapshot
You can create a volume from a snapshot. You can create bootable volumes to create new virtual machine instances that can use such a volume as the primary disk. The time taken to create the volume depends on the size of the snapshot.
You must be a self-service user or an administrator to perform this operation.
To create a volume from a snapshot, follow the steps given below.
- Log in to Clarity.
- Click Volumes and Snapshots in the left panel.
- Click Create New Volume at the top right corner.
- Select the Snapshot option in Create From.
- Select the option for the required snapshot from the list of snapshots.
- Click Next.
Enter the following details for the volume.
- Select the Bootable check box to create a bootable volume.
- Click Create. The new volume is created from the snapshot.
Tip: Refresh the page if you are unable to see the new volume.
The volume can be attached to an instance. You can take a snapshot of the volume or upload the volume as an image.
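Clarity is a front end to Platform9's OpenStack-based platform, so the same operation can also be scripted. The sketch below uses the openstacksdk Python library; the cloud entry, snapshot name, and volume name are placeholders, and the exact calls available may vary with your SDK version.

# Minimal sketch: create a volume from a snapshot with openstacksdk.
import openstack

conn = openstack.connect(cloud="platform9")   # entry from your clouds.yaml

snapshot = conn.block_storage.find_snapshot("my-snapshot")
volume = conn.block_storage.create_volume(
    name="volume-from-snapshot",
    size=snapshot.size,        # GiB; must be at least the snapshot size
    snapshot_id=snapshot.id,
)
volume = conn.block_storage.wait_for_status(volume, status="available")
print(volume.id, volume.status)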
How to run a local project in live configuration
The Local, Test and Live server environments are as identical as possible, to help guarantee that your applications will run in just the same way across all of them.
However, there are a few differences. See Default project conditions for the default configuration in each environment.
Occasionally, you may wish to run the local server in a configuration closer to the live set-up. A few steps are needed to achieve this.
Turn off Django DEBUG mode
Set a couple of environment variables in the file .env-local:
DEBUG=False
STAGE=live
Collect static files
Gather static files to be served, using collectstatic. Run:
docker-compose run --rm web python manage.py collectstatic
Run the migrate command
Run:
docker-compose run --rm web start migrate
This runs the commands listed in the MIGRATION_COMMANDS setting, populated by applications using the addons framework, that are executed in Cloud deployments.
Use the production web server
Use the production web server (using uWSGI, and serving static files) rather than the Django runserver. In the docker-compose.yml file, change:
command: python manage.py runserver 0.0.0.0:80
to:
command: start web
Now when you start the local server, it will behave more like the live server.
Domain Management
Viewing Basic Information
This section describes how to view basic information about a protected domain name.
Editing Domain Information
This section describes how to edit information about a domain name.
Enabling WAF Protection
This section describes how to enable WAF protection.
Disabling WAF Protection
This section describes how to disable WAF protection.
Setting the Bypassed Mode
This section describes how to set the bypassed mode whereby requests are sent directly to the backend server without passing through WAF.
Deleting a Domain Name
This section describes how to delete a domain name.
Retrieve events from indexes
You have always been able to create new indexes and manage where you want to store your data. Additionally, when you have data split across different indexes, you can search multiple indexes at once, using the
index field.
Specify one or multiple indexes to search
The Splunk administrator can set the default indexes that a user searches. Based on the roles and permissions, the user might have access to one or many indexes. For example the user might be able to only search main or all public indexes. The user can then specify a subset of these indexes, either an individual index or multiple indexes, to search. For more information about setting up users and roles, see "About users and roles" in Securing Splunk Enterprise.
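For example, a search that begins with index=main OR index=os retrieves matching events from both the main and os indexes in a single search; the index names here are only illustrations.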
For more information about managing your indexes and setting up multiple indexes, see the "About managing indexes" in the Managing Indexers and Clusters manual.
Control index access using Splunk Web
1. Navigate to Manager > Access controls > Roles.
2. Select the role that the User has been assigned to.
- On the bottom of the next screen you'll find the index controls.
3.)
Not finding the events you're looking for?
When you add an input, the input gets added relative to the app you're in. Some apps write input data to their own specific index (for example, the Splunk App for Unix and Linux uses the 'os' index), so if you don't see the events you expect, make sure you are searching the right index.
Lstewart splunk, Splunker
Thank you very much for your document update.
I have one more question, please.
Could you teach me what is actually "the queue on the index processor"?
For example, what stage/queue on <> and <>?
Tkomatsubara and Parkyonsu
The choice of wording is unfortunate. By "drop events" we do not mean the events are removed. What we mean is that some events will not be considered for the search. I have corrected the wording above to this:
In cases where the search head can't keep up with the search peer, the queue on the index processor will cease to flag events for the search. However, the events will have a sequence number that you can use to tell when and how many events were omitted from search consideration.
Could you answer or update documents to the below questions from "Tkomatsubara splunk, Splunker"?
>In cases where the search head can't keep up with the search peer, the queue on the index processor will drop events.
Drop events? What does this mean? Which stage/queue are those events dropped?
Can you make it clear since this is very important.
>Indexed real-time searches
If this "Indexed real-time searches" used, can we avoid this data "drop" issue?
Parkyongsu
Thank you for the followup question. None of the queues mentioned in the wiki pages that you posted in your comment are related to the realtime queue mentioned in the documentation. I sent you an email with the details.
The SQL_DESC_CASE_SENSITIVE descriptor record field contains SQL_TRUE if the column or parameter is treated as case-sensitive for collations and comparisons or if it is a noncharacter column.
When extended statement information is not available, the value of the SQL_DESC_CASE_SENSITIVE descriptor field will always be SQL_TRUE for a character column, regardless of the column definition (“CASESPECIFIC” or “NOT CASESPECIFIC”).
When extended statement information is available, the value of the SQL_DESC_CASE_SENSITIVE descriptor field matches the column definition; for example, the value is SQL_TRUE if the column is defined as “CASESPECIFIC” and the value is SQL_FALSE if the column is defined as “NOT CASESPECIFIC”.
No Fault Divorce Orange County California
What is a no fault divorce? Well, in some states, in order to get a divorce, you must allege some wrongdoing on the part of the person you are trying to divorce. A common ground for divorce in a “fault” divorce state is adultery, and the person making the allegations must prove the grounds for…
Speed Of Accept (seconds) Report
This page describes how you can use the (Queues folder) Speed Of Accept (seconds) Report to understand how long interactions waited in queue before being accepted.
Understanding the Speed Of Accept (seconds) Report
This report provides summarized performance information about the delays that are associated with long-lasting interactions that were accepted or pulled from the specified queue, providing both percentages and numbers of interactions that were accepted or pulled by service time interval. This report is most useful for media types for which contact center responses are expected to be fast, such as voice and chat.
The report shows the number of interactions that were accepted within each of 10 time buckets, and the percentages of interactions that were accepted in these buckets relative to the total number of interactions that were accepted from the queue. The 10th bucket is defined by a report variable (Accepted Agent ST1 - ST10) that amalgamates the first through 10th service time intervals. The Accepted Agent STI variable amalgamates all service time intervals.
This report reflects distribution from the selected mediation DNs only. The report does not reflect:
- the customer’s overall wait time
- the durations that interactions spent queued at other unselected queue resources that the interactions may have passed through before being distributed from the mediation DN(s) provided in this report.
To get a better idea of what this report looks like, view sample output from the report:
HRCXISpdOfAccptSecondsReport.pdf
The following tables explain the prompts you can select when you generate the report, and the metrics that are represented in the report:
8.5.206.06
Genesys Mobile Services Release Notes
What's New
This release contains the following new features and enhancements:
- Support for Oracle Linux 7 operating system. See the Genesys Mobile Services page in the Genesys Supported Operating Environment Reference Guide for more detailed information and a list of all supported operating systems.
Resolved Issues
This release contains the following resolved issues:
Scheduling a callback at the top of hour slots no longer results in multiple ORS call executions. Previously, in releases from 8.5.203.02 to 8.5.206.05, scheduling a callback at the top of hour slots may have resulted in multiple ORS call executions and the same customer may have been called more than once for the same callback. (GMS-7265)
GMS installation no longer stops after displaying OS version 6 is not supported when installing on RHEL or CentOS. (PROD-11779)
Upgrade Notes
No special procedure is required to upgrade to release 8.5.206.06.
High Availability in Azure In-Role Cache
Important
Microsoft recommends all new developments use Azure Redis Cache. For current documentation and guidance on choosing an Azure Cache offering, see Which Azure Cache offering is right for me?.
Architecture.
Note
Note that even when high availability is disabled, the cache cluster attempts to preserve data during planned shutdowns, such as a reboot. In this scenario, the cache cluster attempts to transfer cached items to other servers before the shutdown. However, depending on the amount of data to transfer, this graceful shutdown is not guaranteed to complete. Also, unlike high availability, the data is not preserved during unexpected shutdowns.
Considerations.
Important
By definition, the use of high availability multiplies the amount of required memory for each cached item by two. Consider this memory impact during capacity planning tasks. For more information, see Capacity Planning Considerations for Azure In-Role Cache.
To Enable
Concepts
In-Role Cache Features in Azure Cache
Selecting an Upgrade Method
You can use various methods to upgrade SQL Server 2005 packages. For some of these methods, the upgrade is only temporary. For others, the upgrade is permanent. The following table describes each of these methods and whether the upgrade is temporary or permanent.
Understanding Package Upgrade Results.
Note
To identify which packages have the issues listed in this table, run Upgrade Advisor. For more information, see Using Upgrade Advisor to Prepare for Upgrades.
See Also
Concepts
Change History
For an ANSI application (that is, compiled without UNICODE defined), the Driver Manager on the UNIX OS will not convert:
- SQLColAttribute calls into SQLColAttributeW calls in ODBC Driver for Teradata
- Output parameters from UTF-8 back to the application code page
Because of this, the output parameters from SQLColAttribute are delivered back to the ANSI application in the internal character set used by the driver. If the internal character set is different from the application code page, the application receives data back from SQLColAttribute in a different character set from what was expected.
This is a problem if, for example, an ANSI application using ISO 8859-1 requests non-ASCII meta data (such as a column name with Danish characters) and the session character set is UTF-8. The application gets the column name back in UTF-8. In general, if an ANSI application uses a Unicode session character set, it gets data back from SQLColAttribute in UTF-8, regardless of the application code page.
To avoid this problem, use the old SQLColAttributes function (with an 's' at the end).
ACL: repeatedly failed to set monitor status
The ACL process attempted to set a monitor by sending a request to the HICOM. This request was rejected by HICOM a number of times because the HICOM replied that it was busy. The ACL process gives up after a number of attempts to set the monitor.
Check that ACL is running correctly on the HICOM. Check to see if any other ACL links to the HICOM are creating an excessive amount of traffic on the ACL link.
Yellow
Log, System Monitor, Alertable
Cloud Backup requires the calculation of checksums. This helps ensure that your backups are efficient and complete as fast as possible.
The speed counter is an average counter and will be most accurate when backup is in the middle of a folder with similar file types. Smaller files will report slower speed since they finish faster and larger files report faster speeds. There is a greater disparity between this for users with fast upload connections.
There are sites like Speedtest to check your upload capability. The upload amount reported will help you establish an idea of what your connection is able to do. There are many factors in a connection that affect speed and this is just one of many ways to learn about your connection.
Below is an example of how Cloud Backup looks when calculating checksums.
Once in the middle of a file transfer, you will see something like this:
Add a "get a temporary render texture array" command.
This creates a temporary render texture array with given parameters, and sets it up as a global shader property with nameID. Use Shader.PropertyToID to create the integer name.
Release the temporary render texture array using ReleaseTemporaryRT, passing the same nameID. Any temporary textures that were not explicitly released will be removed after the camera is done rendering, or after Graphics.ExecuteCommandBuffer is done.
After getting a temporary render texture array, you can set it as a render target or blit to and from it, as with any other render texture.
PyMOL4RNA
PyMOL Drawing
rna_tools.tools.pymol_drawing.pymol_drawing.draw_circle(x, y, z, r=8.0, cr=1.0, cg=0.4, cb=0.8, w=2.0)
Create a CGO circle
- PARAMS
- x, y, z
- X, Y and Z coordinates of the origin
- r
- Radius of the circle
- cr, cg, cb
- Color triplet, [r,g,b] where r,g,b are all [0.0,1.0].
- w
- Line width of the circle
- RETURNS
- the CGO object (it also loads it into PyMOL, too).
rna_tools.tools.pymol_drawing.pymol_drawing.draw_circle_selection(selName, r=None, cr=1.0, cg=0.4, cb=0.8, w=2.0)
circleSelection – draws a cgo circle around a given selection or object
- PARAMS
- selName
- Name of the thing to encircle.
- r
- Radius of circle. DEFAULT: This script automatically defines the radius for you. If you select one atom and the resultant circle is too small, then you can override the script’s calculation of r and specify your own.
- cr, cg, cb
- red, green and blue coloring, each a value in the range [0.0, 1.0]
- RETURNS
- The circle object.
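A hypothetical call sequence, typed at the PyMOL command line after the module has been loaded, using the signatures documented above (the coordinates, selection name, and colour values are arbitrary):

draw_circle(10.0, 20.0, 5.0, r=4.0, cr=1.0, cg=0.0, cb=0.0)
draw_circle_selection("mystructure and resi 10-20", cr=0.2, cg=0.6, cb=1.0)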
rna_tools.tools.pymol_drawing.pymol_drawing.draw_dist(54.729, 28.9375, 41.421, 55.342, 35.3605, 42.745)
rna_tools.tools.pymol_drawing.pymol_drawing.draw_vector(x1, y1, z1, x2, y2, z2)
Install PyMOL plugin to view the interactions with PyMOL:
run <path>rna-tools/tools/pymol_drawing/pymol_dists.py
and type:
draw_dists([[29, 41], [7, 66], [28, 42], [51, 63], [50, 64], [2, 71], [5, 68], [3, 70], [31, 39], [4, 69], [6, 67], [12, 23], [52, 62], [30, 40], [49, 65], [27, 43], [11, 24], [1, 72], [10, 25], [15, 48], [53, 61], [19, 56], [13, 22], [36, 37], [18, 19], [22, 46], [35, 73], [32, 38], [9, 13], [19, 20], [18, 20], [54, 60], [9, 23], [34, 35], [36, 38], [53, 54], [20, 56], [9, 12], [26, 44], [18, 55], [54, 61], [32, 36]])
Install
Open your ~/.pymolrc and set up the following variables as you need:
# rna-tools RNA_TOOLS="/Users/magnus/work-src/rna-tools" EXECUTABLE="/bin/zsh" # set up your shell, usually /bin/bash or /bin/zsh SOURCE="source ~/.zshrc" # set up the path to the file where you keep your shell variables CLARNA_RUN="/Users/magnus/work-src/clarna_play/clarna_run.py" # if you want to run clarna_run.py set up the path sys.path.append('/Users/magnus/work-src/rna-tools') run ~/work-src/rna-tools/rna_tools/tools/PyMOL4RNA/PyMOL4RNA.py run ~/work-src/rna-tools/rna_tools/tools/pymol_drawing/pymol_drawing.py run ~/work-src/rna-tools/rna_tools/tools/rna_filter/pymol_dists.py
The plugins have been tested with MacPyMOL version 1.7.4.5 Edu.
- Invite a BBM contact to join a group
- Add a member by scanning a barcode
- Display the group barcode on your smartphone
- Invite a member to become a BBM contact
- Delete a member from a group
- Leave a group
Jikes RVM includes provisions to run unit tests as well as functional and performance tests. It also includes a number of actual tests, both unit and functional ones.
Unit Tests
Jikes RVM makes writing simple unit tests easy. Simply give your JUnit 4 tests a name ending in
Test and place test sources under
rvm/test-src. The tests will be picked up automatically.
The tests are then run on the bootstrap VM, i.e. the JVM used to build Jikes RVM. You can also configure the build to run unit tests on the newly built Jikes RVM. Note that this may significantly increase the build times of slow configurations (e.g. prototype and prototype-opt).
If you are developing new unit tests, it may be helpful to run them on an existing Jikes RVM image. This can be done by using the Ant target
unit-tests-on-existing-image. The path for the image is determined by the usual properties of the Ant build.
Functional and Performance Tests
See External Test Resources for details or downloading prerequisites for the functional tests. The tests are executed using an Ant build file and produce results that conform to the definition below. The results are aggregated and processed to produce a high level report defining the status of Jikes RVM.
The testing framework was designed to support continuous and periodical execution of tests. A "test-run" occurs every time the testing framework is invoked. Every "test-run" will execute one or more "test-configuration"s. A "test-configuration" defines a particular build "configuration" (See Configuring the RVM for details) combined with a set of parameters that are passed to the RVM during the execution of the tests. i.e. a particular "test-configuration" may pass parameters such as
-X:aos:enable_recompilation=false -X:aos:initial_compiler=opt -X:irc:O1 to test the Level 1 Opt compiler optimizations.
Every "test-configuration" will execute one or more "group"s of tests. Every "group" is defined by a Ant build.xml file in a separate sub-directory of
$RVM_ROOT/testing/tests. Each "test" has a number of input parameters such as the classname to execute, the parameters to pass to the RVM or to the program. The "test" records a number of values such as execution time, exit code, result, standard output etc. and may also record a number of statistics if it is a performance test.
The project includes several different types of test runs, and the description of each test run and its purpose is given in Test Run Descriptions.
Ant Properties
There is a number of ant properties that control the test process. Besides the properties that are already defined in Building the RVM the following properties may also be specified.
Defining a test-run
A test-run is defined by a number of properties located in a property file located in the build/test-runs/ directory.
The property test.configs is a whitespace separated list of test-configuration "tags". Every tag uniquely identifies a particular test-configuration. Every test-configuration is defined by a number of properties in the property file that are prefixed with test.config.<tag>. and the following table defines the possible properties.
The simplest test-run is defined in the following figure. It will use the build configuration "prototype" and execute tests in the "basic" group.
test.configs=prototype test.config.prototype.tests=basic
The test process also expands properties in the property file so it is possible to define a set of tests once but use them in multiple test-configurations as occurs in the following figure. The groups basic, optests and dacapo are executed in both the prototype and prototype-opt test\configurations.
test.set=basic optests dacapo test.configs=prototype prototype-opt test.config.prototype.tests=${test.set} test.config.prototype-opt.tests=${test.set}
Test Specific Parameters
Each test can have additional parameters specified that will be used by the test infrastructure when starting the Jikes RVM instance to execute the test. These additional parameters are described in the following table.
To determine the value of a test-specific parameter, the following mechanism is used:
- Search for one of the following ant properties, in order.
- test.config.<build-configuration>.<group>.<test>.<parameter>
- test.config.<build-configuration>.<group>.<parameter>
- test.config.<build-configuration>.<parameter>
- If none of the above properties are defined then use the parameter that was passed to the <rvm> macro in the ant build file.
- If no parameter was passed to the <rvm> macro then use the default value which is stored in the "Default Property" as specified in the above table. By default the value of the "Default Property" is specified as the "Default Value" in the above table, however a particular build file may specify a different "Default Value".
Excluding tests
Sometimes it is desirable to exclude tests. The test exclusion may occur as the test is known to fail on a particular target platform, build configuration or maybe it just takes too long. To exclude a test, you must define the test specific parameter "exclude" to true either in .ant.properties or in the test-run properties file.
i.e. At the time of writing the Jikes RVM does not fully support volatile fields and as a result the test named "TestVolatile" in the "basic" group will always fail. Rather than being notified of this failure we can disable the test by adding a property such as "test.config.basic.TestVolatile.exclude=true" into the test-run properties file.
Executing a test-run
The tests are executed by the Ant driver script test.xml. The test-run.name property defines the particular test-run to execute and if not set defaults to "sanity". The command
ant -f test.xml -Dtest-run.name=simple executes the test-run defined in build/test-runs/simple.properties. When this command completes you can point your browser at
${results.dir}/tests/${test-run.name}/Report.html to get an overview of the test run or at ${results.dir}/tests/${test-run.name}/Report.xml for an XML document describing the test results.
Action and Command Execution
You can send commands to manipulate things from mobile apps. The things return the result of the command execution and error information. This information will be notified to the mobile apps via the push notification.
The next figure illustrates the basic flow. See the related topics in the development guides for Android, iOS, JavaScript, REST API, and Thing for more details on the implementation.
If you are using a gateway, you can send commands to end nodes. You cannot send commands to the gateway.
1. Sending Command from mobile app
The mobile app, through its UI, gets the command for manipulating the target thing. The command is sent to Thing Interaction Framework. Here, the command can have some parameters like "turnPower:true".
2. Receiving and Executing Commands by Thing
Thing Interaction Framework uses the MQTT push notification to send the command to the target thing.
When the thing receives the push notification, the SDK automatically calls the action handler. By writing the program for controlling the thing hardware in the action handler, you can manipulate the thing based on the given parameters.
3. Sending Command Result from Thing
The thing will report the command result to Thing Interaction Framework. The command result summarizes if the command execution was a success or not.
4. Notifying mobile app by Push Notification
Thing Interaction Framework sends a push notification to tell the mobile app that the command was executed. At this point, only the command ID will be notified.
The push notification can be sent via the available push notification networks, depending on the mobile app platform.
If the owner is a pseudo user, the command ID notification is sent only to the device that has sent the command. No push notification will be sent to other devices that share the thing. If the owner is a normal user, the command ID notification is sent to all the devices associated with the owner by "device installation" (Android, iOS, REST).
5. Getting Command Result
The push notification sent to the mobile app only contains the command ID. The mobile app can use this command ID to get the command and its result.
By getting the command result, the user will be able to know if the command execution was a success and if there was any error.
Command and Actions
Command Structure
As illustrated in the figure below, a command is composed of an array of one or more actions. The order in the array is significant; the thing that receives the command will execute the actions in the order they are listed in the array.
Each action can have a parameter in the JSON format. You can use any JSON format. You can, for example, use the JSONObject and JSONArray to define multiple fields (e.g., the RGB value of the LED light).
The following figure shows an example of the data sent to Thing Interaction Framework when we send the command "Turn the power of the air conditioner on, set the preset temperature to 25 degrees, and set the fan speed to 5".
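As a concrete illustration of this structure, the air-conditioner command could be assembled as the Python dictionary below before handing it to the Thing-IF SDK or REST API. The action names and parameter fields are hypothetical; the real names come from the schema you define for your thing.

# Sketch of the command described above: three actions, executed in array order.
command = {
    "actions": [
        {"turnPower": {"power": True}},                        # 1. power on
        {"setPresetTemperature": {"presetTemperature": 25}},   # 2. preset 25 degrees
        {"setFanSpeed": {"fanSpeed": 5}},                      # 3. fan speed 5
    ]
}
# The thing runs the actions in this order and returns one action result per
# action; together these make up the command result.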
A command is composed of multiple actions to accommodate various situations. A set of necessary actions depends on the situations. When you want to turn on the air conditioner, for example, you will need a set of three actions ("Power on/off", "Set the Preset Temperature", and "Set Fan Speed"). When you want to turn off the air conditioner, however, you will need only one action ("Power on/off"). When designing actions, you will need to define not only conceptual sets of actions but also sets of actions which are appropriate for situations where those are sent to things as a command.
Also, make sure to design actions based on the capability of the target thing. On the thing, the callback function (action handler) will be called per action when the command is received. If the restriction of the thing hardware forces you to handle parameters for multiple operations in one shot, you might want to put these operations into one action to make the implementation easy.
You will define the actions you've designed as the thing schema.
Action Result
Each action has the action result. An action result is composed of the success/failure status and the error message.
In the flow diagram at the top of this page, we've shown the command result as the independent data. Actually, there is no class for the command result in the SDK. The following figure illustrates the actual data structure. As you can see in the figure, the command will have an array of the commands and their command results.
You cannot put any parameters on the command result. Please use the State if you want to pass some values from the thing to the mobile apps.
Getting the Command
When a command is executed by the request from a mobile app or is fired by the Trigger, the command is first registered on Thing Interaction Framework.
A command ID is assigned to the registered command. The command ID becomes a key for getting the latest command status. In particular, a mobile app can use the command ID to get the command result after the thing returns the execution result.
There are two ways of getting the command:
- Get a command with its command ID.
- Get all registered commands.
Please read the Thing-IF SDK development guides for the details.
Command Details
Command details can be registered when a command is sent to Thing Interaction Framework. Such registered details can be fetched by the feature of getting command.
Thing Interaction Framework does not interpret command details. You can register any information such as strings or icons to be displayed on the user interface of your mobile app.
You can register these items as command details. All the fields are optional. These details will be associated with the command and saved as shown in the below figure.
title
Command title. You can register any string of up to 50 characters.
description
Command description. You can register any string of up to 200 characters.
metadata
Command metadata. You can specify data which will be used by the application in any JSON format.
| https://docs.kii.com/en/functions/thingifsdk/thingifsdk/non_trait/model/actions_commands/ | 2021-02-24T22:41:23 | CC-MAIN-2021-10 | 1614178349708.2 | [array(['01.png', None], dtype=object)
array(['02.png', None], dtype=object)
array(['03.png', None], dtype=object)
array(['04.png', None], dtype=object)] | docs.kii.com |
Activity
Overview
Activity is an API which records a user’s activity history. The recorded activity is displayed in time series on the user’s friends’ My Page.
Activity Object Fields
The following is a list of Activity object fields which are Mobage open platform-compliant.
MediaItem Object Fields
API Request
HTTP Method
Proxy Model
Trusted Model
Endpoint URL
- Sandbox Environment{guid}/{selector}/{appid}
- Production Environment{guid}/{selector}/{appid}
URI Template Parameters
guid
Only "@me" may be designated in the guid parameter.
selector
Only "@self" may be designated in the selector parameter.
appid
Only "@app" may be designated in the appid parameter.
Query Parameters
None.
OAuth Signed Request
This function allows secure transfers of data using OAuth Request Body Hash.
A xoauth_requestor_id must be included as either an authorized header or a query parameter.
The following values are designated by mode:
Proxy Mode
guid of viewer (value of opensocial_viewer_id sent from the Gadget Server)
API Responses
API response codes are selected from the following list:
Notes
- HTML tags may not be included in title or body segments.
- Do not allow users to freely enter title or body inputs.
- Limit the title to a maximum of 42 single-byte or 21 double-byte characters.
- Timestamp and the short game name will be in the header of the actual title display.
- Limit the body to a maximum of 280 single-byte or 140 double-byte characters.
- Limit Activity recording for a given user to one per minute. The API will return a 503 Service Unavailable message if requests exceeding the limit are sent.
- Sending Activity using the Activity API has a priority of LOW.
- At LOW priority, the Activity will be displayed only for the subgroup of friends who have the application installed.
- At LOW priority, the Activity will not be displayed at the top of the user’s My Page, but only in the "See More" section of the "Friends’ Game Status".
- If you want to send Activity at HIGH priority, use the Activity Service and not the Activity API.
- When specifying multiple mediaItems in series, only the first image in the series will be displayed on mobile devices.
- The Activity API will return a 403 response code if the user does not have the application in question installed.
XML Schema Part 2: Datatypes Second Edition
URI Template draft-gregorio-uritemplate-03
OAuth Request Body Hash 1.0 Draft 4
Revision History
- 03/01/2013
- Initial release. | https://docs.mobage.com/display/JPSPBP/Activity | 2021-02-25T00:01:10 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.mobage.com |
From Step 2 (Review) of the Decommission Site page, you can review the storage usage for each Storage Node at the site. Then, you must update ILM rules that refer to the site in an Erasure Coding profile or a storage pool.
If either of these conditions exist, you must update ILM rule and policy settings as described in the following steps. | https://docs.netapp.com/sgws-114/topic/com.netapp.doc.sg-maint/GUID-1524BC8C-043C-4CE8-B9C7-F72831DE7019.html | 2021-02-25T00:36:30 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.netapp.com |
15.1.1.3.39. TimeBasedFilterQosPolicy¶
- class
eprosima::fastdds::dds
::
TimeBasedFilterQosPolicy: public eprosima::fastdds::dds::Parameter_t, public eprosima::fastdds::dds::QosPolicy¶
Filter that allows a DataReader to specify that it is interested only in (potentially) a subset of the values of the data. The filter states that the DataReader does not want to receive more than one value each minimum_separation, regardless of how fast the changes occur. It is inconsistent for a DataReader to have a minimum_separation longer than its Deadline period.
- Warning
This QosPolicy can be defined and is transmitted to the rest of the network but is not implemented in this version.
- Note
Mutable Qos Policy
Public Functions
Public Members
- fastrtps::Duration_t
minimum_separation¶
Minimum interval between samples. By default, c_TimeZero (the DataReader is interested in all values) | https://fast-dds.docs.eprosima.com/en/latest/fastdds/api_reference/dds_pim/core/policy/timebasedfilterqospolicy.html | 2021-02-24T22:52:11 | CC-MAIN-2021-10 | 1614178349708.2 | [] | fast-dds.docs.eprosima.com |
{"title":"Release note 08.22.16","slug":"release-note-082216","body":"This week, we have a few exciting features to announce! Read on for more details.\n\n##Set metadata for multiple files upon upload\nAdding metadata to files just got easier. You can now import tabular metadata formats to the CGC and extract and assign metadata based on your manifest file. Learn more about [this feature](doc:set-metadata-using-the-command-line-uploader#section-set-metadata-for-multiple-files-using-a-manifest-file) on our Knowledge Center.\n\n<div align=\"right\"><a href=\"#top\">top</a></div>\n\n##Improved access to CCLE data on the CGC\nWe recently made data from [Cancer Cell Line Encyclopedia (CCLE)]() available on the CGC via the [CCLE public project](doc:ccle). CCLE contains 1,285 open access BAM files produced by RNA-Sequencing, Whole Genome Sequencing, and Whole Exome Sequencing analyses on cancer cell lines. Now, you can browse CCLE and TCGA datasets through the [Data Overview](doc:data-overview). We’ve also enabled searching for and accessing specific files in CCLE and TCGA via the [Data Browser](doc:the-data-browser). Then, you can query that dataset by metadata, search by ID, and save or access queries. For instance, by choosing the CCLE dataset, you can now copy BAM files, try our workflows, and perform analyses right away.676eaa8f760e0045890a","changelog":[],"createdAt":"2016-08-22T20:58:22.525Z","project":"55faf11ba62ba1170021a9a7","user":{"name":"Emile Young","username":"","_id":"5613e4f8fdd08f2b00437620"},"__v":0,"metadata":{"title":"","description":"","image":[]}} | https://docs.cancergenomicscloud.org/blog/release-note-082216 | 2021-02-24T23:41:25 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.cancergenomicscloud.org |
This bot does not play music. Eventcord manages the queue for karaoke and can automatically mute and unmute users to ensure a clear voice channel while someone is preforming. However, if you do want to play music during karaoke, Eventcord can work in conjunction with other bots, such as Rythm, to play music. Eventcord will not mute bots in the voice channel.
See troubleshooting for more detail.
You can set your own event message by typing your message after the start command. Here is an example:
;start Join to have some fun!
Your event message can be up to 280 characters.
Visit eventcord.xyz/invite to invite Eventcord to your server!
You can find a list of commands here:
Our Discord server provides quick, reliable support. Join the server here. Just join, ask a question, wait for a response.
Click the link below to see the available languages:
To change your guild or user language, see configuration.
What do these mean and why does Eventcord want them? You can find out below!
In the last step of the setup command, the bot will check for the permissions listed above (see assisted setup)
Have questions about these permissions? Ask in our Discord server by visiting go.eventcord.xyz/discord | https://docs.eventcord.xyz/faq | 2021-02-25T01:29:14 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.eventcord.xyz |
IAudioSessionEvents::OnSessionDisconnected method (audiopolicy.h)
The OnSessionDisconnected method notifies the client that the audio session has been disconnected.
Syntax
HRESULT OnSessionDisconnected( AudioSessionDisconnectReason DisconnectReason );
Parameters
DisconnectReason
The reason that the audio session was disconnected. The caller sets this parameter to one of the AudioSessionDisconnectReason enumeration values shown in the following table.
For more information about WTS sessions, see the Windows SDK documentation.
Return value
If the method succeeds, it returns S_OK. If it fails, it returns an error code.
Remarks
When disconnecting a session, the session manager closes the streams that belong to that session and invalidates all outstanding requests for services on those streams. The client should respond to a disconnection by releasing all of its references to the IAudioClient interface for a closed stream and releasing all references to the service interfaces that it obtained previously through calls to the IAudioClient::GetService method.
Following disconnection, many of the methods in the WASAPI interfaces that are tied to closed streams in the disconnected session return error code AUDCLNT_E_DEVICE_INVALIDATED (for example, see IAudioClient::GetCurrentPadding). For information about recovering from this error, see Recovering from an Invalid-Device Error.
If the Windows audio service terminates unexpectedly, it does not have an opportunity to notify clients that it is shutting down. In that case, clients learn that the service has stopped when they call a method such as IAudioClient::GetCurrentPadding that discovers that the service is no longer running and fails with error code AUDCLNT_E_SERVICE_NOT_RUNNING.
A client cannot generate a session-disconnected event. The system is always the source of this type of event. Thus, unlike some other IAudioSessionEvents methods, this method does not have a context parameter.
For a code example that implements the methods in the IAudioSessionEvents interface, see Audio Session Events.
Requirements
See also
IAudioSessionEvents Interface | https://docs.microsoft.com/en-us/windows/win32/api/audiopolicy/nf-audiopolicy-iaudiosessionevents-onsessiondisconnected | 2021-02-24T23:39:04 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.microsoft.com |
OnCommand Workflow Automation (WFA) is a software solution that helps to automate storage management tasks, such as provisioning, migration, decommissioning, data protection configurations, and cloning storage. You can use WFA to build workflows to complete tasks that are specified by your processes. WFA supports ONTAP.
A workflow is a repetitive and procedural task that consists of sequential steps, including the following types of tasks:
Storage architects can define workflows to follow best practices and meet organizational requirements, such as the following:.
No license is required for using the OnCommand Workflow Automation server. | https://docs.netapp.com/wfa-42/topic/com.netapp.doc.onc-wfa-isg-rhel/GUID-15ADE53A-FA5C-4EAE-9432-D83841434BFF.html | 2021-02-25T00:35:12 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.netapp.com |
Copyright (c) 2017 Red Hat, Inc. This work is licensed under a Creative Commons Attribution 3.0 Unported License.
Nodepool Drivers¶
Storyboard:
Support multiple provider drivers in Nodepool, including static hosts.
Problem Description¶
As part of the Zuul v3 effort, it was envisioned that Nodepool would be expanded to support supplying static nodes in addition to the current support for OpenStack instances, as well as potentially nodes from other sources. The work to move the data structures and queue processing to ZooKeeper was expected to facilitate this. This specification relates both efforts, envisioning supplying static nodes as an implementation of the first alternative driver for Nodepool.
Proposed Change¶
There are many internal classes which will need to be changed to accomodate the additional level of abstraction necessary to support multiple drivers. This specification is intentionally vague as to exactly which should change, but instead lays out a high-level overview of what should be shared and where drivers should diverge in order to help guide implementation.
Nodepool’s current internal architecture is well suited to an extension to support multiple provider drivers. Because the queue processing, communication, and data storage all occur via ZooKeeper, it’s possible to create a component which fulfills Nodepool requests that is completely external to the current Nodepool codebase. That may prove useful in the future in the case of more esoteric systems. However, it would be useful for a wide range of Nodepool users to have built in support for not only OpenStack, but other cloud systems as well as static nodes. The following describes a method of extending the internal processing structure of Nodepool to share as much code between multiple drivers as possible (to reduce the maintenance cost of multiple drivers as well as the operational cost for users). Operators may choose to run multiple providers in a single process for ease of deployment, or they can split providers across multiple processes or hosts as needed for scaling or locality needs.
The nodepool-launcher daemon is internally structured as a number of
threads, each dedicated to a particular task. The main thread,
implemented by the
NodePool class starts a
PoolWorker for each
provider-pool entry in the config file. That
PoolWorker is
responsible for accepting and fulfilling requests, though the
specifics of actually fulfilling those requests are handled by other
classes such as
NodeRequestHandler.
We should extend the concept of a
provider in Nodepool to also
include a driver. Every provider should have a driver and also a
pools section, but the rest of the provider configuration (clouds,
images, etc.) should be specific to a given driver. Nodepool should
start an instance of the
PoolWorker class for every provider-pool
combination in every driver. However, the OpenStack-specific
behavior currently encoded in the classes utilized by
PoolWorker
should be abstracted so that a
PoolWorker can be given a different
driver as an argument and use that driver to supply nodes.
When nodes are returned to nodepool (their locks having been
released), the
CleanupWorker currently deletes those nodes. It
similarly should be extended to recognize the driver which supplied
the node, and perform an appropriate action on return (in the case of
a static driver, the appropriate action may be to do nothing other
than reset the node state to
ready).
The configuration syntax will need some minor changes:
providers: - name: openstack-public-cloud driver: openstack cloud: some-cloud-name diskimages: - name: fedora25 pools: - name: main max-servers: 10 labels: - name: fedora25-small min-ram: 1024 diskimage: fedora25 - name: static driver: static pools: - name: main nodes: - name: static01.example.com host-key: <SSH host key> labels: static - name: static02.example.com host-key: <SSH host key> labels: static
Alternatives¶
We could require that any further drivers be implemented as separate processes, however, due to the careful attention paid to the Zookeeper and Nodepool protocol interactions when implementing the current fulfillment algorithm, prudence suggests that we at least provide some level of shared implementation code to avoid rewriting the otherwise boilerplate node request algorithm handling. As long as we’re doing that, it is only a small stretch to also facilitate multiple drivers within a single Nodepool launcher process so that running Nodepool does not become unecessarily complicated for an operator who wants to use a cloud and a handful of static servers.
Implementation¶
Gerrit Branch¶
Nodepool and Zuul are both branched for development related to this spec. The “master” branches will continue to receive patches related to maintaining the current versions, and the “feature/zuulv3” branches will receive patches related to this spec. The .gitreview files have been be updated to submit to the correct branches by default.
Work Items¶
Abstract Nodepool request handling code to support multiple drivers
Abstract Nodepool provider management code to support multiple drivers
Collect current request handling implementation in an OpenStack driver
Extend Nodepool configuration syntax to support multiple drivers
Implement a static driver for Nodepool
Security¶
There is no access control to restrict under what conditions static nodes can be requested. It is unlikely that Nodepool is the right place for that kind of restriction, so Zuul may need to be updated to allow such specifications before it is safe to add sensitive static hosts to Nodepool. However, for the common case of supplying specific real hardware in a known test environment, no access control is required, so the feature is useful without it.
Dependencies¶
This is related to the ongoing Zuul v3 work and builds on the completed Zookeeper Workers work in Nodepool. | https://docs.opendev.org/opendev/infra-specs/latest/specs/nodepool-drivers.html | 2021-02-25T00:18:32 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.opendev.org |
Configuring File Searches on Target Systems
If InsightVM gains access to an asset’s file system by performing an exploit or a credentialed scan, it can search for the names of files in that system.
File name searching is useful for finding software programs that are not detected by fingerprinting. It also is a good way to verify compliance with policies in corporate environments that don't permit storage of certain types of files on workstation drives:
- Confidential information, such as patient file data in the case of HIPAA compliance
- Unauthorized software
The Security Console can read the metadata of these files, but it does not read nor retrieve file contents. You can view the names of scanned file names in the "File and Directory Listing" pane of a scan results page.
NOTE
File searching operations can take a considerable amount of time to complete. For this reason, we do not recommend file searching during routine network scanning. Instead, file searching is best implemented with ad hoc or manual scans.
How to Configure File Searching
You can configure file searching capabilities on new or existing custom scan templates.
To configure file searching on your scan template:
- In your Security Console, open the left menu and click the Administration tab.
- In the “Global and Console Settings” section, click manage next to the “Templates” label.
- Browse to the template that you want to configure.
- If the custom scan template that you want to configure already exists, browse to it and click the edit icon.
- If you want to create a new custom template from a built-in version, click the copy icon next to the desired template.
- On the “Scan Template Configuration” screen, click the File Searching tab.
- Click Add File Search. The “File Search Editor” window displays.
- Name your file search configuration. This name will identify the configuration in the “File Search Listing” table going forward.
- Select a pattern type. The following options are available:
- DOS wildcard pattern
- GNU regular expression
How are these patterns different?
The most important distinction between these two pattern types is how they evaluate character wildcards.
The DOS pattern supports two wildcards that function as stand-ins for allowable characters. These wildcards are as follows:
*- matches any combination of allowable characters
?- matches any single allowable character
Unlike the DOS variant, the GNU regular expression pattern only supports a single character wildcard, but it also includes several operators that change how that wildcard is matched. This character wildcard is as follows:
.- matches any single character
You can modify how this character behaves by appending one of the following operators to the wildcard (or to any other literals that you want to match):
*- matches zero or more of the preceding character. For example,
.*would match zero or more of any allowable character.
+- matches one or more of the preceding character. While this is similar to the
.*example,
.+would only match if at least one character was present.
?- matches the preceding character a single time, if it exists. This functionally makes the preceding character optional. For example,
.?would match any character a single time, but would not break the expression if it didn’t appear.
- Enter your search string in the provided field. Make sure that you observe the syntax requirements of your selected pattern type.
- Click Save when finished.
Your file search configuration should now display in the “File Search Listing” table. Click Save in the upper right corner of the “Scan Template Configuration” screen to apply your configuration for use with your next scan. | https://docs.rapid7.com/insightvm/configuring-file-searches-on-target-systems/ | 2021-02-24T23:21:39 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.rapid7.com |
.
See also: AWS API Documentation
See 'aws help' for descriptions of global parameters.
put-logging-options --logging-options <value> [--cli-input-json <value>] [--generate-cli-skeleton <value>]
--logging-options (structure)
The new values).
Shorthand Syntax:
roleArn=string,level=string,enabled=boolean,detectorDebugOptions=[{detectorModelName=string,keyValue=string},{detectorModelName=string,keyValue=string}]
JSON Syntax:
{ "roleArn": "string", "level": "ERROR"|"INFO"|"DEBUG", "enabled": true|false, "detectorDebugOptions": [ { "detectorModelName": "string", "key logging options
The following put-logging-options example sets or updates the AWS IoT Events logging options. If you update the value of any loggingOptions` field, it can take up to one minute for the change to take effect. Also, if you change the policy attached to the role you specified in the ``roleArn field (for example, to correct an invalid policy) it can take up to five minutes for that change to take effect.
aws iotevents put-logging-options \ --cli-input-json
Contents of logging-options.json:
{ "loggingOptions": { "roleArn": "arn:aws:iam::123456789012:role/IoTEventsRole", "level": "DEBUG", "enabled": true, "detectorDebugOptions": [ { "detectorModelName": "motorDetectorModel", "keyValue": "Fulton-A32" } ] } }
This command produces no output.
For more information, see PutLoggingOptions in the AWS IoT Events API Reference. | https://docs.aws.amazon.com/cli/latest/reference/iotevents/put-logging-options.html | 2021-02-25T00:36:22 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.aws.amazon.com |
This guide reflects the old console for Amazon SES. For information about the new console for Amazon SES, see the new Amazon Simple Email Service Developer Guide.
Obtaining your..
Requirement. Download or copy these credentials and store them in a safe place, as you cannot view or save your credentials after you dismiss rotate your SMTP credentials, complete the procedure above to generate a new set of SMTP credentials. Then, test the new credentials to ensure that they work as expected. Finally, delete the IAM user associated with the old SMTP credentials in the IAM console. For more information about deleting users in IAM, see Managing users in the IAM User Guide.
Obtaining Amazon SES SMTP credentials by converting existing AWS credentials
If you have an IAM user that you set up using the IAM interface, you can derive the user's Amazon SES SMTP credentials from their AWS credentials. user name SMTP_REGIONS = [ 'us-east-2', # US East (Ohio) 'us-east-1', # US East (N. Virginia) 'us-west-2', # US West (Oregon) ', #. | https://docs.aws.amazon.com/ses/latest/DeveloperGuide/smtp-credentials.html?icmpid=docs_ses_console | 2021-02-24T23:25:44 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.aws.amazon.com |
Exception Handling
Some APIs in Kii Cloud SDK throw exceptions. In this section, we explain how you can handle the exceptions and how you can investigate the cause of the failure.
- Sample code in the guides
- Types of exceptions
- Error details
- Common API errors
- Tools for investigating errors
Sample code in the guides
In the guides, we provide sample code to illustrate how you can leverage our SDKs. In the sample code, the exception handling is usually simplified; the message output and error recovery are omitted.
For example, we present the following sample code when we explain how to register new users:
Blocking API
When an error occurs in a blocking API call, an exception will be thrown. You can handle the exception with a try/catch block just like any other Java programs. The sample code presented in the guides basically handle only abstract-level exceptions.
try { user.register(password); } catch (IOException e) { // Handle the error. return; } catch (AppException e) { // Handle the error. return; }
Non-Blocking API
When an error occurs in a non-blocking API call, an exception object will be passed as a parameter of a callback method. If there is no error, this exception object will be null.
user.register(new KiiUserCallBack() { @Override public void onRegisterCompleted(int token, KiiUser user, Exception exception) { if (exception != null) { // Handle the error. return; } } }, password);
Types of exceptions
In general, your application needs to handle the following two types of exceptions (unless the sample code says otherwise).
java.lang.IOException
IOException occurs when you call an API that accesses the network, but there are some issues in the networking with the server. Most of the time, this error is due to the unstable or misconfigured network.
AppException is the root exception in the exception class hierarchy (there is CloudException that is a superclass of AppException, but this is merely for the future extension). Usually, you will just need to catch AppException, as presented in the sample code.
When an API execution causes an error, Kii Android SDK will basically throw an exception object that is a subclass of AppException. If the API internally executes the REST API, it will throw the subclass exception that corresponds to the HTTP status returned by Kii Cloud as shown in the next table (if the HTTP status not listed in the table is returned, UndefinedException will be thrown).
These exceptions have the following hierarchy, so you can handle all of them by catching IOException and AppException in a try/catch block. For the complete list of all exceptions thrown by the SDK and the detailed explanation of the exceptions thrown by each API, please read the Javadoc.
You need to validate the string (i.e. its length and the characters in the string) before you pass it to the API as a parameter. The client SDK will check the parameter and throw RuntimeException (e.g. java.lang.IllegalArgumentException) if the parameter is invalid. If you are using a blocking API, this will cause your application to crash. Please note that some non-blocking API also throws RuntimeException when the parameter is invalid. There is BadRequestException defined as a subclass of AppException, but this will not cover the parameter error cases that are handled by IllegalArgumentException.
If you want to handle exceptions more precisely, you can check and handle individual exceptions as shown in the following sample code:
Blocking API
try { user.register(password); } catch (ConflictException e) { // Registration failed because the user already exists. return; } catch (AppException e) { // Registration failed because of another exception in the app layer. return; } catch (IOException e) { // Registration failed because of an exception in the network layer. return; }
Non-Blocking API
user.register(new KiiUserCallBack() { @Override public void onRegisterCompleted(int token, KiiUser user, Exception exception) { if (exception instanceof ConflictException) { // Registration failed because the user already exists. return; } else if (exception instanceof AppException) { // Registration failed because of another exception in the app layer. return; } else if (exception instanceof IOException) { // Registration failed because of an exception in the network layer. return; } else if (exception != null) { // Registration failed because of one of the other exceptions. return; } } }, password);
It is difficult to cover all exceptions and error cases thrown by each API. We recommend you to catch only specific cases that you need to handle explicitly (e.g. user duplication and network error) and handle other cases as a general error. Please note that exceptions and error codes can be added and modified as we update the SDK and server.
Error details
Sometimes you can get the details of the exception from AppException and its subclasses. The client SDK connects to Kii Cloud server via REST API. When the server returns an error, you can get the information returned by the server using the methods in the exception. The information is useful for determining the root cause of the error during your development.
For example, suppose that you try to register a user
user_123456 but this user already exists in Kii Cloud. Executing the following code will cause a user duplication error. The sample code shows the error details you can get in this situation.
try { KiiUser user = KiiUser.builderWithName("user_123456").build(); user.register(password); } catch (ConflictException e) { // Handle the error. // Print the cause of the error. System.out.println("-- getReason"); System.out.println(e.getReason()); // Print the error message. System.out.println("-- getMessage"); System.out.println(e.getMessage()); return; } catch (AppException e) { // Handle the error. return; } catch (IOException e) { // Handle the error. return; }
-- getReason USER_ALREADY_EXISTS -- getMessage Error: null HTTP Response Status: 409 HTTP Response Body: { "errorCode" : "USER_ALREADY_EXISTS", "message" : "User with loginName user_123456 already exists", "value" : "user_123456", "field" : "loginName" }
This example shows the result of the user duplication. The error information obtainable differs by the situation. In your code, please make sure to handle the case when null is returned. For example, all REST API related details will be null if the error is raised solely by the client SDK inside).
getReasonmethod
This method returns the enum that explains the reason of failure. The enum is available when the REST API is called inside the client SDK. The method is available on the following exception classes: BadRequestException, NotFoundException, and ConflictException.
getMessagemethod
This method returns the error details for debugging purposes. The return message includes the result of the
getBodymethod, but it does not include the stack trace. If you just want to get the stack trace, you can use the
printStackTracemethod.
If you encounter an unknown error while developing, you might get the error details by reading the
messagefield of the HTTP Response BODY. In the above example, you can see that the specified user already exists in Kii Cloud.
There are more methods for getting the error details, but basically you should be able to investigate the problem with the exception type and the result of the
getReason method.
When you are going to ask for some help, we encourage you to provide this information as the reference.
For more details on errors, please check AppException class reference, including its superclass and subclasses.
If you want to control your Android code based on the error reason, we recommend coding your application as follows:
Check the exception type. For example, you can learn that the user duplication occurs when
ConflictExceptionis thrown.
If you cannot determine the error reason by the exception type (e.g. the same exception will be thrown in multiple situations), check the details with the
getReasonmethod and take the appropriate action.
Common API errors
There are common API errors for cases including when refresh of an access token fails and when the server receives too many requests.
Failed refresh of access tokens
If you are going to leverage the refresh token feature, we encourage you to design your application so as to handle the
RefreshTokenFailedException case.
As explained in this section, you need to execute relogin with a username and password when you get the
RefreshTokenFailedException exception; therefore, your application needs to show the initial screen or the login screen so as to ask for user relogin. This check needs to be made for all API calls that access the network, so it is essential to design how to handle such a situation beforehand.
Too many requests
The API returns error if there are accesses that greatly exceed the ordinary load to the server within a certain period of time for an application. This limit is set per application under an agreement with Kii.
The limit is high enough for ordinary changes in the operational load. However, if active users simultaneously send a request on the same time or event, it could cause an API error.
If the number of API calls exceeds the limit, each API returns error with the
UndefinedException class that is a subclass of the
AppException class. The
getStatus() method of the
UndefinedException instance returns error code 429. Detailed information is not available because the
UndefinedException class does not have the
getReason() method.
Usually, a mobile app processes this error as an unexpected error on the server. To avoid congestion, do not implement a retry process for executing the API.
Tools for investigating errors
There are a couple of tools available to help you investigate the error root cause.
Developer log
The error details are recorded in the developer log.
See Inspect Developer Log to learn how you can inspect the error log to spot the root cause while developing and troubleshooting.
The following log is a sample of logs that are recorded when the user duplication occurs:
2015-04-21T11:45:11.783+09:00 [ERROR] user.register description:User registration failed userID: login-name:user_123456 email-address:[email protected] phone-number:+819012345678 exception:User with loginName user_123456 already exists
Data browser
You can verify if the data updated in your application are properly set in Kii Cloud.
See Checking and Updating Data, Users, and Groups for more details. | https://docs.kii.com/en/guides/cloudsdk/android/guidelines/exception/ | 2021-02-24T23:27:37 | CC-MAIN-2021-10 | 1614178349708.2 | [array(['01.png', None], dtype=object)] | docs.kii.com |
Creating Panels
T-SBFND-005-007
When you create a new panel, it is added after the current panel.
- In the Thumbnails view, select the panel to which you want to add a.
-_12<<
-.
| https://docs.toonboom.com/help/storyboard-pro-5/storyboard/structure/create-panel.html | 2021-02-24T23:17:55 | CC-MAIN-2021-10 | 1614178349708.2 | [array(['../../Resources/Images/_ICONS/Home_Icon.png', None], dtype=object)
array(['../../Resources/Images/HAR/_Skins/Activation.png', None],
dtype=object)
array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../../Resources/Images/_ICONS/download.png', None], dtype=object)
array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../../Resources/Images/SBP/Steps/SBP2_15_CreatePanel_01.png',
None], dtype=object)
array(['../../Resources/Images/SBP/Steps/SBP2_15_CreatePanel_02.png',
None], dtype=object)
array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../../Resources/Images/SBP/Steps/SBP2_18_CreatePanelBefore_01.png',
None], dtype=object)
array(['../../Resources/Images/SBP/Steps/SBP2_18_CreatePanelBefore_02.png',
None], dtype=object)
array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../../Resources/Images/SBP/Steps/SBP2_19_SmartAddPanel_03.png',
None], dtype=object)
array(['../../Resources/Images/SBP/Steps/SBP2_19_SmartAddPanel_02.png',
None], dtype=object) ] | docs.toonboom.com |
When you create a Remote Settings key-value pair in the Unity Analytics Dashboard, the Unity Analytics Service stores that setting in the Configuration for your project that you have specified (either the Release or the Development configuration). Whenever a player starts a new session of your application, Unity makes a network request for the latest configuration from the Analytics Service. Unity considers a new session to have started when the player launches the application, or returns to an application that has been in the background for at least 30 minutes. Unity requests the Release configuration when running regular, non-development builds of your application, and requests the Development configuration when running development builds. Play mode in the Unity Editor counts as a development build.
Note: For Unity to request the Development configuration, you must build the application with Unity version 5.6.0p4+, 5.6.1p1+, 2017.1+, or Unity 5.5.3p4+, and tick the Development Build checkbox on the Build Settings window. If you build the game with an older version of Unity, Unity always requests the Release configuration.
When the network request for the Remote Settings configuration is complete, the RemoteSettings object dispatches an
Updated event to any registered event handlers, including those registered by Remote Settings components.
If the computer or device has no Internet connection and cannot communicate with the Analytics Service, Unity uses the last configuration it received and saved. The
RemoteSettings object still dispatches an
Updated event when using a saved configuration. However, if Unity has not saved the settings yet (such as when a player has no network connection the first time they run your game), then the
RemoteSettings object does not dispatch an
Updated event, and so does not update your game variables. Requesting the Remote Settings configuration over the network is an asynchronous process that might not complete before your initial Scene has finished loading, or might not complete at all, so you should always initialize your game variables to reasonable defaults.
Note: The web service from which Unity downloads the Remote Settings configuration is read-only, but is not secured. This means that the configuration could be read by third-parties. You should not put sensitive or secret information into your Remote Settings. Similarly, the saved settings file could be read and modified by end-users (although any modifications are overwritten the next time a session starts with an available Internet connection).
2017–05–30 Page published with editorial review
2017–05–30 - Service compatible with Unity 5.5 onwards at this date but version compatibility may be subject to change.
New feature in 2017.1 | https://docs.unity3d.com/ru/2017.3/Manual/UnityAnalyticsRemoteSettingsNetRequests.html | 2021-02-25T00:11:30 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.unity3d.com |
:
API_URL: This is your API endpoint, the URL of the Cloud Controller in your CFAR instance.
USERNAME: Your username.
PASSWORD: Your password. Use of the
-poption is discouraged as it may record your password in your shell history.
ORG: The org where you want to deploy your apps.
SPACE: The space in the org where you want to deploy your apps.
$, CFAR a CFAR: following example pushes an app called
my-awesome-app to the URL and specifies the Ruby buildpack with the
-b flag:
$NAME routesCreate a pull request or raise an issue on the source for this page in GitHub | https://docs.cloudfoundry.org/cf-cli/getting-started.html | 2020-01-17T21:41:53 | CC-MAIN-2020-05 | 1579250591234.15 | [] | docs.cloudfoundry.org |
Deploying an existing Windows 8.1 Image to a Surface 3 or Surface Pro 3 via a ConfigMgr 2012 R2 Task Sequence Stand-alone USB media
Deploying an existing Windows 8.1 Image to a Surface 3 or Surface Pro 3 via a ConfigMgr 2012 R2 Task Sequence Stand-alone USB media
This process will walk you through taking your existing Windows 8.1 Image and deploy it to a Surface 3 or Surface Pro 3 with a ConfigMgr 2012 R2 Task Sequence Stand-alone USB media.
This process is written for Surface 3 and Surface Pro 3 devices, while it can be used for Surface Pro 1 and 2 devices as well, it cannot be used for Surface 1 and 2 devices as they are only RTS models.
Section A - Preparation
- Make sure that your Windows 8.1 Image (WIM) has been tested before being used in this process
- Make sure you have downloaded the Latest Surface 3 \ Surface Pro 3 drivers from ""
- Make sure you have at least a 16GB USB Thumb Drive
Section B - Drivers & Boot Images
Before we create the task Sequence Media, we need to first assure that everything is set right in the Boot Images for supporting the Surface Pro Devices
This article is going to take the stance of starting you out in this section at walking you through importing the Surface 3 \ Surface Pro 3 drivers into ConfigMgr 2012
(Importing Surface 3 \ Surface Pro 3 drivers)
Perform these steps on the ConfigMgr Site Server.
Assure that you have the latest Surface Pro drivers from here: "" and expanded into a network share with read permissions for everyone.
1) Open the ConfigMgr 2012 Console
2) Navigate to Navigate to Software Library \ Operating Systems \ Drivers
3) Right click Drivers and select Folder \ Create Folder, name the folder, example: Surface_Drivers
4) Right click Drivers and select Import Driver
5) On the Specify a location to import driver,
- Source folder click Browse and enter the UNC path to the share with the Surface Drivers in it, then click OK
- Assure "Specify the option for duplicate drivers" is set to "Import the driver and append a new category to the existing categories", click Next
6) On the Specify the details for the imported driver,
- Review the list of drivers to be imported, assure "Enable these drivers and allow computers to install them" is checked
- Click the Categories... button
- Click the Create... button
- Set the Name, example: Surface_Drivers
- Then click OK twice and then Next
7) On the Select the packages to add the imported driver,
- Click New Package...
- Set the Name, example: Surface_Drivers
- Set the Path UNC path where you want to store the drivers, click OK
- Assure "Update distribution points when finished" is checked, click Next
8) On the Select drivers to include in the boot image,
- Assure all check boxes are clear (we will add them to boot images later), click Next
9) On the Wizard will import the following drivers, click Next
10) On the Import New Driver Wizard completed successfully, click Close
11) In the ConfigMgr 2012 Console, Navigate to Software Library \ Operating Systems \ Driver Packages
- Right click the package you just created, select Distribute Content
- Add the Distribution Points \ Distribution Groups
- Verify that the distribution was successful
(Prepairing the Boot Image)
Perform these steps on the ConfigMgr Site Server.
These steps assume that you are already using the latest ADK (Example ADK 8.1 at the time of writing) boot images imported into your ConfigMgr 2012 R2 environment.
These steps are only going to be touching the Boot Image (x64) and not the Boot Image (x86) as you will want to use the Boot Image (x64) with the Surface devices.
Open the ConfigMgr 2012 Console
1) Navigate to Navigate to Software Library \ Operating Systems \ Boot Images
2) Select the "Boot Image (x64)", right click and select "Properties"
- Select the "Customization" tab, check "Enable command support (testing only)"
- Select the "Drivers" tab:
- Click the yellow star to add drivers
- Add the Surface Ethernet Adapter drivers, click Ok
- Click OK to update the boot image
3) Assure the process completes with zero failures
4) Wait until all Distribution Points are updated
Section C - Task Sequence
Before we create the task Sequence Media, we need to first create a new Task Sequence, this article will walk you through creating a base Task Sequence.
This Task Sequence will be set to "Join a workgroup" instead of "Join a domain" as this is a Task Sequence Stand-alone USB media and not assuming you have network access.
You can edit to add Applications, Software Updates, Etc.. at a later time to customize the Task Sequence further.
(Create the Task Sequence)
1) Open the ConfigMgr 2012 Console
2) Navigate to Software Library \ Operating Systems \ Operating System Images \ Task Sequences
3) Right click Task Sequences, select Create Task Sequence
- Select Install an existing image package, click Next
- Set the Name (Example: Surface Deployment)
- Click Browse to select the Boot Image (x64) you added the Surface Drivers to, click Ok and then Next
- Click Browse to select the Image package, select your existing Windows 8.1 Image, click OK
- Uncheck the "Configure task sequence for use with Bitlocker" box
- If not using a KMS Server, enter the Product Key and select the Service licensing mode
- Select "Enable the account and specify the local Administrator password" then set the password, click Next
- Select "Join a workgroup" and specify a workgroup name, example: "Surface Devices", click Next
- Leave the ConfigMgr client Installation properties blank for now, click Next
- Uncheck the "Capture user settings and files" box, click Next
- Select "Do not install any software updates", click Next
- Leave the Applications default, click Next
- Confirm the settings, click Next
- When completed creation of the task sequence, click Close
(Edit the Task Sequence)
Open the ConfigMgr 2012 Console
2) Navigate to Software Library \ Operating Systems \ Operating System Images \ Task Sequences
3) Right click the Task Sequence you just created, select Edit
- Select "Partition Disk 0 - UEFI", double click the "Windows (Primary)" partition in the list on right
- Set the "Variable" at the bottom of the Partition Properties to OSDISK, click OK
- Select "Apply Operating System", Set Destination to "Logical drive letter stored in a variable"
- Set the "Variable name" to OSDisk
- If you are using an unattended or Sysprep answer file with your existing Windows 8.1 image package
- Check the "Use an unattended or Sysprep answer file for a customer installation" box
- Click Browse, select the package and click OK
- Enter the file name
- Select "Apply Device Drivers", select "Limit driver matching to only consider drivers in selected categories:"
- Place a check mark next to the Surface drivers you created earlier
- Check the "Do unattended installation of unsigned drivers on versions of Windows where this is allowed" box
- Click Ok to apply and close the task sequence changes
Section D - Create the Task Sequence Media
Perform these steps on a ConfigMgr 2012 Remote Console, (Example your workstation), this is because most people do not have access to plug a thumb drive into the server. You will need to plug your USB thumb drive into your machine at this time.
(Create the Task Sequence Media)
1) Open the ConfigMgr 2012 Console
2) Navigate to Navigate to Software Library \ Operating Systems \ Task Sequences
3) Right Click Task Sequences, select Create Task Sequence Media
- Select "Stand-alone media", click Next
- Select "USB flash drive" and select the Drive for the USB thumb drive, click Next
- If you want to "Specify a password to protect task sequence media" enter the password or uncheck the box, click Next
- Click the Browse button to select the "Task sequence" you created earlier, click Ok
- Check the "Detect associated application dependencies and add them to this media" box, click Next
- Select the "Distribution Point" in the top list you wish to use, click the Add button and then Next
- Leave the "Customization" page default, click Next
- Confirm the settings, click Next
- When completed, click Close
Section E - Summary
You now have a ConfigMgr 2012 R2 Stand-along media that will deploy your existing Windows 8.1 Image to Surface 3 or Surface 3 Pro devices.
A reminder that this process can be also used for Surface Pro 1 and 2 devices as well but not Surface 1 or 2 devices as they are RTS models.
Disclaimer: The information on this site is provided "AS IS" with no warranties, confers no rights, and is not supported by the authors or Microsoft Corporation. Use of any included script samples are subject to the terms specified in the Terms of Use:
Have a question about content? Join us on Yammer | https://docs.microsoft.com/en-us/archive/blogs/charlesa_us/deploying-an-existing-windows-8-1-image-to-a-surface-3-or-surface-pro-3-via-a-configmgr-2012-r2-task-sequence-stand-alone-usb-media | 2020-01-17T23:22:57 | CC-MAIN-2020-05 | 1579250591234.15 | [] | docs.microsoft.com |
The customer portal is a consumer-facing front-end catalog for product discovery, purchase, and deployment.
The customer portal is localized and uses browser language settings to choose the language.
Catalog
Also known as a "marketplace," the catalog displays products and solutions that are available for purchase or to Test Drive. Test drives provide a way to try the product before purchasing.
When you visit the catalog, you see a tile layout of products that are available to you. You can browse the products by simple paging, by clicking the category links in the category bar, or by searching to narrow down the results.
Product details
When you click a product, you are taken to the product details page, containing a description, features, images, videos, FAQs, and testimonials.
From here, you can choose to purchase the product or, if a Test Drive is available, try out the product before purchasing it.
Tools and settings
When you sign in to the catalog, a top navigation bar appears. Typical links included are:
- A configurable dashboard
- Reports
- Subscriptions
- Invoices
- Settings
For more information, see Customer tools and settings. | https://docs.orbitera.com/guides/marketplace/customer-portal-overview | 2020-01-17T22:45:22 | CC-MAIN-2020-05 | 1579250591234.15 | [array(['/images/screen-marketplace.png', 'Customer marketplace'],
dtype=object)
array(['/images/screen-customer-nav.png', 'Customer nav bar'],
dtype=object) ] | docs.orbitera.com |
This is a segmentation network to classify each pixel into 20 classes:
The quality metrics calculated on 2000 images:
IOU=TP/(TP+FN+FP), where:
TP- number of true positive pixels for given class
FN- number of false negative pixels for given class
FP- number of false positive pixels for given class
The blob with BGR image in format: [B, C=3, H=1024, W=2048], where:
[*] Other names and brands may be claimed as the property of others. | http://docs.openvinotoolkit.org/latest/_models_intel_semantic_segmentation_adas_0001_description_semantic_segmentation_adas_0001.html | 2020-01-17T21:00:26 | CC-MAIN-2020-05 | 1579250591234.15 | [] | docs.openvinotoolkit.org |
Configuration¶
Datasette provides a number of configuration options. These can be set using the
--config name:value option to
datasette serve.
You can set multiple configuration options at once like this:
datasette mydatabase.db --config default_page_size:50 \ --config sql_time_limit_ms:3500 \ --config max_returned_rows:2000
To prevent rogue, long-running queries from making a Datasette instance inaccessible to other users, Datasette imposes some limits on the SQL that you can execute. These are exposed as config options which you can over-ride.
default_page_size¶
The default number of rows returned by the table page. You can over-ride this on a per-page basis using the
?_size=80 querystring parameter, provided you do not specify a value higher than the
max_returned_rows setting. You can set this default using
--config like so:
datasette mydatabase.db --config default_page_size:50
sql_time_limit_ms¶
By default, queries have a time limit of one second. If a query takes longer than this to run Datasette will terminate the query and return an error.
If this time limit is too short for you, you can customize it using the
sql_time_limit_ms limit - for example, to increase it to 3.5 seconds:
datasette mydatabase.db --config sql_time_limit_ms:3500
You can optionally set a lower time limit for an individual query using the
?_timelimit=100 querystring argument:
/my-database/my-table?qSpecies=44&_timelimit=100
This would set the time limit to 100ms for that specific query. This feature is useful if you are working with databases of unknown size and complexity - a query that might make perfect sense for a smaller table could take too long to execute on a table with millions of rows. By setting custom time limits you can execute queries “optimistically” - e.g. give me an exact count of rows matching this query but only if it takes less than 100ms to calculate.
max_returned_rows¶
Datasette returns a maximum of 1,000 rows of data at a time. If you execute a query that returns more than 1,000 rows, Datasette will return the first 1,000 and include a warning that the result set has been truncated. You can use OFFSET/LIMIT or other methods in your SQL to implement pagination if you need to return more than 1,000 rows.
You can increase or decrease this limit like so:
datasette mydatabase.db --config max_returned_rows:2000
num_sql_threads¶
Maximum number of threads in the thread pool Datasette uses to execute SQLite queries. Defaults to 3.
datasette mydatabase.db --config num_sql_threads:10
allow_facet¶
Allow users to specify columns they would like to facet on using the
?_facet=COLNAME URL parameter to the table view.
This is enabled by default. If disabled, facets will still be displayed if they have been specifically enabled in
metadata.json configuration for the table.
Here’s how to disable this feature:
datasette mydatabase.db --config allow_facet:off
default_facet_size¶
The default number of unique rows returned by Facets is 30. You can customize it like this:
datasette mydatabase.db --config default_facet_size:50
facet_time_limit_ms¶
This is the time limit Datasette allows for calculating a facet, which defaults to 200ms:
datasette mydatabase.db --config facet_time_limit_ms:1000
facet_suggest_time_limit_ms¶
When Datasette calculates suggested facets it needs to run a SQL query for every column in your table. The default for this time limit is 50ms to account for the fact that it needs to run once for every column. If the time limit is exceeded the column will not be suggested as a facet.
You can increase this time limit like so:
datasette mydatabase.db --config facet_suggest_time_limit_ms:500
suggest_facets¶
Should Datasette calculate suggested facets? On by default, turn this off like so:
datasette mydatabase.db --config suggest_facets:off
allow_download¶
Should users be able to download the original SQLite database using a link on the database index page? This is turned on by default - to disable database downloads, use the following:
datasette mydatabase.db --config allow_download:off
allow_sql¶
Enable/disable the ability for users to run custom SQL directly against a database. To disable this feature, run:
datasette mydatabase.db --config allow_sql:off
default_cache_ttl¶
Default HTTP caching max-age header in seconds, used for
Cache-Control: max-age=X. Can be over-ridden on a per-request basis using the
?_ttl= querystring parameter. Set this to
0 to disable HTTP caching entirely. Defaults to 5 seconds.
datasette mydatabase.db --config default_cache_ttl:60
default_cache_ttl_hashed¶
Default HTTP caching max-age for responses served using using the hashed-urls mechanism. Defaults to 365 days (31536000 seconds).
datasette mydatabase.db --config default_cache_ttl_hashed:10000
cache_size_kb¶
Sets the amount of memory SQLite uses for its per-connection cache, in KB.
datasette mydatabase.db --config cache_size_kb:5000
allow_csv_stream¶
Enables the CSV export feature where an entire table (potentially hundreds of thousands of rows) can be exported as a single CSV file. This is turned on by default - you can turn it off like this:
datasette mydatabase.db --config allow_csv_stream:off
max_csv_mb¶
The maximum size of CSV that can be exported, in megabytes. Defaults to 100MB. You can disable the limit entirely by settings this to 0:
datasette mydatabase.db --config max_csv_mb:0
truncate_cells_html¶
In the HTML table view, truncate any strings that are longer than this value. The full value will still be available in CSV, JSON and on the individual row HTML page. Set this to 0 to disable truncation.
datasette mydatabase.db --config truncate_cells_html:0
force_https_urls¶
Forces self-referential URLs in the JSON output to always use the
https://
protocol. This is useful for cases where the application itself is hosted using
HTTP but is served to the outside world via a proxy that enables HTTPS.
datasette mydatabase.db --config force_https_urls:1
hash_urls¶
When enabled, this setting causes Datasette to append a content hash of the database file to the URL path for every table and query within that database.
When combined with far-future expire headers this ensures that queries can be cached forever, safe in the knowledge that any modifications to the database itself will result in new, uncachcacheed URL paths.
datasette mydatabase.db --config hash_urls:1
template_debug¶
This setting enables template context debug mode, which is useful to help understand what variables are available to custom templates when you are writing them.
Enable it like this:
datasette mydatabase.db --config template_debug:1
Now you can add
?_context=1 or
&_context=1 to any Datasette page to see the context that was passed to that template.
Some examples: | https://datasette.readthedocs.io/en/latest/config.html | 2020-01-17T22:19:07 | CC-MAIN-2020-05 | 1579250591234.15 | [] | datasette.readthedocs.io |
Building a development environment with Docker Compose¶
To follow this how-to you need to have Docker and Compose installed in your machine.
First clone the tsuru project from GitHub:
$ git clone
Enter the
tsuru directory and execute
build-compose.sh. It will
take some time:
$ cd tsuru $ ./build-compose.sh
At the first time you run is possible that api and planb fails, just run
docker-compose up -d to fix it.
$ docker-compose up -d
Now you have tsuru dependencies, tsuru api and one docker node running in your machine. You can check
running
docker-compose ps:
$ docker-compose ps
You have a fresh tsuru installed, so you need to create the admin user running tsurud inside container.
$ docker-compose exec api tsurud root-user-create [email protected]
Then configure the tsuru target:
$ tsuru target-add development -s
You need to create one pool of nodes and add node1 as a tsuru node.
$ tsuru pool-add development -p -d $ tsuru node-add --register address= pool=development
Everytime you change tsuru and want to test you need to run
build-compose.sh to build tsurud, generate and run the new api.
If you want to use gandalf, generate one app token and insert into docker-compose.yml file in gandalf environment TSURU_TOKEN.
$ docker-compose stop api $ docker-compose run --entrypoint="/bin/sh -c" api "tsurud token" // insert token into docker-compose.yml $ docker-compose up -d
Kubernetes Integration¶
One can register a minikube instance as a cluster in tsuru to be able to orchestrate tsuru applications on minikube.
Start minikube:
$ minikube start --insecure-registry=10.0.0.0/8
Create a pool in tsuru to be managed by the cluster:
$ tsuru pool add kubepool --provisioner kubernetes
Register your minikube as a tsuru cluster:
$ tsuru cluster add minikube kubernetes --addr https://`minikube ip`:8443 --cacert $HOME/.minikube/ca.crt --clientcert $HOME/.minikube/apiserver.crt --clientkey $HOME/.minikube/apiserver.key --pool kubepool
Check your node IP:
$ tsuru node list -f tsuru.io/cluster=minikube
Add this IP address as a member of kubepool:
$ tsuru node update <node ip> pool=kubepool
You are ready to create and deploy apps kubernetes. | https://docs.tsuru.io/stable/contributing/compose.html | 2020-01-17T22:22:37 | CC-MAIN-2020-05 | 1579250591234.15 | [] | docs.tsuru.io |
!
CAMP REQUIREMENTS: Please bring your laptop with the following minimum requirements: | https://docs.microsoft.com/en-us/archive/blogs/yungchou/technet-events-presents-it-camp-the-future-of-it | 2020-01-17T23:18:52 | CC-MAIN-2020-05 | 1579250591234.15 | [] | docs.microsoft.com |
@ExplainProc — Returns the execution plans for all SQL queries in the specified stored procedure.
@ExplainProc String procedure-name
The @ExplainProc system procedure returns the execution plans for all of the SQL queries within the specified stored procedure. Execution, or explain, plans describe how VoltDB expects to execute the queries at runtime, including what indexes are used, the order the tables are joined, and so on. Execution plans are useful for identifying performance issues in query and stored procedure design. See the chapter on execution plans in the VoltDB Guide to Performance and Customization for information on how to interpret the plans.
The following example uses @ExplainProc to evaluate the execution plans associated with the ContestantWinningStates stored procedure in the voter sample application.
try { VoltTable[] results = client.callProcedure("@ExplainProc", "ContestantWinningStates" ).getResults(); results[0].resetRowPosition(); while (results[0].advanceRow()) { System.out.printf("Query: %d\nPlan:\n%d", results[0].getString(0),results[0].getString(1)); } } catch (Exception e) { e.printStackTrace(); }
In the sqlcmd utility, the "explainproc" command is a shortcut for "exec @ExplainProc". So the following two commands are equivalent:
$ sqlcmd 1> exec @ExplainProc 'ContestantWinningStates'; 2> explainproc ContestantWinningStates; | https://docs.voltdb.com/UsingVoltDB/sysprocexplainproc.php | 2020-01-17T21:18:09 | CC-MAIN-2020-05 | 1579250591234.15 | [] | docs.voltdb.com |
Debian and Ubuntu Linux (18.04):
deb bionic main deb-src bionic main
Ubuntu Cosmic Cuttlefish (18.10)
deb cosmic main deb-src cosmic main
Ubuntu Disco Dingo (19.04):
Not yet available
Ubuntu Eoan Ermine (19.10)
Not yet available
Debian stable (Strech 9):
deb stretch main deb-src stretch main
Debian stable (Buster 10):
Not yet available
Debian Unstable:
deb unstable main deb-src unstable main
To install or upgrade a software package:
sudo apt-get update sudo apt-get install package_name
Replace package_name with the name of the software package.
Tar Archives
Some packages are available as tar archives:
Version Control Repositories
All python software packages can be installed system-wide using:
sudo python setup.py install
Debian Package Building. | https://docs-new.sipthor.net/w/debian_package_repositories/ | 2020-01-17T23:01:32 | CC-MAIN-2020-05 | 1579250591234.15 | [] | docs-new.sipthor.net |
Cart API Name: Cart__c Define editable fields at the shopping cart by product type. Label API Name Type Description Name Name Text(255) Fieldname FieldName__c Text(255) The name of the Item field. The name of the custom label. Read only ReadOnly__c Checkbox The field will be rendered as text and is not editable. Do not use for required fields. Required Required__c Checkbox Mark a input as required at the profile page. Fields which are required at the database level are marked as required, ignoring this setting. Sequence Sequence__c Number(4, 0) The components are ordered by sequence at the shopping cart. Type Type__c Text(255) The product type for which this field should be shown. If the field is empty, it will be used for all products. | https://docs.juston.com/sse_objects/Cart/ | 2020-01-17T22:13:31 | CC-MAIN-2020-05 | 1579250591234.15 | [] | docs.juston.com |
API Gateway
The Ethereum developer community is recognized as the largest and most vibrant Blockchain ecosystem in the world today. There are a huge number of tools for DApp developers to engineer high quality smart contracts and application experiences, and you will find instructions on how to use many of those tools with Kaleido in our marketplace.
Even with all of these tools in place, and the straightforward nature of the Ethereum to DApp interface compared to other technologies in the Blockchain space, the amount of code and experience needed to connect application logic reliably with a Blockchain network can be significant. Thick libraries such as web3.js, web3j and Nethereum are often embedded inside the application to handle RLP encoding of transactions, and the complexities of the JSON/RPC native protocol of Ethereum. Wallet and key management, signing and nonce management (which we discuss in detail later) must be handled in the application in a way that fits with the scaling characteristics.
In an Enterprise project the Blockchain specialized DApp developers comfortable with these complexities are usually a small subset of the engineers that need to integrate and connect applications. Core system integration unique to each participant is required to allow projects to have meaningful impact on business processes, and data often needs to exchanged with the chain by various systems.
Many of these systems are sensitive to change, and Enterprise Application Integration tools and Enterprise Service Bus (ESB) architectures are used to integrate with them, rather than making direct changes to the applications themselves.
So in practice, the Blockchain interface used by participants to connect their systems to the ledger must be via standards based Web APIs and Messaging constructs that are consumable and reliable for multiple Enterprises in the business network. It is then extremely common to use this same layer to abstract the business application logic, such as Microservices forming a business application and user experience, from the blockchain via this same simple to consume API.
Kaleido provides this API Gateway layer out of the box. Using modern REST APIs with OpenAPI (Swagger) documentation, and backed by an infrastructure for Key management, signing, event steam management and high volume reliable transaction submission. The Gateway also supports direct Kafka connectivity for direct streaming of transactions from core systems and event-driven ESB technologies. | https://docs.kaleido.io/kaleido-platform/full-stack/api-gateway/ | 2020-01-17T21:03:23 | CC-MAIN-2020-05 | 1579250591234.15 | [] | docs.kaleido.io |
Developer installation guide
These instructions outline how to install Opencast on a Linux system. Currently guided flavors are Fedora 30, Ubuntu 18.04 and Mac OS 10.14.
TL;DR
$ git clone $ cd opencast && mvn clean install -Pdev $ cd build/opencast-dist-develop-*/bin && ./start-opencast
You can find the default Admin UI at: localhost
Default credentials are:
- username: admin
- password: opencast
Configuring Git (optional)
$ git config --global user.name "Example Name" $ git config --global user.email [email protected] $ ssh-keygen -t rsa -b 4096 -C "[email protected]" $ cat ~/.ssh/id_rsa.pub
Go to: Github, click "New SSH Key" and paste your content of id_rsa.pub into the input field. It should look like:
ssh-rsawiodajsiodjaaosdiasdjsaddioasjosij== [email protected]
Now press "Add SSH Key" and return to your terminal and:
$ ssh -T [email protected] $ yes <enter>
Clone Opencast
You can get the Opencast source code by cloning the Git repository.
Cloning the Git repository:
$ git clone
Install Dependencies
General Information
Please make sure to install the following dependencies.
Required:
java-1.8.0-openjdk-devel.x86_64 / openjdk-8-jdk (other jdk versions untested / Oracle JDK strongly not recommended) ffmpeg >= 3.2.4 maven >= 3.1 python >= 2.6, < 3.0 unzip gcc-c++ tar bzip2
Required as a service for running Opencast:
ActiveMQ >= 5.10 (older versions untested)
Required for some services. Some tests may be skipped and some features may not be usable if they are not installed. Hence, it's generally a good idea to install them.
tesseract >= 3 hunspell >= 1.2.8 sox >= 14.4 synfig
Ubuntu 18.04
Update System
$ sudo apt update && sudo apt upgrade -y
Install Packages via APT
$ sudo apt install -y git openjdk-8-jdk maven gcc g++ build-essential cmake curl sox hunspell synfig ffmpeg
Install NodeJS (optional)
$ curl -sL | sudo -E bash - $ sudo apt-get install -y nodejs $ sudo npm install -g eslint
Install Chrome
$ cd && cd Downloads && wget && sudo dpkg -i google-chrome-stable_current_amd64.deb
Set System Java JDK
Choose the Java Version 1.8.0 by entering:
$ sudo update-alternatives --config java
Set the JAVA_HOME Variable
Open your .bashrc
$ cd && nano .bashrc
and paste following content at the end of the file:
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
Fedora 30
Update System
$ sudo dnf update -y
Install Packages via DNF and RPM Fusion
$ sudo dnf install(rpm -E %fedora).noarch.rpm(rpm -E %fedora).noarch.rpm -y $ sudo dnf group install 'Development Tools' -y && $ sudo dnf install -y java-1.8.0-openjdk ffmpeg maven tesseract hunspell sox synfig unzip gcc-c++ tar bzip2
Install NodeJS (optional)
$ sudo dnf install -y nodejs $ sudo npm install -g eslint
Install Chrome
$ cd && cd Downloads && wget && sudo dnf install google-chrome-stable_current_x86_64.rpm -y
Mac OS 10.14
Update System
Try to install all updates via the App Store or the Apple Icon on the top left corner.
Java JDK 8
Install the JDK 8 by downloading it from
XCode
Install XCode over the App Store. It will be needed for building and for git.
Install Packages via Homebrew
The Homebrew Project adds a package manager to Mac OS. You can install it by:
$ /usr/bin/ruby -e "$(curl -fsSL)"
You can now install needed packages:
$ brew install maven ffmpeg
Install NodeJS (optional)
$ brew install nodejs $ sudo npm install -g eslint
ActiveMQ with Homebrew
Homebrew offers you an ActiveMQ Package. Please decide, if you want to use Homebrew for ActiveMQ or if you want to follow the general guide below and run it by downloading the binaries. If you want continue you can install ActiveMQ by
$ brew install activemq
Remember to copy the activemq.xml like mentioned below in the right directory. You have to find the right folder for that operation, Homebrew will put the ActiveMQ files in a different location. You could find it by
$ sudo find / -name activemq.xml
After changing the configuration file you can list and start or stop you services with
$ brew services list $ brew services start activemq
Git Bash Completion
In Mac OS you can not complete or suggest half typed commands with your Tab Key (like you probably know from linux). If you want to use bash completion, you have to install it by
$ brew install bash-completion
Find the location of the configuration file
$ sudo find / -type f -name "git-completion.bash"
Normally it should be in
$ cp /Library/Developer/CommandLineTools/usr/share/git-core/git-completion.bash /usr/local/etc/bash_completion.d/
Then add following line to the bash_profile in home
[ -f /usr/local/etc/bash_completion ] && . /usr/local/etc/bash_completion
Finally apply your changes with
$ source /usr/local/etc/bash_completion.d/git-completion.bash
Install and Configure ActiveMQ
Download the current version from
Extract and copy it to a directory, in this case you could use the opt directory.
$ sudo tar -zxvf apache-activemq-*-bin.tar.gz -C /opt $ cd /opt && sudo mv apache-activemq-*/ activemq
Copy the preconfigured XML from your opencast directory into your ActiveMQ configuration. In this example you have following folder structure:
- ~/Projects/opencast
- /opt/activemq
With that folder structure you could use following command:
$ cd && cd Projects && sudo cp opencast/docs/scripts/activemq/activemq.xml /opt/activemq/conf/activemq.xml
If your folder structure is different from that example or you do decide to put it somewhere else, you should copy and replace the preconfigured XML from
- /location/to/your/opencast/docs/scripts/activemq/activemq.xml
into
- /location/to/your/activemq/conf/activemq.xml
You can start your ActiveMQ instance with:
$ sudo ./location/to/your/activemq/bin/activemq start
Build and Start Opencast
You can build now opencast by changing your directory into your opencast location and by running:
$ mvn clean install
After the successfully compilation you can start opencast with:
$ cd build/opencast-dist-develop-*/bin && ./start-opencast
The
-Pdev argument decreases the build time and skips the creation of multiple tarballs and turning on the developer tarball.
$ cd opencast && mvn clean install -Pdev $ cd build/opencast-dist-develop-*/bin && ./start-opencast
For further information visit Development Environment.
Useful Commands for Testing Purposes
For a quick build, you can use the following command to skip Opencast's tests.
$ cd opencast $ mvn clean install -Pdev -DskipTests=true
To see the whole stacktrace of the installation you can use the following command to disable the trimming.
$ cd opencast $ mvn clean install -DtrimStackTrace=false
If you want to start opencast in debug mode, you could use the debug argument:
$ cd build/opencast-dist-develop-*/bin && ./start-opencast debug
Modify Code and Build Changes
After you modified your code you can go back to step "Build and Start Opencast" to rebuild Opencast.
Common Build Errors or Fixes
NPM Access Error
To fix an npm access error (example), you can run
$ sudo chown -R $USER:$(id -gn $USER) ~/.config && sudo chown -R $USER:$(id -gn $USER) ~/.npm
JDK Version
Some IDEs attempt to use the most recent version of the JDK. Make sure that your IDE is configured to use JDK 1.8.0.
Waiting for ActiveMQ
Opencast requires ActiveMQ to be both running and properly configured, otherwise it will wait forever to connect. See here for details on how to configure ActiveMQ. Make sure, that ActiveMQ runs without errors and with the right JAVA_HOME Variable (explained here).
Slow Idea Fix
Edit following file
$ sudo nano /etc/sysctl.conf
and copy this text into it
fs.inotify.max_user_watches = 524288
Apply your changes with
$ sudo sysctl -p --system
Intellij Idea IDE Community Edition (optional)
If you are currently on Fedora, you can install it with following command. Make sure, that the versions match, you probably have to change it depending on the most current version.
$ cd && cd Downloads && wget $ sudo tar -zxvf ideaIC-*.tar.gz -C /opt $ cd /opt && sudo mv idea-IC-*/ idea && sh /opt/idea/bin/idea.sh
Otherwise install it by downloading and following the manufacturer guide, select Community Edition:
IDEA Intellij Community Edition
Follow the next steps, if you want to import opencast correctly
- Import project from external model
- Choose Maven
- Uncheck all listed profiles
- Check all projects to import
- Make sure not to select JDK 11, please select JDK 1.8.0, it should be somewhere around /usr/lib/jvm/java-1.8.0-openjdk depending on your current system
Now Idea should import the projects, it could take some time, you can make it faster by following this.
Import the opencast code style configuration by following the steps
- Go to settings
- You should find it under Editor->Code Style
- Select Java and click on the gear icon
- Select Import Scheme and Intellij IDEA code style XML
- Import it from opencast/docs/intellij_settings/codestyle.xml
Now your IDE should be ready for developing.
Visual Studio Code Editor (optional)
If you are currently on Fedora, you can install it with
$ cd && cd Downloads && sudo rpm --import && sudo sh -c 'echo -e "[code]\nname=Visual Studio Code\nbaseurl=\nenabled=1\ngpgcheck=1\ngpgkey=" > /etc/yum.repos.d/vscode.repo' && dnf check-update && sudo dnf install code -y
Otherwise install it by downloading and following the manufacturer guide:
After installation you can open a folder in bash with
$ code .
Recommended Extensions are
- ESLint by Dirk Baeumer
- AngularJs 1.x Code Snippets by alexandersage
- Debugger for Chrome by Microsoft | https://docs.opencast.org/develop/developer/installation/source-linux/ | 2020-01-17T22:41:56 | CC-MAIN-2020-05 | 1579250591234.15 | [] | docs.opencast.org |
You can add or remove the desired performance rating scale in the questionnaire designer.
Step 1: Open the rating scale editor
Open the questionnaire designer for your project, and then click to edit the rating scale.
Step 2: Update the questionnaire to add or remove the desired performance scale
You can choose whether or to include the desired performance as shown below.
If you do include it, you can also customize the labels used for the current and desired performance rating scales.
Not yet decided if you want to include the desired rating scale?
Find out when to ask for feedback on both current and desired performance. | https://docs.spidergap.com/en/articles/2646597-how-to-add-or-remove-the-desired-performance-rating-scale | 2020-01-17T21:05:29 | CC-MAIN-2020-05 | 1579250591234.15 | [array(['https://downloads.intercomcdn.com/i/o/97464573/a849bd7fec561b55096b7372/image.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/97464679/e5ccdd8ceb5a415a20fd46c8/image.png',
None], dtype=object) ] | docs.spidergap.com |
Product Index
Choose the best face for each situation and express what you want. “He is Grumpy” contains 40 detailed and realistic expressions for Ollie 8 and Genesis 8 Male. | http://docs.daz3d.com/doku.php/public/read_me/index/51249/start | 2020-01-17T22:09:01 | CC-MAIN-2020-05 | 1579250591234.15 | [] | docs.daz3d.com |
Kubernetes and OpenShift Guide¶
Modules for interacting with the Kubernetes (K8s) and OpenShift API are under development, and can be used in preview mode. To use them, review the requirements, and then follow the installation and use instructions.
Requirements¶
To use the modules, you’ll need the following:
- Run Ansible from source. For assistance, view running from source
- OpenShift Rest Client installed on the host that will execute the modules
Installation and use¶
The individual modules, as of this writing, are not part of the Ansible repository, but they can be accessed by installing the role, ansible.kubernetes-modules, and including it in a playbook.
To install, run the following:
$ ansible-galaxy install ansible.kubernetes-modules
Next, include it in a playbook, as follows:
--- - hosts: localhost remote_user: root roles: - role: ansible.kubernetes-modules - role: hello-world
Because the role is referenced,
hello-world is able to access the modules, and use them to deploy an application.
The modules are found in the
library folder of the role. Each includes full documentation for parameters and the returned data structure. However, not all modules include examples, only those where testing data has been created.
Authenticating with the API¶
By default the OpenShift Rest Client will look for
~/.kube/config, and if found, connect using the active context. You can override the location of the file using the``kubeconfig`` parameter, and the context, using the
context parameter.
Basic authentication is also supported using the
username and
password options. You can override the URL using the
host parameter. Certificate authentication works through the
ssl_ca_cert,
cert_file, and
key_file parameters, and for token authentication, use the
api_key parameter.
To disable SSL certificate verification, set
verify_ssl to false.
Filing issues¶
If you find a bug or have a suggestion regarding individual modules or the role, please file issues at OpenShift Rest Client issues.
There is also a utility module, k8s_common.py, that is part of the Ansible repo. If you find a bug or have suggestions regarding it, please file issues at Ansible issues. | https://docs.ansible.com/ansible/latest/scenario_guides/guide_kubernetes.html | 2020-01-17T22:19:59 | CC-MAIN-2020-05 | 1579250591234.15 | [] | docs.ansible.com |
is now set to
truebyand
updateOptOut
- Removed GET /series/{id}/optOut
-}
Additional Notes About 7.1
Opencast 7.1 is the first maintenance release for Opencast 7. It fixes a bug with the scheduler migration which may have caused minor issues for old, processed events which were missing some metadata. If you have already migrated to Opencast 7.0 and experience this problem, simply re-start the scheduler migration and re-build the index once more.
Additional Notes About 7.2
Opencast 7.2 fixes a bug in the video editor configuration present in Opencast 7.0 to 7.1 which will cause Opencast to always silently skip the video editor and publish the whole video. The problem was introduced by a fix in the default workflows and later fixed again by a configuration change therein . If you use the default workflows, please make sure to update to the latest state of the workflows.
If you use your own workflow and did not adapt the first patch, you should not be affected by this problem at all. If you are, just make sure that source and target smil flavor for the editor workflow operation are identical like it is ensured by the official fix. A proper solution not relying on specific configurations and less error prone is in work and will be added to the upcoming major Opencast release.
Additional Notes About 7.3
Opencast 7.3 fixes a bug where the audio-source-name parameter of the composite operation defaults to none instead of "dual" in distributed systems, causing the resulting video to have no audio. A workaround for this problem entails setting audio-source-name to "dual" explicitly. Additionally this release fixes another problem in the composite WOH where using the watermark tool causes the operation to fail.
This release also solves a major problem during the index rebuild of the workflow service where the process will sometimes fail with an OutOfMemory error because it's attempting to load all workflows at once. This problem is averted by loading and processing the workflows in increments instead. If you encountered this problem or plan to update to 7 and have a lot of workflows in your system, you should update to 7.3.
7.3 also ensures that the start dates shown in the events table and in the event details are once more consistent by using the bibliographic for both instead of showing the technical date in the events table since this can't be updated via UI for uploaded events and leads to confusion.
Last but not least this release also fixes a known security vulnerability in Apache Santuario, so it is encouraged to update.
Aditional Notes About 7.4
Opencast 7.4 brings a performance fix for some queries in the search API that can cause Opencast to run out of memory.
This release also gives back a patch that was in version 6.x that allows to filter capture agent roles for ACLs and fixes the date cell of the events overview table in the admin UI.
Finally, Opencast 7.4 also brings an update to the CAS security example that was out of date.
Aditional Notes About 7.5
Opencast 7.5 fixes behaviour where the bibliographic date was not changed when changing the technical date via Rest. Also an option was added to disable thumbnail generation in the video editor because it can lead to performance issues.
Release Schedule
Release Managers
- Maximiliano Lira Del Canto (University of Cologne)
- Katrin Ihler (ELAN e.V.) | https://docs.opencast.org/r/7.x/admin/releasenotes/ | 2020-01-17T21:32:17 | CC-MAIN-2020-05 | 1579250591234.15 | [] | docs.opencast.org |
Difference between revisions of "Components Banners Tracks"
From Joomla! Documentation
Revision as of 18:10, 21 March
Components Help Screens
- Components Banners Banners
- Components Banners Banners Edit
- Components Banners Categories
- Components Banners Categories Edit
- Components Banners Clients
- Components Banners Clients. The name of the Track.
- Client. The name of the Banner Client. You may click on the name to open the Client for editing.
- Category.
- Type.
- Count.
- Date.
List Filters
Filter by Begin and End Date
In the upper right area, above the column headings, there are two text boxes as shown below:
Filter by Client, Category and Type
In the upper right area, above the column headings, there are three drop-down list boxes as shown below:
The selections may be combined. Only items matching all selections will display in the list.
- right you will see the toolbar:
The functions are:
- Export. Export banner tracking information in a CSV file. Options to name the file and compress it after the button is clicked.
- Delete. Deletes the selected tracks. Works with one or multiple tracks selected.
- | https://docs.joomla.org/index.php?title=Help25:Components_Banners_Tracks&diff=83486&oldid=82062 | 2015-04-18T07:50:02 | CC-MAIN-2015-18 | 1429246633972.52 | [array(['/images/2/27/Help25-banners-manage-tracks-filter-attributes-beginning-end.png',
'Help25-banners-manage-tracks-filter-attributes-beginning-end.png'],
dtype=object) ] | docs.joomla.org |
Difference between revisions of "Section"
From Joomla! Documentation
Revision as of 08:19, 24 June 2013
This Namespace has been archived - Please Do Not Edit or Create Pages in this namespace. Pages contain information for a Joomla! version which is no longer supported. It exists only as a historical reference, will not be improved and its content may be incomplete.
<translate></translate> | https://docs.joomla.org/index.php?title=J1.5:Section&diff=101023&oldid=101018 | 2015-04-18T07:27:41 | CC-MAIN-2015-18 | 1429246633972.52 | [] | docs.joomla.org |
. 178 (1987)
Up
Up
76 Op. Att'y Gen. 178, 178 (1987)
Conflict Of Interest; Contracts; County Board; Public Officials;
Section 946.13, Stats., which prohibits private interests in public contracts, applies to county board or department purchases aggregating more than $5,000 from a county supervisor-owned business. OAG 42-87
August 21, 1987
76 Op. Att'y Gen. 178, 178 (1987)
Gary J. Schuster
,
District Attorney
Door County
76 Op. Att'y Gen. 178, 178 (1987)
You have asked for my opinion whether section 946.13(1), Stats., is violated under the following situations.
76 Op. Att'y Gen. 178, 178 (1987)
In the first case, a county board supervisor owns a small business that is sufficiently large that he is not on the premises at all times. Throughout the year, various county departments purchase from the store items that are budgeted for but which do not require bids. County policy does not require prior committee approval before the purchases, which are not immediately paid for. For payment, vouchers are submitted to the finance committee and then to the entire county board for review and approval. The owner of the store sits on the finance committee and votes on the approval of the vouchers. Almost inevitably, the vouchers for all purchases, including those from other stores as well as the supervisor's, are approved unanimously. During the year, the aggregate of county purchases from the supervisor's store exceeds $5,000.
76 Op. Att'y Gen. 178, 178 (1987)
In the second case, another supervisor owns a printshop from which various county departments purchase stationery, books, directories, etc., for a total amount of less than $5,000 a year. In addition, however, the county board negotiates and enters into a contract with this supervisor for the printing of the county's business stationery and directories. The amount of this contract is barely under $5,000. The total of the contract and the purchases exceeds $5,000. The supervisor does not absent himself from meetings or voting on approval of vouchers.
76 Op. Att'y Gen. 178, 178 (1987)
The relevant parts of section 946.13 provide:
76 Op. Att'y Gen. 178, 178 (1987)
Private interest in public contract prohibited. (1) Any public officer or public employe who does any of the following is guilty of a Class E felony:
76 Op. Att'y Gen. 178, 178-179 (1987)
(a)
In his private capacity, negotiates or bids for or enters into a contract in which he has a private pecuniary interest, direct or indirect, if at the same time he is authorized or required by law to participate in his capacity as such officer or employe in the making of that contract or to perform in regard to that contract some official function requiring the exercise of discretion on his part; or
76 Op. Att'y Gen. 178, 179 . 178, 179 (1987)
(2)
Subsection (1) does not apply to the following:
76 Op. Att'y Gen. 178, 179 (1987)
(a)
Contracts in which any single public officer or employe is privately interested which do not involve receipts and disbursements by the state or its political subdivision aggregating more than $5,000 in any year.
76 Op. Att'y Gen. 178, 179 (1987)
In my opinion, both supervisors are in violation of subsections (a) and (b) of section 946.13(1).
76 Op. Att'y Gen. 178, 179 (1987)
The supervisors are violating subsection (b) because in their official capacity they are performing in regard to contracts in which they have a pecuniary interest, a function requiring the exercise of their discretion when they vote on the vouchers that approved the payment to their respective businesses. Voting to approve the vouchers is a function performed in regard to the contract.
See
24 Op. Att'y Gen. 422, 423 (1935), in which it was stated that an officer in his official capacity acts upon a contract when he, among other things, is under legal obligation to "pay the bill for the services or goods or to approve the purchase price and order warrants to be drawn in payment thereof."
76 Op. Att'y Gen. 178, 179 (1987)
The supervisors can avoid violation of subsection (b) by abstaining from voting on the vouchers related to their respective businesses. Because actual participation in one's official capacity is required to violate section 946.13(1)(b), my predecessors and I have concluded that violation of subsection (b) can be avoided by not participating in the making of the contract or in the performance of a function requiring the exercise of the official's discretion.
See
76 Op. Att'y Gen. 92 (1987); 76 Op. Att'y Gen. 15 (1987); 75 Op. Att'y Gen. 172 (1986); and 63 Op. Att'y Gen. 43 (1974).
76 Op. Att'y Gen. 178, 179-180 (1987)
The only way for the supervisors to do business with the county and avoid violating subsection (a), however, is to make sure that their sales to the county do not exceed a total of $5,000 in a year. The three elements required for violation of subsection (a) are:
76 Op. Att'y Gen. 178, 180 (1987)
.
76 Op. Att'y Gen. 178, 180 (1987)
75 Op. Att'y Gen. at 173.
76 Op. Att'y Gen. 178, 180 (1987)
All three elements are satisfied in the situations you presented. The supervisors certainly have a pecuniary interest in the county's purchase of products from their respective businesses. The supervisors participated in their private capacities in the county's purchases because the county made the purchases from the supervisors' businesses. It would be immaterial if the business employe directly involved in the sale was someone other than the owner-supervisor. 4 Op. Att'y Gen. 205 (1915). When the sale was made, the supervisor participated through the actions of his or her agent. 75 Op. Att'y Gen. at 173. The third element would have been satisfied even if the supervisors had abstained from voting on the vouchers since the third element does not require actual participation in one's official capacity. This element is satisfied as long as the supervisor has the authority to act in regard to the contract in his or her official capacity. Because the supervisor has the authority to approve the vouchers for payment to his or her business, the third element is satisfied.
76 Op. Att'y Gen. 178, 180 (1987)
The dollar limit imposed by section 946.13(2) must always be kept in mind, however. Even if the supervisor participates in both his or her private and official capacities, there is no violation of the statute unless the receipts or disbursements by the county in regard to the individual supervisor exceed $5,000 for the year. Therefore the supervisors can continue to do business with the county as long as that $5,000 limit is not exceeded.
76 Op. Att'y Gen. 178, 180-181 (1987)
The conclusions reached in this opinion might be viewed by some as placing both the supervisors and the county at a disadvantage by limiting the possible parties with which each can do business. My conclusions, however, are consistent with earlier opinions, which advised that officials could not exceed the applicable dollar limit in doing business with their respective jurisdictions.
See
34 Op. Att'y Gen. 430 (1945); 25 Op. Att'y Gen. 357 (1936); 25 Op. Att'y Gen. 308 (1936); 24 Op. Att'y Gen. 312 (1935); 22 Op. Att'y Gen. 262 (1933); 21 Op. Att'y Gen. 537 (1932); 18 Op. Att'y Gen. 329 (1929); and 4 Op. Att'y Gen. 205 (1915). The Legislature has tried to reduce the disadvantage by raising the dollar limit over the years to the current $5,000 figure.
76 Op. Att'y Gen. 178, 181 (1987)
DJH:SWK
___________________________
Down
Down
/misc/oag/archival/_285
false
oag
/misc/oag/archival/_285
oag/vol76-178
oag/vol76-178
section
true
»
Miscellaneous Documents
»
Opinions of the Attorney General
»
Opinions of the Attorney General - prior to 2000
»
76 Op. Att'y Gen. 178 (1987)
×
Details for
PDF view
Link
(Permanent link)
Bookmark this location
View toggle
Go to top of document
Cross references for section
Acts affecting this section
References to this
1970 Statutes Annotations
Appellate Court Citations
Reference lines
Clear highlighting | https://docs.legis.wisconsin.gov/misc/oag/archival/_285 | 2015-04-18T07:17:19 | CC-MAIN-2015-18 | 1429246633972.52 | [] | docs.legis.wisconsin.gov |
JSessionStorageDatabase::write
From Joomla! Documentation
Revision as of::write
Description
Write session data to the SessionHandler backend.
Description:JSessionStorageDatabase::write [Edit Descripton]
public function write ( $id $data )
- Returns boolean True on success, false otherwise.
- Defined on line 83 of libraries/joomla/session/storage/database.php
- Since
See also
JSessionStorageDatabase::write source code on BitBucket
Class JSessionStorageDatabase
Subpackage Session
- Other versions of JSessionStorageDatabase::write
SeeAlso:JSessionStorageDatabase::write [Edit See Also]
User contributed notes
<CodeExamplesForm /> | https://docs.joomla.org/index.php?title=API17:JSessionStorageDatabase::write&direction=next&oldid=57651 | 2015-04-18T08:23:38 | CC-MAIN-2015-18 | 1429246633972.52 | [] | docs.joomla.org |
Difference between revisions of "Adding a new Poll"
From Joomla! Documentation
Latest revision as of 18:16, 26 May 2013
The Joomla! Poll Manager allows you to create polls using the multiple choice format on any of your Web site pages. They can be either published in a module position using the Poll module or in a menu item using the Poll component.
Contents.
Parameters: Module Parameters
- Select the Poll from the drop down list
- Type the CSS class (if needed) that is included with your CSS file (Cascading Style Sheet). This class is determined by the author of your template.
Advanced Parameters
- Select Global or Cashing from the drop down Cashing menu.
- Global is the setting you have for this module in the Global Configurations section of your Administrator site. (??? to verify)
- Cashing is the setting for (??? What is this for?)
- Click the Save or Apply toolbar button to implement the new settings:
You can now visualize your work in the Front-end of your site.
How to Publish Your Poll Results as a Menu Item
- Click the Menu> Mainmenu (or other menu) menu item to view the Menu Item Manager: [mainmenu] screen.
- Click the New toolbar button to create a new menu item.
- Select Poll> Poll Layout from the list of Select Menu Item Type
Menu Item Details:
- Type the title of your poll in the Title field.
- Type the abbreviated title in the Alias field.
- Leave the Link field as is. (??? - to verify)
- Select the menu from the Display In drop down menu that you wish to present your poll results.
- Select the parent/child menu item as to where you wish your menu item to be located in the Parent Item drop down menu.
- Select the No or Yes radio button to publish/unpublish your new menu item to your site.
- New Menu Items default to the last place. Ordering can be changed after this Menu Item is saved in the Order drop down menu.
- Select the Access Level as to who is able to see this module on your Web site.
- Click from the On Click, Open in: items:
- Parent Window with Browser Navigation (creates the link within your site with browser navigation)
- New Window with with Browser Navigation (creates an external link with browser navigation)
- New Window without Browser Navigation (creates an external link without browser navigation)
Parameters - Basic:
- Select the poll from the Poll drop down menu.
Parameters - System
- Type (if needed) the page title. (If left blank, the menu item title will be used)
- Select the No or Yes radio button to publish/unpublish your page title.
- Type the CSS class of your page if different from the standard CSS class for pages.
- Select from the Menu Image drop down menu an image that goes to the left or right of your menu item.
- Select the SSL Enabled radio button Off, Ignored or On. This selects whether or not this link should use SSL and the Secure Site URL.
You can now visualize your work in the Front-end of your site. | https://docs.joomla.org/index.php?title=J1.5:Adding_a_new_Poll&diff=99581&oldid=8207 | 2015-04-18T07:32:40 | CC-MAIN-2015-18 | 1429246633972.52 | [] | docs.joomla.org |
Changes related to "Category:Needs technical review"
← Category:Needs technical review
This is a list of changes made recently to pages linked from a specified page (or to members of a specified category). Pages on your watchlist are bold.
15 April 2015
09:12Secure coding guidelines (diff; hist; +4) Chris Davenport
| https://docs.joomla.org/index.php?title=Special:RecentChangesLinked&limit=100&target=Category%3ANeeds_technical_review | 2015-04-18T07:27:27 | CC-MAIN-2015-18 | 1429246633972.52 | [array(['/extensions/CleanChanges/images/Arr_.png', None], dtype=object)
array(['/extensions/CleanChanges/images/showuserlinks.png',
'Show user links Show user links'], dtype=object) ] | docs.joomla.org |
Turbulence Modifier
Summary
This modifier applies turbulence to particle movement. The effect of the various options can be quite subtle. The best way to see what they do is to create a pair of emitters with identical settings, and link a separate turbulence modifier and trail object to each emitter. Now you can leave one modifier unchanged and alter the settings on the other to see the effect. If you use a different XP material to render each trail object, you can get a clear comparison between the two modifiers..
Noise Type
Note: if the particle speed is zero, not all of these noise types will cause the particles to start moving. Turbulence, Wavy Turbulence and fBm all require that the particle has an initial speed; if the speed is zero, these modes have no effect. Standard, Curl, and Perlin noise types will, however, work even if the initial particle speed is zero.
This drop-down has six options:
Standard
This gives the same turbulence effect as the standard Cinema 4D modifier.
Turbulence, Wavy Turbulence and fBm
These add different types of turbulence to the particle stream. Generally, Wavy Turbulence gives a smoother effect but this and Turbulence are fairly similar.
fBm (fractal Brownian motion) produces a more irregular, chaotic movement, especially at lower scale and higher octaves (e.g. try a Scale of 50% and Octaves of 5).
Curl
Selecting 'Curl' brings up the following settings:
Offset
This setting offsets the noise field used by Curl (results in a different curl velocity).
Blend
The setting is a blend between the particle's velocity and the curl velocity.
Add
This is the percentage of the curl velocity to add to the particle's velocity.
Distance
This setting controls the curl noise field distance from the sources.
Sources
Drop curl noise sources into this list (these can be geometry, other particles, particle groups or an emitter).
Distance
This is a falloff from the distance over which the velocity field of the curl is affected (squashed to reduce the velocity along the normal to the surface).
Drag
This setting slows the velocity field around the surface (tangential, so over the surface).
Boundaries
Drag geometry objects (primitives or polygons only) into this list to act as boundaries that "squash" the curl velocity field (it becomes tangential to the surfaces).
Simplex (Perlin) Noise
This is an advanced version of the original Perlin noise. For this noise it is recommended that 3-8 octaves and a strength of 30 or higher are used. With lower values the effects are much reduced.
Selecting this option adds these parameters to the interface:
Type
This is the simplex noise type. You can choose between 'Raw', which is the original Perlin noise, or 'MultiOctave', which is more advanced and can produce better results.
Persistence
Only used in the 'MultiOctave' noise. The higher the value, the more of each octave is added into the noise, and gives a smoother result.
Turbulence Active on Axis
By default the turbulence effect works on all three axes. You can choose which axes the effect should work on by checking or unchecking the 'X axis', 'Y axis', and 'Z axis' switches.
Scale
In Standard mode this has the same effect as in the standard Cinema 4D turbulence modifier. In the other modes, larger scales tend to damp down the movement of the particle, giving smoother movement. By contrast a smaller scale will produce more chaotic movement.
Frequency
This is only available in Standard and Curl modes. It changes the frequency of the internal noise generator used by the modifier, so the higher the value, the more rapidly the noise changes.
Octaves
This parameter influences the frequency of the turbulence. It is not available for the 'Standard' turbulence mode. A value of zero will result in no change in direction; larger values will result in more frequent and dramatic changes.
Strength
As you might expect, the higher the strength value the greater the effect, that is, the greater the change in particle velocity (speed and direction).
Periodic Variation
Only available in 'Turbulence' noise type.. Try a value of 2 or 4 and watch the particle stream - after a certain amount of time, you see that the stream repeats itself. | http://docs.x-particles.net/html/turbmod.php | 2021-10-16T03:28:04 | CC-MAIN-2021-43 | 1634323583408.93 | [array(['../images/turbmod_1.jpg', None], dtype=object)
array(['../images/turbmod_2.jpg', None], dtype=object)
array(['../images/turbmod_3.jpg', None], dtype=object)] | docs.x-particles.net |
Additional information for multi-factor authentication
Contributors
Download PDF of this page
You should be aware of the following caveats in relation to multi-factor authentication.
In order to refresh IdP certificates that are no longer valid, you will need to use a non-IdP admin user to call the following API method:
UpdateIdpConfiguration
MFA is incompatible with certificates that are less than 2048 bits in length. By default, a 2048-bit SSL certificate is created on the cluster. You should avoid setting a smaller sized certificate when calling the API method:
SetSSLCertificate
IdP admin users cannot be used to make API calls directly (for example, via SDKs or Postman) or used for other integrations (for example, OpenStack Cinder or vCenter Plug-in). Add either LDAP cluster admin users or local cluster admin users if you need to create users that have these abilities. | https://docs.netapp.com/us-en/element-software/storage/concept_system_manage_mfa_additional_information_for_multi_factor_authentication.html | 2021-10-16T04:02:31 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.netapp.com |
Custom configuration options¶
OpenVPN offers dozens of configuration options, many beyond the most commonly used fields presented in the GUI. This is why the Advanced configuration box exists. Additional configuration options may be configured using this input area, separated by semicolons.
This section covers the most frequently used custom options individually. There are many more, though rarely needed. The OpenVPN man page details them all.
Warning
Exercise caution when adding custom options, there is no input validation applied to ensure the validity of options used. If an option is used incorrectly, the OpenVPN client or server may not start. View the OpenVPN logs under Status > System logs on the OpenVPN tab to ensure the options used are valid. Any invalid options will result in a log message, followed by the option that caused the error:
Options error: Unrecognized option or missing parameter(s)
Routing options¶
To add additional routes for a particular OpenVPN client or server, use the Local Network and Remote Network boxes as needed, using a comma-separated list of networks.
The
route custom configuration option may also be used, but is no longer
necessary. Some users prefer this method, however. The following example adds a
route for
10.50.0.0/24:
route 10.50.0.0 255.255.255.0;
To add multiple routes, separate them with a semicolon:
route 10.50.0.0 255.255.255.0; route 10.254.0.0 255.255.255.0;
The
route configuration option is used to add routes locally for networks
that are reachable through the VPN. For an OpenVPN server configuration using
PKI, additional routes may also be pushed to clients. The GUI can configure
these using the Local Network field. To push the routes manually for
10.50.0.0/24 and
10.254.0.0/24 to all clients, use the following custom
configuration option:
push "route 10.50.0.0 255.255.255.0"; push "route 10.254.0.0 255.255.255.0";
Redirecting the default gateway¶
OpenVPN also allows the default gateway to be redirected across the VPN, so all non-local traffic from the client is sent through the VPN. This is great for untrusted local networks such as wireless hotspots, as it provides protection against numerous attacks that are a risk on untrusted networks. This is configurable in the GUI now, using the Redirect Gateway checkbox in the OpenVPN instance configuration. To do this manually, add the following custom option:
push "redirect-gateway def1"
The same value may be used as a custom option on the client side by entering
redirect-gateway def1 without specifying
push . (Note the option is the
letters “def” followed by the digit one, not the letter “L”.) | https://docs.netgate.com/pfsense/en/latest/vpn/openvpn/configure-custom.html | 2021-10-16T03:20:54 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.netgate.com |
Addressing Meridian Meridian.. Meridian Meridian gained orders of magnitude in increased scalability. Java Runtimes, based on the Sun JVM, have provided implementations for processor based atomic operations and is the basis for Meridian’ non-blocking concurrency algorithms. Provisioning Policies Just because you can, doesn’t mean you should! Because the massively parallel operations Provisiond creates allow rapid discovery and persistence of tremendous numbers of nodes, interfaces, and servicices, doesn’t mean it should. A policy API for Provisiond lets you develop implementations that can control the behavior of Provisiond. This includes a set of flexible provisioning policies that control the persistence of entities with attributes that Meridian admin user alters the default from the Provisioning Groups WebUI. Upon edit, this template is exported to the Meridian. Meridian will not attempt to rescan the nodes in the requisition unless you trigger a manual (forced) rescan through the web UI or Provisioning ReST API. | https://docs.opennms.com/meridian/2021.1.1/operation/provisioning/scalability.html | 2021-10-16T03:29:07 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.opennms.com |
Resource Explorer
Resource Explorer allows you to directly manage and edit Kubernetes YAMLs that have been applied to the cluster. This is a direct and a safer alternative for Kubernetes Dashboard.
This was developed by the Platformer team to quickly have a look around on what’s running on the cluster regardless whether the resources have been applied by Platformer Console or not.
Before you begin¶
You need to have a Kubernetes Cluster connected with the Platformer Console. If you don’t please refer the Cluster connection guide here.
Managing Resources¶
You can easily navigate through the below UI and edit, delete and view resources in your Kubernetes Cluster.
- On the left column, you can see the available resources types and API Versions.
- You can easily filter resources through namespace dropdown on the right top control and also use the search bar to search resources.
Updating Resources¶
You can easily edit and apply a YAML on the go as shown in the below example.
Caution
If the particular resource is created by Platformer Console itself, it is not recommended to edit the YAML here. Rather, please click the Go To Application button and edit the YAML there inside the application.
Known Issues¶
Platformer Console uses
kubectl api-versions endpoint to retrieve the available resources in the Cluster to generate the resource bar dynamically. We have identified on certain clusters, (specially AWS managed EKS clusters) doesn’t support the above endpoint.
We will be releasing a patch for this soon in a future release.
Alternatives¶
- Kubernetes Dashboard as explained earlier is the popular alternative out there at the moment.
- Octant by VMWare is another great tool if you want more extensibility.
These tools can be easily deployed to your cluster through Platformer Catalogs. | https://docs.platformer.com/user-guides/clusters/06-resource-explorer/ | 2021-10-16T03:27:32 | CC-MAIN-2021-43 | 1634323583408.93 | [array(['/assets/images//docs/cluster-resource-1.png', None], dtype=object)
array(['/assets/images//docs/cluster-resource-1.png', None], dtype=object)] | docs.platformer.com |
Error message guidelines#
Developers often ask an Information Developer to edit error messages that they have written.
When you edit error messages, use the writing guidelines in this Style Guide as you would with other documentation that is curated by the Information Development team. Start with the information in the Quickstart.
This topic provides additional guidelines specific to reviewing messages for Rackspace development teams.
This information is also available in the Helix documentation.
General guidelines#
When you review error messages, consider the following guidelines for them:
Be courteous and do not blame the user.
Use present tense to describe conditions that currently exist, or use past tense to describe a specific event that occurred in the past.
Where possible, guide users with the imperative voice (for example, “Enter a valid email.”) or the active voice (such as “The Control Panel is not responding.”).
Error messages must be short, accurate, complete, and helpful.
Message guidelines and examples#
Use the following guidelines when you review messages written for a control panel. | https://docs.rackspace.com/docs/style-guide/error-message-guidelines | 2021-10-16T03:38:08 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.rackspace.com |
P.O. Box Restrictions
Bolt provides a setting to disable shipment to P.O. boxes on the Bolt Merchant Dashboard via Checkout Settings.
About the P.O. Box Filter
Bolt’s P.O. Box filter relies on Lob or Avalara, depending upon which solution you are using.
- Lob/Avalara tells Bolt the address is a P.O. box.
- Bolt blocks the order.
- The customer is be shown a notice. The notice states that shipment is not available to P.O. Boxes.
| https://docs.bolt.com/merchants/references/checkout-features/po-box-restrictions/ | 2021-10-16T02:42:01 | CC-MAIN-2021-43 | 1634323583408.93 | [array(['/images/po-box-restrictions/po-box-restriction-notice.png',
'PO Box Restriction Notice'], dtype=object) ] | docs.bolt.com |
Positioning and Stacking
By default, the Notification creates popup boxes which overlay the rest of the page content.
If no position settings are defined, the first popup will be displayed near the bottom-right corner of the browser viewport and subsequent popups will stack upwards.
You can independently control the positioning and stacking of the Notifications. If you do not define any stacking setting, the popups stack upwards or downwards depending on the positioning settings. For example, popups which display at the top of the viewport stack downwards and vice versa. Explicitly defining stacking is mostly needed when you need a more specific stacking direction, for example, leftwards or rightwards.
By default, popups are pinned, that is, when the page is scrolled, popups do not move. This behavior is achieved by applying a
position:fixed style to the popups. When the popups are not pinned, they use
position:absolute.
If the popup content varies and stacking is likely to occur, explicitly define dimensions so that the popups are aligned and look better when stacked next to one another.
The following example demonstrates how to manage the position, stacking, and size of the Notification.
@(Html.Kendo().Notification() .Name("notification") .Position(p => { // The Notification popup will scroll together with the other content. p.Pinned(false); // The first Notification popup will appear 30px from the top and right edge of the viewport. p.Top(30); p.Right(30); }) // New notifications will appear under old ones. .Stacking(NotificationStackingSettings.Down) // Set the appropriate size. .Width(300) .Height(50) )
If popup notifications appear too quickly, or so many Notifications are displayed that very little screen space remains, the subsequent popups appear outside the visible viewport area and are inaccessible if they are pinned. In such cases, consider using a shorter hiding delay or implementing static notifications for better usability.
Notifications can also display static messages which do not overlay other elements but take part in the normal flow of the page content instead. In this case, the positioning settings are ignored. Stacking can be downwards (by default) or upwards. Static notifications are displayed if you specify a target container. A single Notification instance can display either popup or static notifications but not both types at the same time.
The following example demonstrates how to enable static notifications.
@(Html.Kendo().Notification() .Name("notification") // Insert all notifications to the originating element of the Notification. .AppendTo("#notification") // New notifications will appear over old ones. .Stacking(NotificationStackingSettings.Up) ) | https://docs.telerik.com/aspnet-core/html-helpers/layout/notification/positioning-stacking | 2021-10-16T03:41:30 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.telerik.com |
Get map events. A developer must be aware of certain events that can occur under degenerative conditions in order to cleanly handle it. The most important event to be aware of is when a map changes. In the case that a new map session begins, or recovery fails, all formerly cached transform and world reconstruction data (raycast, planes, mesh) is invalidated and must be updated.
Target is Magic Leap HMD Function Library
Node pins and sections: Map Events, Lost, Return Value, Inputs, Outputs.
Class: Aws::DocDB::Types::InvalidDBParameterGroupStateFault
- Inherits:
- EmptyStructure
- Object
- Struct::AwsEmptyStructure
- EmptyStructure
- Aws::DocDB::Types::InvalidDBParameterGroupStateFault
- Defined in:
- gems/aws-sdk-docdb/lib/aws-sdk-docdb/types.rb
Overview
The parameter group is in use, or it is in a state that is not valid. If you are trying to delete the parameter group, you can't delete it when the parameter group is in this state. | https://docs.amazonaws.cn/sdk-for-ruby/v3/api/Aws/DocDB/Types/InvalidDBParameterGroupStateFault.html | 2021-10-16T02:51:23 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.amazonaws.cn |
public interface AuthorizationSupport
JiraWebActionSupport
boolean hasPermission(int permissionsId)
Deprecated. Use hasGlobalPermission(com.atlassian.jira.permission.GlobalPermissionKey) instead. Since v6.4.
permissionsId - the permission type
boolean hasGlobalPermission(GlobalPermissionKey globalPermissionKey)
This method is intended to be used in Java code. If you are using JSP / Velocity / Soy Templates, it is
probably easier to call
hasGlobalPermission(String) instead.
globalPermissionKey - the permission to check
hasGlobalPermission(String)
boolean hasGlobalPermission(String permissionKey)
This method is intended to be used in JSP / Velocity / Soy Templates. If you are using Java directly, it is
recommended to call
hasGlobalPermission(com.atlassian.jira.permission.GlobalPermissionKey) instead.
Note that this method takes a Global Permission Key, which is a different value to the old "permission name"
that some previous methods would accept - see
GlobalPermissionKey for correct values of the system
permissions.
permissionKey - the permission to check
hasGlobalPermission(com.atlassian.jira.permission.GlobalPermissionKey)
boolean hasIssuePermission(String permissionKey, Issue issue)
This method is intended for use in Velocity templates / JSPs etc. Within Java code you should prefer the method that takes a ProjectPermissionKey.
Note that this method takes a Permission Key, which is a different value to the old "permission name" that
some previous methods would accept - see
ProjectPermissions for correct
values of the system permissions.
permissionKey - the permission key as a String
issue - the Issue
hasIssuePermission(com.atlassian.jira.security.plugin.ProjectPermissionKey, com.atlassian.jira.issue.Issue)
boolean hasIssuePermission(int permissionsId, Issue issue)
Deprecated. Use hasIssuePermission(com.atlassian.jira.security.plugin.ProjectPermissionKey, com.atlassian.jira.issue.Issue) instead. Since v6.4.
permissionsId - the permission type
issue - the Issue
boolean hasIssuePermission(ProjectPermissionKey projectPermissionKey, Issue issue)
projectPermissionKey - the permission to check
issue - the Issue
hasIssuePermission(String, com.atlassian.jira.issue.Issue)
boolean hasProjectPermission(int permissionsId, Project project)
Deprecated. Use hasProjectPermission(com.atlassian.jira.security.plugin.ProjectPermissionKey, com.atlassian.jira.project.Project) instead. Since v6.4.
permissionsId - the permission type
project - the Project
boolean hasProjectPermission(ProjectPermissionKey projectPermissionKey, Project project)
projectPermissionKey - the permission to check
project - the project
What is an AZW3 file?
AZW3, also known as Kindle Format 8 (KF8), is a modified version of the AZW ebook digital file format developed for Amazon Kindle devices. The format is an enhancement to the older AZW format and is used on Kindle Fire devices, with backward compatibility for its ancestor file formats, i.e. MOBI and AZW. Amazon later introduced the KFX format (KF version 10), which is used on the latest Kindle devices. AZW3 files have the internet media type application/vnd.amazon.mobi8-ebook. AZW3 files can be converted to a number of other file formats such as PDF, EPUB, AZW, DOCX, and RTF.
AZW3/KF8 File Format
KF8 files are binary in nature and retain the structure of the MOBI file format as a PDB file. As mentioned earlier, a KF8 file may contain both a MOBI version as well as the newer KF8 version of the ebook. The internal details of the format have been decoded by Kindle Unpack, a Python script that parses the final compiled database and extracts the MOBI or AZW source files from it. AZW3 (KF8) files target the EPUB3 specification, with backward compatibility for EPUB as well. KF8 compiles the EPUB files and generates a binary structure based on the PDB file format.
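Because a KF8 container keeps the Palm Database (PDB) layout, a file can be recognized without a full MOBI parser by inspecting the 78-byte PDB header, whose type and creator fields are "BOOK" and "MOBI" for Kindle books. The following C++ sketch is illustrative only: the field offsets are assumptions taken from the publicly documented PDB layout rather than from this article, and the program name azw3check is hypothetical. Note that it only identifies the container; distinguishing KF8 from an older MOBI requires reading the MOBI header version inside record 0.

#include <cstdint>
#include <fstream>
#include <iostream>
#include <string>

// Minimal check of the PDB container header used by MOBI/AZW3 files.
// Offsets: 0..31 database name, 60..63 type, 64..67 creator,
// 76..77 record count (multi-byte integers are big-endian).
int main(int argc, char** argv) {
    if (argc < 2) { std::cerr << "usage: azw3check <file>\n"; return 1; }
    std::ifstream in(argv[1], std::ios::binary);
    unsigned char hdr[78] = {0};
    if (!in.read(reinterpret_cast<char*>(hdr), sizeof hdr)) {
        std::cerr << "file too short for a PDB header\n"; return 1;
    }
    std::string name(reinterpret_cast<char*>(hdr), 32);
    name = name.c_str();                              // trim at the first NUL
    std::string type(reinterpret_cast<char*>(hdr + 60), 4);
    std::string creator(reinterpret_cast<char*>(hdr + 64), 4);
    uint16_t numRecords = (hdr[76] << 8) | hdr[77];   // big-endian
    std::cout << "name:    " << name << "\n"
              << "type:    " << type << "\n"
              << "creator: " << creator << "\n"
              << "records: " << numRecords << "\n";
    // Kindle books (MOBI, AZW, AZW3/KF8) carry type "BOOK" and creator "MOBI".
    std::cout << (type == "BOOK" && creator == "MOBI"
                  ? "looks like a MOBI/AZW3 container\n"
                  : "not a MOBI/AZW3 container\n");
    return 0;
}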
Katalon Compact Utility
Requirements:
- Katalon Studio version 8.0.0 onwards.
- A Chrome Profile. Find more information here: Share Chrome with others.
In restricted environments, unpacked extensions are disabled as a security feature. In that case, using the Spy, Record, and Smart Wait with Chrome might prompt this error: "Loading the unpacked extension is disabled by the administrators."
It is possible to use the packed extension Katalon Compact Utility as an alternative. The extension is available on Chrome Web Store.
This article will show you how to install the extension, configure your Profile, and use the Spy, Record, and Smart Wait in Katalon Studio.
Notes:
This utility is associated with your Chrome Profile, which means you can only have one active session at any given time.
Installing Katalon Compact Utility
Open Chrome. Make sure you use the Profile you want to use the Spy, Record, or Smart Wait with.
Navigate to the Chrome Web Store page for this extension: Katalon Compact Utility.
Download and install Katalon Compact Utility.
Make sure the extension is now active. You can find Google Support instructions to manage your Chrome extensions here: Install and Manage Extensions.
Configuring and Using the Compact Utility with Chrome Profile
The next steps will help you associate your Chrome Profile with the Spy, Record, and Smart Wait functions in Katalon Studio.
Finding your Chrome Profile in the User Data Directory
There are multiple Profiles in a given User Data Directory. This section will help you find the name of the correct one.
Open Chrome with the Profile you previously used to install Katalon Compact Utility. In the address bar, type
chrome://versionand press Enter.
The line Profile Path: now displays the path to your active Profile. For example:
C:\Users\your_username\AppData\Local\Google\Chrome\User Data\your_profile_name.
Copy your profile name.
Close Chrome.
Configuring and using Katalon Compact Utility with Chrome Profile
To configure and use Katalon Compact Utility, you need to update the Desired Capabilities. Do as follows:
Go to Project > Settings > Desired Capabilities.
In the Desired Capabilities section, select Web UI > Chrome.
Click Add to create a new capability named
args, for which the type is
List.
To add elements to your list, in the Value column of the capability you've created, click on the
...button.
The List Properties Builder dialog appears. Click Add to create two elements for the Chrome arguments that point at the profile you found earlier, typically --user-data-dir=C:\Users\your_username\AppData\Local\Google\Chrome\User Data and --profile-directory=your_profile_name (the exact values depend on the Profile Path you copied above).
Execute your tests with the default Chrome option.
Notes:
Before executing, make sure you log out of all your Chrome sessions. This extension currently does not support multiple sessions.
See also: | https://docs.katalon.com/katalon-studio/docs/katalon-compact-utility.html | 2021-10-16T02:00:07 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.katalon.com |
Service Brokers
To manage Service Brokers that are available in the system go to the Service Brokers section of the Administration area.
All available Service Brokers are listed here, even the private ones. This provides administrators and partners with an overview about all Service Brokers in the system.
Approve Service Broker
When a Customer publishes a Service Broker, an administrator or partner has to approve the Service Broker first. To approve a Service Broker, click the check button for the corresponding Service Broker. After approval, the services of the approved Service Broker will be available to all Users in the corresponding Location's Marketplace.
Introduction
The Platformer Console is a hassle-free enterprise-grade application platform to simplify and streamline your Kubernetes experience on Cloud, Hybrid, On-Premise or Edge Infrastructure.
This directory contains a catalog of examples on how to run, configure and scale applications with Platformer Console. Please review the user guide before trying them. | https://docs.platformer.com/tutorials/ | 2021-10-16T02:09:21 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.platformer.com |
Using Terraform
What is Terraform?
“Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions.” – [Introduction to Terraform]()
Making changes with Terraform
Rackspace strongly recommends that all changes be made through CI/CD
tooling, and Terraform should not be run locally except in extreme
cases, especially
terraform apply. Because all repositories have a
.terraform-version file, and are named appropriately for a specific
AWS account, our tooling will ensure that the correct version of Terraform
is executed against the correct account.
As mentioned in the Using GitHub section of this documentation, there is also a shared repository for Terraform modules you may wish to reuse across multiple accounts. Rackspace will create “GitHub release” objects in this repository, which automatically makes tags that we can use to reference specific modules at specific points in time.
Please see the later part of this document for specific standards Rackspace recommends when creating Terraform modules and Terraform configuration.
Where is Terraform state stored?
Rackspace maintains a separate S3 bucket for storing the Terraform state
of each AWS account. Access to this bucket and its contents is restricted
to the
Rackspace role that Rackspace maintains on every AWS account. By
default, temporary credentials retrieved from your control panel will use
the Rackspace role and automatically have access to this bucket, should
you need it. You can read more about the Rackspace role in the
AWS account defaults section of this documentation.
In addition, Rackspace stores these buckets in an entirely isolated AWS account, and implements best practices such as requiring bucket encryption, access logging, versioning, and preventing accidental public access. Because S3 bucket names are globally unique, and as part of a defense-in-depth strategy, we choose an arbitrary, opaque name for this bucket that cannot be mapped back to an AWS account. We provide the bucket name in the logged output from each CI/CD job, as well as the full terraform commands we run, should you want to inspect it or use it to run Terraform locally.
Grouping state into layers
There are a few different designs employed in the Terraform community for how to structure your Terraform files, modules, and directories. The community around Terraform has written blog posts and spoken at HashiConf about how these patterns evolve over time; many of the recommendations focus on how to best group Terraform state. At Rackspace, we’ve built upon the existing best practices (e.g. ‘module-per-environment’) and created a concept we call layers in order to isolate Terraform state.
What is a layer?
Put simply, a layer is a directory that is treated as a single Terraform configuration. It is a logical grouping of related resources that should be managed together by Terraform. Layers are placed in the layers/ directory inside an Account Repository. Our automation will perform all of the usual Terraform workflow steps (init, plan, apply) on each layer, alphabetically.
In collaboration with experienced Rackers, you should carefully consider how to logically group the state of your AWS resources into layers; layers could represent environments like production or test, regions your application may be hosted in, application tiers like “database” or “web servers,” or even applications that share availability requirements.
Here are some considerations that Rackers will discuss with you when planning out your environment:
- ensure resources frequently modified together are also grouped together in the same layer
- keep layers small, to limit blast radius and ensure refreshing state is quick/safe
- keep dependencies between layers simple, as changes must take dependencies into consideration manually
- consider reading state from another layer, using a data source; never write to another layer's state
- for small environments, consider that a single layer may be acceptable, but moving resources between layers is hard
Writing and organizing Terraform with modules
Generally,
Rackspace maintains modules for most common use cases,
and uses these modules to build out your account. If we do not have a
pre-existing module, the next best choice is to use the built-in
aws_* resources offered by the AWS provider for Terraform. Please
let us know if we don’t have a module or best practice for building out
a specific resource or AWS product.
A common recommendation in the Terraform community is to think of modules as functions that take an input and provide an output. Modules can be built in your shared repository or in your account repositories. If you find yourself writing the same set of resources or functionality over and over again, consider building a module instead.
When to consider writing a module:
When multiple resources should always be used together (e.g. a CloudWatch Alarm and EC2 instance, for autorecovery)
When Rackspace has a strong opinion that overrides default values for a resource
When module re-use remains shallow (don’t nest modules if at all possible) for all resource names
Declare all variables in variables.tf, including a description and type
Declare all outputs in outputs.tf, including a description
Pin all modules and providers to a specific version or tag
Always use relative paths and the file() helper
Prefer separate resources over inline blocks (e.g. aws_security_group_rule over aws_security_group)
Always define AWS region as a variable when building modules
Prefer variables.tf over terraform.tfvars to provide sensible defaults
Terraform versions and provider versions should be pinned, as it’s not possible to safely downgrade a state file once it has been used with a newer version of Terraform
Rackspace Terraform Module Standards
Rackspace maintains a number of Terraform modules available on GitHub. Contributions should follow these guidelines.
- use semantic versioning for shared code and modules
- always point to GitHub releases (over a binary or master) when referencing external modules
- always extend, don't re-create resources manually
- parameters control counts, for non-standard numbers of subnets/AZs/etc.
- use overrides to implement Rackspace best practices
- use variables with good defaults for things Rackspace expects to configure
- Modules should use semantic versioning light (Major.minor.0) for AWS account repositories
- Modules should be built using the standard files: main.tf, variables.tf, output.tf
- Consider writing tests and examples, and shipping them in directories of the same name
- Readme files should contain a description of the module as well as documentation of variables. An example of documentation can be found here.
- The files in .circleci are managed by Rackspace and should not be changed. If you would like to submit a module, please do so without this folder.
- The files in example can be named anything as long as they have .tf as the extension.
- The tests directory must be called tests and each test must be test#. Inside each test# folder should be exactly one file called main.tf
- Use Github's .gitignore contents for Terraform.
variables.tf
This file must include the following code block at the beginning or end of the file.
variable "environment" { description = "Application environment for which this network is being created. one of: ('Development', 'Integration', 'PreProduction', 'Production', 'QA', 'Staging', 'Test')" type = "string" default = "Development" } variable "tags" { description = "Custom tags to apply to all resources." type = "map" default = {} }
main.tf
This file must include the following code block at the top of the file. Other variables can be added to this block.
locals {
  tags {
    Name            = "${var.name}"
    ServiceProvider = "Rackspace"
    Environment     = "${var.environment}"
  }
}
In any resource block that supports
tags the following code should
be used:
tags = "${merge(var.tags, local.tags)}"
This takes the tag values that are in
variables.tf and combines them
with any values defined in
main.tf in the
locals block.
Secrets storage using Terraform
Rackspace recommends storing secrets for Terraform using AWS KMS; embed ciphertext values as data sources in Terraform configurations. Here are some of the specifics and considerations:
- Use aws_kms_key to create a KMS key for the account.
- You will then need to manually use the AWS CLI (and the key-id for the key you created in the previous step) to encrypt your secrets (mind any line endings when handling the ciphertext).
A (third) proposal for implementing some date/time types in NumPy
The default is microseconds if no time unit is specified.
or by applying broadcasting:
numpy.array(['1979', '1980'], 'M8[Y]') == numpy.datetime64('1980', 'Y') --> [False, True]
The next should work too.
Time units
It accepts different time units, each of them implying a different time span. The table below describes the time units supported with their corresponding time spans.
The value of a time delta is thus expressed in the specified time unit.
The default is microseconds if no unit is specified: 'm8' is equivalent to 'm8[us]'.
Final considerations
Installing Java for PXF
PXF is a Java service. It requires a Java 8 or Java 11 installation on each Greenplum Database host.
Factors to Consider When Choosing a Mobile Phone Repair Company
Mobile phone repairs are common since a majority of people have access to mobile phones. There are a number of mobile phone repair companies that offer repair services, and you can have your repairs done by any of them. A lot of people who own a mobile phone will eventually be seeking repairs. You will find it important to look into a few guidelines when looking for a mobile phone repair company.
The primary factor to consider when looking for a mobile phone repair company is reviews and recommendations. You will find it necessary to ask people around you if they know of a good mobile phone repair company you can use. You will discover that you will be able to get a list of suggestions from your friends, which will enable you to choose a good mobile phone repair company. You will also find the need to check various mobile phone repair sites to see listings from customers, to be able to see and compare their different experiences with the mobile phone repair companies. You will discover that reviews will be able to guide you in evaluating the quality of services of the mobile phone repair company.
The subsequent factor you should look at is the cost of the mobile phone repair company. You will see it is important to research the cost that different mobile phone repair companies charge for the particular repair that you need. You will discover that it is a good way to gauge the cost of the mobile phone repair. You will see that comparing mobile phone repair companies will enable you to choose one within your budget.
The third tip you should consider is the location of the mobile phone repair company. You will discover it is key to consider the placement of the mobile phone repair company. You will realize that you will need to select a mobile phone repair company that is close to you, as this will make it easier for you to have your repairs done.
The last factor to consider when looking for a mobile phone repair company is the license of registration. You will discover the need to look at the license of operation of the mobile phone repair company so as to avoid fraudsters. You will realize that looking at the license of operation will enable you to get a registered mobile phone repair company.
The above are factors you should consider when selecting a mobile phone repair company.
Views, Indexing, and Index Service
Views and indexes support querying data in Couchbase Server.
Querying of Couchbase data is accomplished via the following:
- MapReduce views accessed via the View API.
- Spatial views accessed via the Spatial View API.
- N1QL queries with Global Secondary Indexes (GSI) and MapReduce views.
Indexing Architecture and Indexers
Couchbase Server provides a number of indexes that can accelerate access for different types of queries.
- MapReduce view indexer
This incremental view indexer takes in a user defined map and reduce function and incrementally processes each mutation on the bucket and calculates a view.
Views are typically useful for interactive reporting type queries where complex data processing and custom data reshaping is necessary. Incremental MapReduce views are part of the data service and are partitioned across the nodes in the same way as the data. This means the view indexer always processes mutations from the local vBuckets.
You can query incremental MapReduce views using the view API. N1QL also allows access to MapReduce views, but only allows a static MapReduce function that cannot be altered.
For more information about MapReduce views, see Incremental MapReduce Views.
- Spatial view indexer
Spatial view indexer takes in a map function that supports processing geographic information and allows multidimensional bounding box queries for location aware applications.
Much like the MapReduce views, spatial views incrementally process each mutation on the bucket and calculate a spatial view. Spatial views are part of the data service and are partitioned across the nodes in the same way as the data. This means the spatial view indexer always processes mutations from the local vBuckets.
You can query spatial views using the spatial view API.
For more information, see Spatial Views.
- GSI indexer
The indexer for Global Secondary Indexes (GSIs) is similar to the B+tree indexers widely used in relational databases. GSIs index document attributes and N1QL-based expressions to provide a faster lookup with N1QL queries. GSIs are purpose-built for N1QL queries and can only be utilized through N1QL queries.
GSIs are a part of the index service and are independent of the data service. They are partitioned across the nodes independent of the data. This means the indexer for GSI typically does not always process mutations from the local vBuckets.
For more information, see Global Secondary Indexes (GSIs).
For a comparison between MapReduce views and Global Secondary Indexes, see Global Secondary Indexes Versus Views. | https://docs.couchbase.com/server/5.1/architecture/views-indexing-index-service.html | 2021-10-16T03:38:07 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.couchbase.com |
5.4.9.2. Add FCoE SAN
The following procedure explains how to add Fibre Channel over Ethernet (FCoE) storage devices and make them available during the installation:
Procedure 5.2. Add FCoE Target
1. Click the Add FCoE SAN button in the bottom right corner of Section 5.4.9, “Installation Destination - Specialized & Network Disks”. A new dialog window will open.
2. Select the network interface (NIC) which is connected to your FCoE switch from the drop-down menu. Note that this network interface must be configured and connected - see Section 5.4.12, “Network & Hostname”.
3. Below the NIC drop-down menu, configure any remaining options and confirm; the newly added FCoE device will then appear under the Devices tab in Section 5.4.9, “Installation Destination - Specialized & Network Disks”.
cache
Stardog has the notion of a distributed cache. A set of cached datasets can be run in conjunction with a Stardog server or cluster to boost performance. See the Cache Management section for more information.
The table below contains the CLI commands to administer caches in Stardog. Select any of the commands to view their manual page. | https://docs.stardog.com/stardog-admin-cli-reference/cache/ | 2021-10-16T01:40:58 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.stardog.com |
Transfer and Store the Data
Use one of the following approaches to transform the data with gpfdist.
- GPLOAD supports only input transformations, but is easier to implement in many cases.
- INSERT INTO SELECT FROM supports both input and output transformations, but exposes more details. | https://greenplum.docs.pivotal.io/43330/admin_guide/load/topics/g-transfer-and-store-the-data.html | 2021-10-16T03:19:07 | CC-MAIN-2021-43 | 1634323583408.93 | [] | greenplum.docs.pivotal.io |
Interchange Monitoring Framework
The Interchange Monitoring Framework is a statistical collection program for monitoring trading engine activity. Monitoring Framework is based on the open source Metrics library (metrics.codahale.com). Monitoring Framework runs natively on the trading engine to continuously collect a range of operations statistics. You can configure Monitoring Framework to send the collected data to a third-party application for viewing and analysis. You configure monitoring dynamically so that you can enable statistics reporting at any time without restarting Interchange.
Exporting statistics to visualization software
In Metrics open source terminology, Reporters are the way the Monitoring Framework exports all the collected metrics data. You can configure Monitoring Framework reporters to send the statistics it collects to any of the following third-party and open source metrics reporter applications:
- Apache Log4j – Generates statistic log files
- CSV – Gathers detailed CSV-formatted data on specific metrics.
- JMX – Enables the availability of metrics to all JMX agents (including Java VisualVM).
- Graphite – Open-source monitoring framework
Timers
A timer is a set of data that measures the duration of a type of event and the rate of its occurrence. Timer data is embedded in the data of the related Filter/Name for a given Reporter. When you configure the reporter and select the filters for sending data to that reporter, you can include or exclude timer data. Monitoring Framework provides timers that enable Interchange to collect measurements of a variety of values.
The following example shows a Log4j output with timer data on how long it takes to update a database heartbeat:
2013-12-18 11:47:25,245 - Cluster-HeartbeatUpdater.database.updateTime.item-a13200_cn: Count: 44, Min: 0.9758089999999999, Max: 107.144837, Mean: 14.325860363636362, StdDev: 25.427175037853704, Median: 2.1554425, p75: 17.839615, p95: 91.66807949999999, p98: 107.144837, p99: 107.144837, p999: 107.144837, MeanRate: 0.499742453620688, M1: 0.5294262846388921, M5: 0.5767878890393485, M15: 0.5915183485820346, RateUnit: events/second, DurationUnit: milliseconds
Monitoring Framework provides timers to gather statistics on the following actions:
- Event purge
- Message purge
- Consumption time (by pickup)
- Production sending (by delivery)
- File system health monitor
- Cluster Database Heartbeat
- Sending events to Sentinel
Configure Monitoring Framework
Use the following procedure to configure how Monitoring Framework sends statistical data to one or more reporter target applications.
1. Go to <install directory>/conf and open the file monitoringconfig.xml in a text editor.
2. Use the tables that follow this procedure to specify the target reporter applications and the statistics to be sent.
3. Save the file.
Note: You do not have to restart Interchange; statistics monitoring changes take effect immediately.
Reporter attributes
To control the amount and type of statistics that you send to the reporter application, set the following attributes:
- Enabled – Set to true/false to enable or disable.
- rateUnit – TimeUnit (Seconds, MS, Minutes) to convert values to. Example: events/second or events/minute.
- durationUnit – TimeUnit (Seconds, MS, Minutes) for measured time periods.
- writeInterval – Number of seconds between sends of metrics to the reporter. Example: 5 – statistics will be sent every 5 seconds.
Filter
Use filters to control which statistics to send to the reporter. See the following list for the available filters.
Filter names
To set the filter attributes, use the following names in the Filter/Name field:
- AlertSystem – Alert system activity
- ClusterConnections – Cluster framework individual connections between cluster nodes
- Cluster-HeartbeatUpdater – Cluster framework heartbeat to database
- Cluster-ThreadPool – Cluster framework thread pool
- ConnectionCache – Sync connections being held by the trading engine
- Consumption – Consumption on a per pickup basis (timers, per minute/per hour/per day stats available)
- DatabaseCommit – Database commit notifications (local and remote)
- DatabaseConnectionPool – BoneCP database connection pool
- Events – Event system activity
- FileSystemHealth – Timer activity on monitoring each configured file system health check
- JPADataCache – OpenJPA data cache activity
- jvm.gc, jvm.memory, jvm.thread-states – Internal JVM heap usage, GC collections and thread states
- MessageDispatched – Messages dispatched to the trading engine for processing
- MessageProduction – Production system coordination, and timers on producing messages on a per partner delivery
- MessagePurge – Message purge activity (non Oracle stored procedures)
- ResourceManager – Trading engine resource manager (used for X25 type protocols)
- Sentinel – Sentinel connection and queue
- SequenceCoordinator – Sequence coordination activity
- ThreadPool – Core trading engine thread pool activity
- TokenManager – Cluster framework TokenManager for cluster wide locks
- XmlDomDocumentFactory – XML DOM cache activity
Example Monitoring Framework configuration
<StatsCollectionConfiguration>
  <Metrics>
    <MonitorJVM>true</MonitorJVM>
    <Reporters>
      <Reporter name="JMX" enabled="true" rateUnit="SECONDS" durationUnit="MILLISECONDS">
      </Reporter>
      <Reporter name="CSV" enabled="false" rateUnit="SECONDS" durationUnit="MILLISECONDS" writeInterval="5" path="../logs/metrics">
      </Reporter>
      <Reporter name="GRAPHITE" enabled="true" rateUnit="SECONDS" durationUnit="MILLISECONDS" writeInterval="5" host="graphite.lab.phx.axway.int" port="2003">
      </Reporter>
      <Reporter name="LOG4J" enabled="true" rateUnit="SECONDS" durationUnit="MILLISECONDS" writeInterval="60">
        <Filter type="StartsWith">
          <Pattern name="ConnectionCache"/>
          <Pattern name="Consumption"/>
          <Pattern name="MessageDispatched"/>
          <Pattern name="MessageProduction"/>
        </Filter>
      </Reporter>
    </Reporters>
  </Metrics>
</StatsCollectionConfiguration>
Editing a Host Template
You can edit the name of a host template, in addition to any of the role group selections.
Minimum Required Role: Cluster Administrator (also provided by Full Administrator). This feature is not available when using Cloudera Manager to manage Data Hub clusters.
- In Cloudera Manager, open the host templates page (available from the Hosts menu).
- Pull down the Actions menu for the template you want to modify, and click Edit. The Edit Host Template window appears. This page is identical to the Create New Host Template page. You can modify the template name or any of the role group selections.
- Click OK when you have finished. | https://docs.cloudera.com/cdp-private-cloud-base/7.1.4/managing-clusters/topics/cm-editing-host-template.html | 2021-10-16T02:12:56 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.cloudera.com |
What is a DBF file?
The file with .dbf extension is a database file used by a database management system application called dBASE. Initially, the dBASE database was named Project Vulcan, started by Wayne Ratliff in 1978. The DBF file type was introduced with dBASE II in 1983. It arranges multiple data records with Array type fields. The xBase database software, which is popular because of its compatibility with a wide range of file formats, also supports DBF files.
DBF File Format
The DBF file format belongs to the dBASE database management system, but it may be compatible with xBase or other DBMS software. The initial version of the DBF file consisted of a simple table that could have data added, modified, deleted, or printed using the ASCII character set. With the passage of time, .dbf was improved and additional files were added to increase the features and capabilities of the database system.
In modern dBASE, a DBF file consists of a header, the data records, and the EOF (End of File) marker:
- The header contains information about the file, such as the number of records and the number of types of fields used in the records.
- The records contain the actual data.
- The end of the file is marked by a single byte, with value 0x1A.
File header
Key points about the layout of the file header in dBASE:
- The ISMARKED() function checks this flag (BEGIN TRANSACTION sets it to 1, END TRANSACTION and ROLLBACK reset it to 0).
- If this flag is set to 1, the message Database encrypted appears.
- The maximum number of fields is 255.
- n means the last byte in the field descriptor array.
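Since the header table itself is not reproduced above, the following C++ sketch encodes the commonly documented dBASE III+/IV header offsets (version byte, YYMMDD last-update date, 32-bit record count, 16-bit header and record sizes, plus the transaction and encryption flags the notes refer to). It is a minimal, hedged illustration rather than an official reference, and the program name dbfinfo is hypothetical.

#include <cstdint>
#include <fstream>
#include <iostream>

// Reads the fixed 32-byte DBF file header (assumed dBASE III+/IV layout, little-endian).
int main(int argc, char** argv) {
    if (argc < 2) { std::cerr << "usage: dbfinfo <file.dbf>\n"; return 1; }
    std::ifstream in(argv[1], std::ios::binary);
    unsigned char h[32] = {0};
    if (!in.read(reinterpret_cast<char*>(h), sizeof h)) {
        std::cerr << "file too short for a DBF header\n"; return 1;
    }
    unsigned version     = h[0];                        // file type / version flags
    unsigned year        = 1900 + h[1];                 // last update stored as YY MM DD
    unsigned month       = h[2];
    unsigned day         = h[3];
    uint32_t recordCount = h[4] | (h[5] << 8) | (h[6] << 16) | (uint32_t(h[7]) << 24);
    uint16_t headerSize  = h[8]  | (h[9]  << 8);        // bytes, including field descriptors
    uint16_t recordSize  = h[10] | (h[11] << 8);        // bytes per record, incl. deletion flag
    unsigned transaction = h[14];                       // flag checked by ISMARKED()
    unsigned encrypted   = h[15];                       // "Database encrypted" flag
    std::cout << "version byte : 0x" << std::hex << version << std::dec << "\n"
              << "last update  : " << year << "-" << month << "-" << day << "\n"
              << "records      : " << recordCount << "\n"
              << "header size  : " << headerSize << "\n"
              << "record size  : " << recordSize << "\n"
              << "transaction  : " << transaction << "\n"
              << "encrypted    : " << encrypted << "\n";
    // Field descriptors (32 bytes each) follow the header and are terminated by byte 0x0D.
    return 0;
}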
Field descriptor array
Each field descriptor in dBASE is a fixed 32-byte structure holding the field name (up to 11 bytes), the field type, the field length, and the decimal count.
Database records
Each record starts with a 1-byte deletion flag. Fields are packed into records without field separators. All field data is ASCII. Depending on the field's type, the application imposes further restrictions. The field types in dBASE include C (Character), D (Date), F (Float), L (Logical), M (Memo), and N (Numeric).
What is a SAV file?
A SAV file is a data file created by the Statistical Package for the Social Sciences (SPSS), an application widely used by market researchers, health researchers, survey companies, government, education researchers, marketing organizations, and data miners for statistical analysis. The SAV file is saved in a proprietary binary format and consists of a dataset as well as a dictionary that describes the dataset; it saves data in rows and columns.
SAV File Format
The SAV file format has become relatively stable, but we can't say it is static. Backwards and forwards compatibility is optionally available where necessary, but not always maintained properly. The data in an SAV file is categorized into the following sections:
File header
It consists of 176 bytes. The first 4 bytes indicate the string $FL2 or $FL3 in the character encoding used for the file; the latter ($FL3) indicates that the data in the file is compressed using ZLIB. The next 60-byte string begins with @(#) SPSS DATA FILE and also identifies the operating system and SPSS version that created the file. The header then continues with six numeric fields, containing among other things the number of variables per observation and a code for compression, and ends with character data indicating the creation date and time and a file label.
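As a rough illustration of the 176-byte layout described above, the following C++ sketch reads the record-type magic, the product string, and a few of the numeric fields. The exact offsets are assumptions based on the publicly documented SPSS system-file layout (not taken from this article), the program name savinfo is hypothetical, and a real reader must also honour the layout_code field to detect foreign byte order.

#include <cstdint>
#include <cstring>
#include <fstream>
#include <iostream>
#include <string>

// Minimal peek at a .sav file header (assumed 176-byte layout):
//   offset 0   rec_type[4]   "$FL2" (or "$FL3" for ZLIB-compressed data)
//   offset 4   prod_name[60] "@(#) SPSS DATA FILE ..."
//   offset 64  layout_code   int32, used to detect byte order
//   offset 72  compression   int32: 0 = none, 1 = bytecode, 2 = ZLIB
//   offset 80  ncases        int32, number of cases (-1 if unknown)
int main(int argc, char** argv) {
    if (argc < 2) { std::cerr << "usage: savinfo <file.sav>\n"; return 1; }
    std::ifstream in(argv[1], std::ios::binary);
    unsigned char h[176] = {0};
    if (!in.read(reinterpret_cast<char*>(h), sizeof h)) {
        std::cerr << "file too short for a .sav header\n"; return 1;
    }
    std::string magic(reinterpret_cast<char*>(h), 4);
    std::string product(reinterpret_cast<char*>(h + 4), 60);
    int32_t layout = 0, compression = 0, ncases = 0;
    std::memcpy(&layout,      h + 64, 4);   // assumes native (little-endian) byte order
    std::memcpy(&compression, h + 72, 4);
    std::memcpy(&ncases,      h + 80, 4);
    std::cout << "magic       : " << magic << "\n"
              << "product     : " << product << "\n"
              << "layout code : " << layout << "\n"
              << "compression : " << compression << "\n"
              << "cases       : " << ncases << "\n";
    return 0;
}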
Variable descriptor records
The record contains a fixed sequence of fields describing the type and name of the variable, together with formatting information used by SPSS. Each variable record may optionally contain a variable label of up to 120 characters and up to three missing-value specifications.
Value labels
The value labels are optional and stored in pairs of records with integer tags 3 and 4. The first record (tag 3) has a sequence of pairs of fields, each pair containing a value and the associated value label. The second record (tag 4) indicates which variables the set of values/labels applies to.
Documents
Single or multiple records with integer tag 6. Optional documentation, stored as 80-character lines.
Extension records
Single or multiple records with integer tag 7. Extension records provide information that can be safely ignored but should be preserved; in many situations, this allows files written by newer software to preserve backward compatibility. Extension records have integer subtype tags.
Dictionary terminator
A single record with integer tag 999. It separates the dictionary from the data observations.
Data observations
Data is stored in observation order, e.g. all variable values for the first observation, followed by all values for the second observation, etc. The format of the data record varies depending on the compression code in the file header record. The data portion of a .sav file can be uncompressed (compression code 0), compressed with bytecodes (code 1), or, in newer files, compressed with ZLIB (code 2).
What is a DLL file?
A DLL file, or Dynamic Link Library, is a type of executable file. It is one of the most commonly found extension files on your device and is usually stored in the System32 folder on Windows. The DLL extension file was developed by Microsoft, is popularly used by them, and has a high popularity rating among users. A DLL works as a shelf that contains the drivers/procedures/functions/properties that are designed and applied for a program/application by Windows. A single DLL file can also be shared among various Windows programs. These extension files are vital for the smooth running of Windows programs on your device, as they are responsible for enabling and running various functions in a program, such as writing and reading files and connecting with external devices. These files can only be opened directly on a device that runs a version of Windows (Windows 7, Windows 10, etc.) and hence cannot be opened directly on a device running macOS. (If you want to open a DLL file on macOS, various external applications can help open them.)
DLL File Format
The DLL file was developed by Microsoft and uses the extension ".dll", which identifies the type. It has been an integral part of Windows since Windows 1.0 and beyond. It is a binary file type and is supported by all versions of Microsoft Windows. This file type was created as a means to provide a shared library system within Windows programs, to allow for separate and independent edits or changes in the program libraries without the need to re-link the programs.
DLL Example
Example code for a minimal DLL entry point is shown below.
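The following is a generic DllMain skeleton using the documented Windows entry-point signature; it is an illustrative sketch rather than code taken from any particular DLL.

#include <windows.h>

// Standard entry point that Windows calls when the DLL is loaded or unloaded
// and when threads start or stop in the owning process.
BOOL WINAPI DllMain(HINSTANCE hinstDLL, DWORD fdwReason, LPVOID lpvReserved) {
    switch (fdwReason) {
    case DLL_PROCESS_ATTACH:   // DLL mapped into the process
        break;
    case DLL_THREAD_ATTACH:    // a new thread was created
        break;
    case DLL_THREAD_DETACH:    // a thread exited cleanly
        break;
    case DLL_PROCESS_DETACH:   // DLL unmapped from the process
        break;
    }
    return TRUE;               // returning FALSE from DLL_PROCESS_ATTACH aborts the load
}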
VideoStreamTheora
Inherits: VideoStream < Resource < Reference < Object
VideoStream resource for Ogg Theora videos.
Description
VideoStream resource handling the Ogg Theora video format with
.ogv extension. The Theora codec is less efficient than VideoStreamWebm's VP8 and VP9, but it requires less CPU resources to decode. The Theora codec is decoded on the CPU.
Note: While Ogg Theora videos can also have an
.ogg extension, you will have to rename the extension to
.ogv to use those videos within Godot. | https://docs.godotengine.org/uk/stable/classes/class_videostreamtheora.html | 2021-10-16T02:11:16 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.godotengine.org |
Signal
GtkWidget::unmap
Description
Emitted when
widget is going to be unmapped.
A widget is unmapped when either it or any of its parents up to the toplevel widget have been set as hidden.
As ::unmap indicates that a widget will not be shown any longer, it can be used to, for example, stop an animation on the widget. | https://docs.gtk.org/gtk4/signal.Widget.unmap.html | 2021-10-16T01:44:16 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.gtk.org |
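For illustration, a handler can be attached to this signal through the usual GObject machinery. The sketch below is C-style GTK 4 code (compiled here as C++); g_signal_connect and G_CALLBACK are the real GObject APIs, while on_unmap, stop_animation, and connect_unmap_handler are hypothetical application-side names.

#include <gtk/gtk.h>

// Hypothetical helper owned by the application, not part of GTK.
static void stop_animation(GtkWidget* widget) {
    g_message("stopping animation for widget %s", gtk_widget_get_name(widget));
}

// Called right before the widget stops being shown (it or an ancestor was hidden).
static void on_unmap(GtkWidget* widget, gpointer user_data) {
    (void)user_data;
    stop_animation(widget);
}

static void connect_unmap_handler(GtkWidget* widget) {
    // "unmap" is the GtkWidget signal documented above.
    g_signal_connect(widget, "unmap", G_CALLBACK(on_unmap), NULL);
}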
Routing Optimization
Users can select locations on the map as waypoints to plot an optimized route. They can follow this route to visit the required locations and complete their meetings. Users can also launch turn-by-turn navigation in order to reach the right location on time.
Related Posts:
Quickly add a New Lead in Dynamics 365 CRM from Map screen – Generate leads on the go!!
New Send Route Email option within Dynamics 365 CRM to share waypoints with related customer information!
Plan Your Field Sales or Field Service Reps Travel Time Smartly Within Dynamics 365 CRM / PowerApps with Maplytics
New Truck Routing feature within Dynamics 365 CRM – Route Planning for Logistics and Distribution Companies gets smarter!
A quick way to find bookable resources and schedule work orders with optimized routes using Auto Scheduling within Dynamics 365 CRM
Create activities within Dynamics 365 CRM while on field using Maplytics
Search for data near your current GPS location within Dynamics 365 CRM
Get Turn-by-Turn navigation within Dynamics 365 CRM using Google Maps or Waze App while on field
Get perfect Routes – Automate Route Plotting on Map for from within Dynamics 365 CRM
Use Geofencing to track field productivity in Dynamics 365 CRM
Monitor Check-in and Check-Out of Field Sales & Service Professionals in Dynamics 365 CRM
Automate Route Planning for your Sales & Service field reps using Auto scheduling within Dynamics 365 CRM
Find Customers on the go using Along the Route Search feature in Dynamics 365 CRM
Sales & Field Service Reps now have a Better Control on their Route Planning within Dynamics 365 CRM
Search for Accounts or Leads in Dynamics 365 based on Travel Time
Save and Share Optimized Route
Route Optimization in Dynamics CRM
Saved Template Visualization on CRM Entity Form
Directions Card
HTTP Connector Security Update - Mule 3
Released: February 5, 2015
Version Impacted: Mule 3.6.0
Component Impacted: HTTP Connector
What is the issue?
The new HTTP connector released in Mule 3.6.0 when configured to act as a HTTPS client does not validate the certificate presented by the remote server prior to establishing the TLS/SSL connection.
What are the impacts of this issue?
Any certificate provided by the server is implicitly accepted and processing continues. This potentially exposes messages sent by the HTTP Connector to man-in-the-middle attacks or certain forms of impersonation attacks.
What are the mitigating factors against this vulnerability?
As Mule 3.6.0 was recently released (GA: January 17, 2015), it is unlikely customers have migrated to the new HTTP Connector. Mule applications leveraging the previously available HTTP Transport are not impacted.
What is being done about this?
MuleSoft has identified and resolved this vulnerability and immediately made a patch available for enterprise subscription customers. The patch can be downloaded from the customer portal. This fix has been incorporated into Mule 3.6.1 which MuleSoft released on March 4, 2015 and is available from the customer portal as well as from mulesoft.org and mulesoft.com.
If I am affected, what can I do in the interim?
Enterprise customers using the new HTTP Connector in Mule ESB 3.6.0 are strongly advised apply the patch immediately or upgrade to Mule ESB 3.6.1. Please reference the support knowledge base for more information on this process.
Community customers who do not have access to the patch are advised to use the old HTTP Transport or upgrade to Mule 3.6.1.
How does this issue impact my CloudHub deployment?
CloudHub users who had deployed applications to Mule 3.6.0 EE needed to restart those applications to take advantage of a patch applied to CloudHub on February 5, 2015. Now that Mule 3.6.1 EE is available in CloudHub, customers no longer have the option to deploy to Mule 3.6.0 EE. | https://docs.mulesoft.com/release-notes/connector/http-connector-security-update | 2021-10-16T03:34:43 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.mulesoft.com |
Security
This is by no means an exhaustive list of security measures you should take to protect your website and data. It's already a good start if you take security seriously and orient yourself on what can be done and how to keep security up to date.
A couple of suggestions which are quite important as a starting point:
- Make sure you run the latest version of WordPress and all the plugins you use.
- Check your PHP version. Right now the latest version is 7.4.
- Use two-factor (2FA) or multi-factor authentication (MFA) for your hosting account and WordPress admin account. Consider the usage of a password manager, e.g. Bitwarden, Lastpass, 1Password, Dashlane, …
- Don't use the username 'admin' as your WordPress admin account. Use a random string as username.
- Does your hosting provider offer automatic backups? If not, consider a plugin like UpdraftPlus – Backup/Restore.
- Always use HTTPS. Most hosting providers nowadays support Let's Encrypt free certificates.
- …
The next ones require in-depth knowledge of your hosting environment. Only use them if you know what you are doing!
HTTPS
Most hosting providers offer free ‘Let’s Encrypt’ certificates for HTTPS. Use them! Often you can configure them in your Directadmin or cPanel environment of your webserver.
But this is not enough. You have to make sure your webserver only serves HTTPS requests. In general you can include a snippet in your .htaccess file that 301-redirects all HTTP requests to their HTTPS equivalent (for example, using mod_rewrite with RewriteCond %{HTTPS} off and a RewriteRule that rewrites to https://%{HTTP_HOST}%{REQUEST_URI}).
Add your domain to the Strict-Transport-Security preload list. Only do this if you are absolutely sure: adding your domain to the list is a simple step, but taking it off the list is painful and slow. Before you add your domain to the list, add the next snippet to your
.htaccess file.
Header set Strict-Transport-Security "max-age=63072000; includeSubdomains; preload"
Security headers
A few security headers you should consider adding to your .htaccess file, for example X-Content-Type-Options, X-Frame-Options, Referrer-Policy and a Content-Security-Policy. Only do so if you know what you are doing.
Two Factor Authentication
Protect your WordPress admin login (and other logins as well) with the Two Factor plugin.
Disable File Editing
WordPress comes with a set of easy-to-reach theme and plugin editors. You can find them under Appearance > Theme Editor and Plugins > Plugin Editor. These allow direct access to your site’s code.
While these tools are useful to some, many WordPress users aren’t programmers and will never need to touch anything here.
WordPress has a constant to disable editing from the Dashboard. Place the following line in wp-config.php:
define('DISALLOW_FILE_EDIT', true);
Online security check
There are a few sites that offer online security checks. Qualys SSL Labs offers you a deep analysis of the configuration of your SSL web server; if everything is ok, your grade should be A+. securityheaders.com checks the HTTPS security headers you have configured in the .htaccess file; grades A and A+ are a good score.
There are many more. Google is your friend.
Realtime security plugins
WordPress is the most popular and widely used CMS platform on the Internet. Almost 1/3 of all websites globally use WordPress. As a result of this popularity, hackers and spammers have taken keen interest in breaking the security of WP-operated sites. There are a number of free and paid security plugins that can be used to keep your WordPress site secure, for example Wordfence Security, Sucuri Security, or iThemes Security.
Further reading
Have a look at the hardening guide in the official WordPress documentation.
pfsync Overview
pfsync enables the synchronization of the firewall state table between cluster nodes. Changes to the state table on the primary are sent to the secondary firewall(s) over the Sync interface, and vice versa. When pfsync is active and properly configured, all nodes will have knowledge of each connection flowing through the cluster. If the master node fails, the backup node will take over and clients will not notice the transition since both nodes knew about the connection beforehand.
pfsync uses multicast by default, though an IP address can be defined to force unicast updates for environments with only two firewalls where multicast traffic will not function properly. Any active interface can be used for sending pfsync updates, however utilizing a dedicated interface is better for security and performance. pfsync does not support any method of authentication, so if anything other than a dedicated interface is used, it is possible for any user with local network access to insert states into the state table. In low throughput environments that aren’t security paranoid, use of the LAN interface for this purpose is acceptable. Bandwidth required for this state synchronization will vary significantly from one environment to another, but could be as high as 10% of the throughput traversing the firewall depending on the rate of state insertions and deletions in a network.
Failover can still operate without pfsync, but it will not be seamless. Without pfsync if a node fails and another takes over, user connections would be dropped. Users may immediately reconnect through the other node, but they would be disrupted during the transition. Depending on the usage in a particular environment, this may go unnoticed or it could be a significant, but brief, outage.
When pfsync is in use, pfsync settings must be enabled on all nodes participating in state synchronization, including secondary nodes, or it will not function properly.
pfsync and Firewall Rules
Traffic for pfsync must be explicitly passed on the Sync interface. The rule must pass the pfsync protocol from a source of the Sync network to any destination. A rule passing all traffic of any protocol would also allow the required traffic, but a more specific rule is more secure.
pfsync and Physical Interfaces
States in pfSense® are bound to specific operating system Interfaces. For example, if WAN is em0, then a state on WAN would be tied to em0. If the cluster nodes have identical hardware and interface assignments then this works as expected. In cases when different hardware is used, this can be a problem. If WAN on one node is em0 but on another node it is igb0, the states will not match and they will not be treated the same.
It is always preferable to have identical hardware, but in cases where this is impractical there is a workaround: Adding interfaces to a LAGG will abstract the actual underlying physical interface so in the above example, WAN would be lagg0 on both and states would be bound to lagg0, even though lagg0 on one node contains em0 and it contains igb0 on the other node.
pfsync and Upgrades
Normally pfSense would allow firewall upgrades without any network disruption. Unfortunately, this isn’t always the case with upgrades as the pfsync protocol can change to accommodate additional functionality. Always check the upgrade guide linked in all release announcements before upgrading to see if there are any special considerations for CARP users. | https://docs.netgate.com/pfsense/en/latest/highavailability/pfsync.html | 2021-10-16T03:38:42 | CC-MAIN-2021-43 | 1634323583408.93 | [] | docs.netgate.com |