Open a URI from a table
Open functionality is currently configured only in the Site application and only for opening URIs.
In Site, from a URI table, you can right-click a URI to display a web page in a web browser. To view a URI in its native format (such as HTML), Data Workbench must have access to the referenced location and the application needed to open that item. For example, to view a web page, Data Workbench must have access to the Internet as well as have a web browser installed.
- Right-click an element of the dimension and click Open URI. The URI opens in its native application.
Securely storing policies
Adobe Access SDK provides a great deal of flexibility in the development of applications for use in content packaging and policy creation. When creating such applications, you may want to allow some users to create and modify policies, and limit other users such that they can only apply existing policies to content. If this is the case, you must implement the necessary access controls to create user accounts with different privileges for policy creation and the application of policies to content.
Policies are not signed or otherwise protected from modification until they are used in packaging. If you are concerned about users of your packaging tools modifying policies, you should consider signing the policies to ensure that they cannot be modified.
For more information on creating applications using the SDK, see the Adobe Access API Reference.
- Create, edit, and delete event filters using the Sensu web UI; access the federated Sensu web UI homepage, which you can filter by cluster and namespace; and create custom web UI configurations.
- Control permissions with Sensu role-based access control (RBAC), with the option of using Lightweight Directory Access Protocol (LDAP) and Active Directory (AD) for authentication.
- Use powerful filtering capabilities designed for large installations. With label and field selectors, you can filter Sensu API responses, sensuctl outputs, and Sensu web UI views using custom labels and a wider range of resource attributes. Plus, save, recall, and delete your filtered searches in the web UI.
- Automatically populate data for processes on the local agent with the
discover-processesagent configuration flag.
- Log event data to a file you can use as an input to your favorite data lake solution.
- Connect your monitoring event pipelines to industry-standard tools like ServiceNow and Jira with enterprise-tier integrations.
Documentation Updates for the Period Ending January 31, 2018
New docs
The following documents are new to the help desk support series as part of our regular documentation update efforts:
- Log streaming: LogDNA
Recently edited and reviewed docs
The following documents were edited by request, as part of the normal doc review cycle, or as a result of updates to how the Fastly web interface and API operate:
- Accounts and pricing plans
- Caching best practices
- Working with health checks
- Monitoring account activity with event logs
New and recently updated Japanese translations
We've recently added Japanese (日本語) translations for the following service guides:
- イメージオプティマイザーの設定 (Image Optimizer setup)
- イメージの配信 (Serving images)
The following Japanese (日本語) translations were recently updated to reflect changes in their English counterparts:
- カスタムログ形式 (Custom log formats)
Our documentation archive contains PDF snapshots of docs.fastly.com site content as of the above date. Previous updates can be found in the archive as well.
How to Position Centered Slider Text with CSS
When using the Centered Text/Image layouts in the Layers Slider, the image will follow the content by default because the image follows the content in the HTML order. You can manipulate how the content gets positioned via the Advanced option in the design bar to set the content to use an absolute position. This takes a measure of trial and error depending on the size of your image and slider.
If you use your browser inspector to look at the CSS for the slide, you will see the copy-container has a position: relative that you need to reset. Here is how to go about it:
Create the Class
- In your slider widget, ensure the Centered Image layout and Centered Text layout are selected in the slide toolbar.
- Click the Advanced button in the Design bar on the right
- Enter the custom class into the class field
- See below for the code snippets to use in the CSS box to gain the desired effect, then customize the values to fit your design.
See How to Use the Advanced Design Bar Option to Add Custom Classes to Widgets for a detailed tutorial of using the widget custom css option.
Placing the Image Above the Content
This allows us to change where the image sits and ensures it stays centered. Enter custom-slider into the class field:
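The original snippet is not shown here, but a minimal sketch of this rule might look like the following. The .copy-container selector comes from the slider markup discussed above; treat the values as starting points for your own trial and error:

.custom-slider .copy-container {
    position: absolute; /* removes the content from the normal flow so the image can move up */
    top: 0;
    left: 0;
    right: 0;
}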
To make the content sit below the image, you will need to give it some padding equal to the height of your image:
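For example, assuming the slide image is roughly 300px tall (substitute your own image height):

.custom-slider .copy-container {
    padding-top: 300px; /* roughly the height of your slide image */
}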
Overlaying Content on Image
All you need is absolute positioning and a higher z-index than the featured image. Enter overlay-slider into the class field:
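A sketch of the overlay rule — the z-index lifts the content above the featured image, and the top and left offsets position it over the image:

.overlay-slider .copy-container {
    position: absolute;
    z-index: 10;
    top: 30%;
    left: 10%;
}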
The top and left here are assuming you are using the default Full Width layout and slider height. Experiment with the values to get your content positioned just right.
About XR Environment Probe Subsystem
The purpose of this subsystem is to provide an interface for managing and interacting with XR environment probes.
Environment probes are an XR technique of capturing real-world imagery from a camera and organizing that information into an environment texture, such as a cube map, that contains the view in all directions from a certain point in the scene. Rendering 3D objects using this environment texture allows for real-world imagery to be reflected in the rendered objects. The result is generally realistic reflections and lighting of virtual objects as influenced by the real-world views.
In an XR application, environment probe functionality provides valuable lighting and reflection data from the real-world views that are used by the renderer to enhance the appearance of the rendered objects allowing for the virtual scene to blend better with the real-world environment. The following image illustrates the use of the environment texture from an environment probe applied to a sphere as a reflection map.
Environment probes
An environment probe is a location in space at which environment texturing information is captured. Each environment probe has a scale, orientation, position, and bounding volume size. The scale, orientation, and position properties define the transformation of the environment probe relative to the AR session origin. The bounding size defines the volume around the environment probe's position. An infinite bounding size indicates that the environment texture may be used for global lighting, whereas a finite bounding size expresses that the environment texture captures the local lighting conditions in a specific area surrounding the environment probe.
Environment probes may be placed at locations in the real-world to capture the environment information at each probe location. The placement of environment probes occurs via two different mechanisms:
Manual placement
Environment probes are manually placed in the scene. To achieve the most accurate environment information for a specific virtual object, increasing the proximity of an environment probe to the location of the virtual object improves the quality of the rendered object. Thus, manually placing an environment probe in or near important virtual objects results in the most accurate environment information being produced for that object.
Furthermore, if a virtual object is moving and the path of that movement is known, placing multiple environment probes along the movement path allows the rendering of that object to better reflect the motion of the virtual object through the real-world environment.
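As a concrete illustration, with a provider package such as AR Foundation installed, manual placement is typically done through its environment probe manager. The sketch below is hypothetical: the AREnvironmentProbeManager type and the AddEnvironmentProbe(Pose, Vector3, Vector3) signature come from AR Foundation rather than from this package, and they may differ between versions — consult the provider's Scripting API for the exact calls.

using UnityEngine;
using UnityEngine.XR.ARFoundation;

public class ProbePlacer : MonoBehaviour
{
    // Assumed to reference the AREnvironmentProbeManager on the AR Session Origin.
    public AREnvironmentProbeManager probeManager;

    // Place an environment probe at the position of an important virtual object.
    public void PlaceProbeAt(Transform target)
    {
        var pose = new Pose(target.position, target.rotation);
        // A finite bounding size captures the local lighting around the probe.
        probeManager.AddEnvironmentProbe(pose, Vector3.one, new Vector3(2f, 2f, 2f));
    }
}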
Automatic placement
Providing implementations may implement their own algorithms for choosing how and where to best place environment probes to achieve a good quality of environment information. Typically, the determination for automatic environment probe placement relies on key feature points that have been detected in the real-world environment. The methodology for making these automatic placement choices is completely in the control of the providing implementation.
Automatically placed environment probes provide a good overall set of environment information for the detected real-world features. However, manually placing environment probes at the locations of key virtual scene objects allow for improved environmental rendering quality of those important virtual objects.
Using XR Environment Probe Subsystem
This package defines only the abstract interface for interacting with environment probe functionality. To use this functionality in an application, you need to install a package that provides an implementation for the environment probe functionality.
This package adds new C# APIs for interacting with XR environment probe functionality. Refer to the Scripting API documentation for working with the environment probe interface.
Removes all keys and values from the preferences. Use with caution.
Call this function in a script to delete all current settings in the PlayerPrefs. Any values or keys that have previously been set up are then reset. Be careful when using this.
/Float("Health", 50.0F); PlayerPrefs.SetInt("Score", 20);"); } } }
/ { float m_Score; int m_Health; string m_PlayerName;
void Start() { //Fetch the PlayerPref settings SetText(); }
void SetText() { //Fetch the score, health and name from the PlayerPrefs (set these Playerprefs in another script) m_Score = PlayerPrefs.GetFloat("Health", 0); m_Health =(); } } } | https://docs.unity3d.com/kr/2017.1/ScriptReference/PlayerPrefs.DeleteAll.html | 2020-05-25T15:35:55 | CC-MAIN-2020-24 | 1590347388758.12 | [] | docs.unity3d.com |
Assertions¶
An assertion classifies the clinical significance of a variant-disease association under recognized guidelines.
This section of the documentation details the knowledge model that CIViC uses for representing assertions. Each page roughly corresponds to a field in the Assertion schema. Note that Assertions share several key fields with Evidence Items, listed on the Shared Fields page along with notes regarding differences in the meaning of those fields between the two entities.
Using Azure Resource Manager Templates to deploy a Corda Enterprise node
This document will explain how to deploy a Corda Enterprise node to the Azure cloud using the Azure Resource Manager templates via the Azure Marketplace.
Prerequisites
You will need a Microsoft Azure account which can create new resource groups and resources within that group.
Find Corda Enterprise on Azure Marketplace
Go to the Azure Marketplace and search for
corda enterprise and select the
Corda Enterprise Single Node option:
Click on
GET IT NOW:
Click on
Continue to agree to the terms:
This will take you to the Azure Cloud Portal. Log in to the Portal if you are not already. It should redirect to the Corda Enterprise template automatically:
Click on
Create to enter the parameters for the deployment.
Enter the VM base name, an SSH public key or password to connect to the resources over SSH, an Azure region to host the deployment and create a new resource group to house the deployment. Click
OK.
Next select the virtual machine specification. The default here is suitable for Corda Enterprise so its fine to click
OK. Feel free to select a different specification of machine and storage if you have special requirements.
Next configure the Corda node settings. Currently the only version available with the template is the current release of Corda Enterprise. We may add more version options in the future.
Enter the city and country that you wish to be associated with your Corda node.
You will also need to provide a one-time-download-key in order to set up the template. This will allow the template scripts to connect to, and provision the node on, the Corda Testnet.
You can register with Testnet and obtain the
ONE-TIME-DOWNLOAD-KEY at or see the Testnet documentation:
The Corda Testnet.
Finally you can select your database sizing in the
Corda Data Tier Performance (the default is fine for typical usage).
Click
OK.
Wait for the validation checks to pass and check the settings. Click
OK.
Check the Terms of Use and if everything is OK click
Create. Azure will now run the template and start to provision the node to your chosen region. This could take some time.
You will be redirected to your
Dashboard where the deployment will appear if the deployment completes without errors.
You can now log in to your resource by selecting the virtual machine in the resource group and clicking on
Connect. Log in with SSH.
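For example, from a terminal on your local machine (assuming you supplied an SSH public key during setup, and substituting the admin username and public IP address shown in the portal):

ssh <admin-username>@<vm-public-ip>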
Testing the deployment
You can test the deployment by following the instructions in Using the Node Explorer to test a Corda Enterprise node on Corda Testnet.
Stellar Examples
Stellar examples help to illustrate how you can use Stellar statements to transform and enrich streaming data to identify suspicious behavior.
Let's consider a situation
where you have a message containing
field ip_src_addr and you want to
determine if the src address is one of a few subnet ranges. You also want to store the
information in a variable called
is_local:
is_local := IN_SUBNET( ip_src_addr, '192.168.0.0/16', '192.169.0.0/16')
Now, let's consider a situation where you want to determine if the top level domain of a
domain name, stored in a field called
domain, is from a specific set of
whitelisted TLDs:
is_government := DOMAIN_TO_TLD(domain) in [ 'mil', 'gov' ]
Let’s assume further that the data coming in is known to be spotty with possible spaces and a dot at the end periodically due to a known upstream data ingest mistake. You can do that with three Stellar statements, the first two sanitizing the domain field and the final statement performing the whitelist check:
sanitized_domain := TRIM(domain)
sanitized_domain := if ENDS_WITH(sanitized_domain, '.') then CHOP(sanitized_domain) else sanitized_domain
is_government := DOMAIN_TO_TLD(sanitized_domain) in [ 'mil', 'gov' ]
Now, let’s consider a situation where you have a blacklist of known malicious domains.
You can use the CCP data importer to store this data in HBase under the enrichment type
malicious_domains. As data streams by, you will want to indicate
whether a domain is malicious or not. Further, as before, you still have some ingestion
cruft to adjust:
sanitized_domain := TRIM(domain)
sanitized_domain := if ENDS_WITH(sanitized_domain, '.') then CHOP(sanitized_domain) else sanitized_domain
in_blacklist := ENRICHMENT_EXISTS('malicious_domains', sanitized_domain, 'enrichments', 't')
Associated Projects
Application Bindings
OpenStack supported binding:
Unofficial libraries and bindings:
PHP
PHP-opencloud - Official Rackspace PHP bindings that should work for other Swift deployments too.
Ruby
swift_client - Small but powerful Ruby client to interact with OpenStack Swift
nightcrawler_swift - This Ruby gem teleports your assets to an OpenStack Swift bucket/container
swift storage - Simple OpenStack Swift storage client.
Java
libcloud - Apache Libcloud - a unified interface in Python for different clouds with OpenStack Swift support.
jclouds - Java library offering bindings for all OpenStack projects
java-openstack-swift - Java bindings for OpenStack Swift
javaswift - Collection of Java tools for Swift
Bash
.NET
openstacknetsdk.org - An OpenStack Cloud SDK for Microsoft .NET.
Go
Authentication
Command Line Access
External Integration
1space - Multi-cloud synchronization tool - supports Swift and S3 APIs
swift-metadata-sync - Propagate OpenStack Swift object metadata into Elasticsearch
Monitoring & Statistics¶
Swift Informant - Swift proxy Middleware to send events to a statsd instance.
Swift Inspector - Swift middleware to relay information about a request back to the client.
Alternative API
Benchmarking/Load Generators
Custom Logger Hooks
swift-sentry - Sentry exception reporting for Swift
Storage Backends (DiskFile API implementations)
Swift-on-File - Enables objects created using Swift API to be accessed as files on a POSIX filesystem and vice versa.
swift-scality-backend - Scality sproxyd object server implementation for Swift.
Developer Tools
SAIO bash scripts - Well commented simple bash scripts for Swift all in one setup.
vagrant-swift-all-in-one - Quickly setup a standard development environment using Vagrant and Chef cookbooks in an Ubuntu virtual machine.
SAIO Ansible playbook - Quickly setup a standard development environment using Vagrant and Ansible in a Fedora virtual machine (with built-in Swift-on-File support).
runway - Runway sets up a swift-all-in-one (SAIO) dev environment in an lxc container.
Multi Swift - Bash scripts to spin up multiple Swift clusters sharing the same hardware
Other
Glance - Provides services for discovering, registering, and retrieving virtual machine images (for OpenStack Compute [Nova], for example).
Django Swiftbrowser - Simple Django web app to access OpenStack Swift.
Swift-account-stats - Swift-account-stats is a tool to report statistics on Swift usage at tenant and global levels.
PyECLib - High-level erasure code library used by Swift
liberasurecode - Low-level erasure code library used by PyECLib
Swift Browser - JavaScript interface for Swift
swift-ui - OpenStack Swift web browser
swiftbackmeup - Utility that allows one to create backups and upload them to OpenStack Swift
s3compat - S3 API compatibility checker
The Heightfield command creates a NURBS surface or mesh based on grayscale values of the pixels in an image file.
Heightfield options
The image's "height" is sampled at the specified number of control points along the u and v directions of the image.
Sets the scale of the height of the object.
Uses the image as a render texture for the created object.
Evaluates the color of the texture at each texture coordinate (u,v) and sets the vertex color to match.
See: ComputeVertexColors.
Creates a mesh with vertex points at each of the sample locations.
Creates a surface with control points at each of the sample locations.
Creates a surface that passes through each sample location's height.
Create surfaces
Show Z-buffer bitmap
URL Routing

In this tutorial, you will modify the Wingtip Toys sample application to support URL routing. Routing enables your web application to use URLs that are friendly, easier to remember, and better supported by search engines. This tutorial builds on the previous tutorial "Membership and Administration" and is part of the Wingtip Toys tutorial series.
What you'll learn:
- How to register routes for an ASP.NET Web Forms application.
- How to add routes to a web page.
- How to select data from a database to support routes.
By default, the Web Forms template includes ASP.NET Friendly URLs. Much of the basic routing work will be implemented by using Friendly URLs. However, in this tutorial you will add customized routing capabilities.
Before customizing URL routing, the Wingtip Toys sample application links to a product using a query-string URL of the form ProductDetails.aspx?productID=2.
By customizing URL routing, the application can instead link to the same product using a friendlier URL that includes the product name.
URL Patterns
A URL pattern can contain literal values and variable placeholders (referred to as URL parameters). The literals and placeholders are located in segments of the URL, which are delimited by the slash (/) character. When a request is made, the URL is parsed into segments and placeholders, and the variable values are provided to the request handler.
Mapping and Registering Routes
Before you can include routes to pages of the Wingtip Toys sample application, you must register the routes when the application starts. To register the routes, you will modify the
Application_Start event handler.
In Solution Explorer, open the Global.asax.cs file and update the Application_Start event handler so that it registers the custom routes by calling a new RegisterCustomRoutes method:

void Application_Start(object sender, EventArgs e)
{
    // Code that runs on application startup.
    RouteConfig.RegisterRoutes(RouteTable.Routes);

    // ... other startup code added in the earlier tutorials ...

    // Create the custom role and user.
    RoleActions roleActions = new RoleActions();
    roleActions.AddUserAndRole();

    // Add Routes.
    RegisterCustomRoutes(RouteTable.Routes);
}

void RegisterCustomRoutes(RouteCollection routes)
{
    routes.MapPageRoute(
        "ProductsByCategoryRoute",
        "Category/{categoryName}",
        "~/ProductList.aspx"
    );

    routes.MapPageRoute(
        "ProductByNameRoute",
        "Product/{productName}",
        "~/ProductDetails.aspx"
    );
}
When the Wingtip Toys sample application starts, it calls the
Application_Start event handler. At the end of this event handler, the
RegisterCustomRoutes method is called. The
RegisterCustomRoutes method adds each route by calling the
MapPageRoute method of the
RouteCollection object. Routes are defined using a route name, a route URL and a physical URL.
The first parameter ("
ProductsByCategoryRoute") is the route name. It is used to call the route when it is needed. The second parameter ("
Category/{categoryName}") defines the friendly replacement URL that can be dynamic based on code. You use this route when you are populating a data control with links that are generated based on data. A route is shown as follows:
routes.MapPageRoute( "ProductsByCategoryRoute", "Category/{categoryName}", "~/ProductList.aspx" );
The second parameter of the route includes a dynamic value specified by braces (
{ }). In this case, the
categoryName is a variable that will be used to determine the proper routing path.
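Once registered, a data-bound link that targets this route can be generated with GetRouteUrl — the same technique the ItemTemplate markup later in this tutorial uses for ProductByNameRoute. The CategoryName property below is assumed to be available on the bound item:

<a href="<%#: GetRouteUrl("ProductsByCategoryRoute", new { categoryName = Item.CategoryName }) %>">
    <%#: Item.CategoryName %>
</a>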
Note
Optional
You might find it easier to manage your code by moving the
RegisterCustomRoutes method to a separate class. In the Logic folder, create a separate
RouteActions class. Move the above
RegisterCustomRoutes method from the Global.asax.cs file into the new
RoutesActions class. Use the
RoleActions class and the
createAdmin method as an example of how to call the
RegisterCustomRoutes method from the Global.asax.cs file.
You may also have noticed the
RegisterRoutes method call using the
RouteConfig object at the beginning of the
Application_Start event handler. This call is made to implement default routing. It was included as default code when you created the application using Visual Studio's Web Forms template.
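For reference, the default RouteConfig class generated by the Web Forms template looks roughly like the following — it enables the ASP.NET Friendly URLs feature mentioned earlier, though the exact code may vary slightly between template versions:

using System.Web.Routing;
using Microsoft.AspNet.FriendlyUrls;

public static class RouteConfig
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        var settings = new FriendlyUrlSettings();
        settings.AutoRedirectMode = RedirectMode.Permanent;
        routes.EnableFriendlyUrls(settings);
    }
}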
Retrieving and Using Route Data
As mentioned above, routes can be defined. The code that you added to the
Application_Start event handler in the Global.asax.cs file loads the definable routes.
Setting Routes

Update the ItemTemplate element of the ProductList.aspx page with the updates highlighted in yellow, so the markup appears as follows:
<ItemTemplate> <td runat="server"> <table> <tr> <td> <a href="<%#: GetRouteUrl("ProductByNameRoute", new {productName = Item.ProductName}) %>"> <image src='/Catalog/Images/Thumbs/<%#:Item.ImagePath%>' width="100" height="75" border="1" /> </a> </td> </tr> <tr> <td> <a href="<%#: GetRouteUrl("ProductByNameRoute", new {productName = Item.ProductName}) %>"> <%#:Item.ProductName%> </a> <br /> <span> <b>Price: </b><%#:String.Format("{0:c}", Item.UnitPrice)%> </span> <br /> <a href="/AddToCart.aspx?productID=<%#:Item.ProductID %>"> <span class="ProductListItem"> <b>Add To Cart<b> </span> </a> </td> </tr> <tr> <td> </td> </tr> </table> </p> </td> </ItemTemplate>
Open the code-behind of ProductList.aspx.cs and add the following namespace as highlighted in yellow:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using WingtipToys.Models;
using System.Web.ModelBinding;
using System.Web.Routing;
Replace the GetProducts method in the ProductList.aspx.cs code-behind so that, in addition to the query-string category ID it already accepts, it also accepts the categoryName route value (via the [RouteData] attribute from the System.Web.ModelBinding namespace added above) and filters the product query by whichever value is supplied.

Summary

In this tutorial, you added routes for categories and products. You have learned how routes can be integrated with data controls that use model binding. In the next tutorial, you will implement global error handling.
Additional Resources
ASP.NET Friendly URLs
Deploy a Secure ASP.NET Web Forms App with Membership, OAuth, and SQL Database to Azure App Service
Microsoft Azure - Free Trial
$ oc policy add-role-to-user \
    system:image-puller system:serviceaccount:project-a:default \
    --namespace=project-b
Advanced Data Scrubbing
In addition to using
beforeSend in your SDK or our regular server-side data scrubbing features to redact sensitive data, Advanced Data Scrubbing is an alternative way to redact sensitive information just before it is saved in Sentry. It allows you to:
- Define custom regular expressions to match on sensitive data
- Detailed tuning on which parts of an event to scrub
- Partial removal or hashing of sensitive data instead of deletion
A Basic Example
Go to your project- or organization-settings and click Security and Privacy in the sidebar. Scrolling down, you will find a new section Advanced Data Scrubbing.
- Click on Add Rule. You will be presented with a new dialog.
- Select Mask as Method.
- Select Credit card numbers as Data Type.
- Enter
$stringas Source.
As soon as you hit Save, we will attempt to find all credit card numbers in your events going forward, and replace them with a series of
******.
For a more verbose tutorial check out this blogpost.
Rules generally consist of three parts:
Methods
- Remove: Remove the entire field. We may choose to either set it to
null, remove it entirely, or replace it with an empty string depending on technical constraints.
- Mask: Replace all characters with
*.
- Hash: Replace the matched substring with a hashed value.
- Replace: Replace the matched substring with a constant placeholder value (defaulting to
[Filtered]).
Data Types
Regex Matches: Custom regular expression. For example:
[a-zA-Z0-9]+. Some notes:
- Do not write
/[a-zA-Z0-9]+/g, as that will search for a literal
/and
/g.
- For case-insensitivity, prefix your regex with
(?i).
- If you're trying to use one of the popular regex "IDEs" like regex101.com, Golang is usually closest to how Sentry understands your regex.
Credit Card Numbers: Any substrings that look like credit card numbers.
Password Fields: Any substrings that look like they may contain passwords. Any string that mentions passwords, auth tokens or credentials, any variable that is called
passwordor
auth.
IP Addresses: Any substrings that look like valid IPv4 or IPv6 addresses.
IMEI Numbers: Any substrings that look like an IMEI or IMEISV.
UUIDs
PEM Keys: Any substrings that look like the content of a PEM-keyfile.
Auth in URLs: Usernames and passwords in URLs like.
US social security numbers: 9-digit social security numbers for the USA.
Usernames in filepaths: For example
myuserin
/Users/myuser/file.txt,
C:/Users/myuser/file.txt,
C:/Documents and Settings/myuser/file.txt,
/home/myuser/file.txt, ...
MAC Addresses
Anything: Matches any value. This is useful if you want to remove a certain JSON key by path using Sources regardless of the value.
Sentry does not know what your code does
Sources
Selectors allow you to restrict rules to certain parts of the event. This is useful to unconditionally remove certain data by event attribute, and can also be used to conservatively test rules on real data. A few examples:
**to scrub everything
$error.valueto scrub in the exception message
$messageto scrub the event-level log message
extra.'My Value'to scrub the key
My Valuein "Additional Data"
extra.**to scrub everything in "Additional Data"
$http.headers.x-custom-tokento scrub the request header
X-Custom-Token
$user.ip_addressto scrub the user's IP address
$frame.vars.footo scrub a stack trace frame variable called
foo
contexts.device.timezoneto scrub a key from the Device context
tags.server_nameto scrub the tag
server_name
All key names are treated case-insensitively.
Using an event ID to auto-complete sources
Above the Source input field you will find another input field for an event ID. Providing a value there allows for better auto-completion of arbitrary Additional Data fields and variable names.
The event ID is purely optional and the value is not saved as part of your settings. Data scrubbing settings always apply to all new events within a project/organization (going forward).
Advanced source names" }, "exception": { .values.*.value] [Remove] [Anything] from [logentry.formatted]
Boolean Logic
You can combine sources using boolean logic.
- Prefix with
!to invert the source.
- Combine sources with && (AND) and || (OR).

Value Types

Select values by their JSON type using the following:

$string: Matches any string value
$number: Matches any integer or float value
$datetime: Matches any field in the event that represents a timestamp
$array: Matches any JSON array value
$object: Matches any JSON object
Select known parts of the schema using the following:
$error: Matches a single exception instance. Alias for
exception.values.*
$stack: Matches a stack trace instance. Alias for
stacktrace || $error.stacktrace || $thread.stacktrace
$frame: Matches a frame in a stack trace. Alias for
$stacktrace.frames.*
$http: Matches the HTTP request context of an event. Alias for
request
$user: Matches the user context of an event. Alias for
user
$message: Matches the top-level log message. Alias for
$logentry.formatted
$logentry: Matches the
logentryattribute of an event. Alias for
logentry
$thread: Matches a single thread instance. Alias for
threads.values.*
$breadcrumb: Matches a single breadcrumb. Alias for
breadcrumbs.values.*
$span: Matches a trace span. Alias for
spans.*
$sdk: Matches the SDK context. Alias for
sdk
Escaping Special Characters

If a key contains spaces or special characters, wrap it in single quotes, as in extra.'My Value' above. To match a literal single quote inside the quotes, escape it by doubling it ('').
Known Limitations of server-side data scrubbing
The following limitations generally apply to all server-side data scrubbing, be it basic Safe Fields usage or Advanced Data Scrubbing.
Hashing, masking or replacing a JSON object, array or number (anything that is not a string) cannot be done in all circumstances as it would change the JSON type of the value and violate assumptions Sentry's internals make about the data schema. Data scrubbing will ignore the Method in those cases and always remove/replace with
nullas that is always safe.
Sentry's internals require that the event user's IP address must either be
nullor a valid IPv4/IPv6 address. If you're trying to hash, mask or replace IP addresses, data scrubbing will move the replacement value into the user ID (if one is not already set) in order to avoid breaking this requirement while still providing useful data for the Users count on an issue.
Reset Settings & Data
Warning! Use these buttons with caution: once you press a reset button, you cannot return to the previous state.
- Plugin Settings – the button will erase all custom settings you have made in the plugin and restore the defaults, as when the plugin was first installed. The button will not reset the Social Followers Counter setup.
- Followers Settings – the button will erase all settings for the social followers counter. It will not affect other plugin settings.
- Analytics Data – the button will erase all information in internal analytics. The analytics is a stand-alone feature of the plugin that you can activate from the menu. The analytics data is stored and collected separately from the share counter. This button does not affect the share counter information.
- Internal Counters – the button will reset only internal counters. Internal are those counters that are assigned to buttons without a social counter API. The value increases with each button click and it is stored on each post.
- Counters Last Update – the button will remove the last update time from each post/page on the site. This will cause an immediate share counter update for all posts/pages when they are opened.
- Short URL Cache & Image Cache – the button will clear the cached short URLs (if you are using short URLs) and the cached featured images for sharing. The plugin caches the featured images for faster work, eliminating a database lookup each time. The cache updates automatically when you save posts or pages, but if a problem appears you can use this button.
- All Counter Information – unlike the internal counter button, this button will remove all the counter information collected by the plugin. This includes the internal counters, counter last update time and also the cache official share counter values. Next time you open a post or page you will get a fully fresh share counter values.
There are 10 courts available for basketball plays and formations. The NBA – Goal Top court is available to everyone and used (with the Whiteboard theme) in the free basketball play designer. The other 9 courts are available only to pro subscribers.
Each court type (Goal Top, Goal Bottom, and Full) has a version for NBA, NCAA, and High School court markings.
These screenshots show the Gymnasium theme, but all basketball courts are available in the Gymnasium, Whiteboard, and Chalkboard themes as well as any custom themes that you create as a pro subscriber.
JournalNode
  Memory: 1 GB (default). Set this value using the Java Heap Size of JournalNode in Bytes HDFS configuration property.
  CPU: 1 core minimum.
  Disk: 1 dedicated disk.

NameNode
  Memory: Minimum 1 GB (for proof-of-concept deployments). Add an additional 1 GB for each additional 1,000,000 blocks. Snapshots and encryption can increase the required heap memory. See Sizing NameNode Heap Memory. Set this value using the Java Heap Size of NameNode in Bytes HDFS configuration property.
  CPU: Minimum of 4 dedicated cores; more may be required for larger clusters.
  Disk: Minimum of 2 dedicated disks for metadata, plus 1 dedicated disk for log files (this disk may be shared with the operating system). Maximum disks: 4.

DataNode
  Memory: Minimum 4 GB, maximum 8 GB. Increase the memory for higher replica counts or a higher number of blocks per DataNode. When increasing the memory, Cloudera recommends an additional 1 GB of memory for every 1 million replicas above 4 million on the DataNodes. For example, 5 million replicas require 5 GB of memory. Set this value using the Java Heap Size of DataNode in Bytes HDFS configuration property.
  CPU: Minimum 4 cores. Add more cores for highly active clusters.
  Disk: Minimum 4, maximum 24. The maximum acceptable size will vary depending upon how large the average block size is. The DataNode's scalability limits are mostly a function of the number of replicas per DataNode, not the overall number of bytes stored. That said, having ultra-dense DataNodes will affect recovery times in the event of machine or rack failure. Cloudera does not support exceeding 100 TB per DataNode. You could use 12 x 8 TB spindles or 24 x 4 TB spindles. Cloudera does not support drives larger than 8 TB.
Extracting a Users Body Features
Using the Fision Web SDK you can request access to the body features of your current user.
Adding the "Body Profile" buttonAdding the "Body Profile" button
To request body features of a user you can add the "Body Profile" button to your web page like this:
<script type="text/javascript"> document.addEventListener("DOMContentLoaded", function(event) { var API_KEY = 'your_api_key_here'; FisionSDK.initialize({ apiKey: API_KEY }).then(function (fisionSDK) { fisionSDK.bodyProfileButton({ parentElement: document.getElementById('button_container'), // Add additional properties if needed (see "Additional properties" below) onBodyProfileReceived: function (bodyFeatures) { console.log('Received bodyfeatures:', bodyFeatures); }, onSignOut: function () { console.log('The user has been signed out.'); }, }); }); }); </script>
The
bodyProfileButton-method will add the body profile button to the specified
parentElement.
Additional propertiesAdditional properties
label: Override the default label.
- Default: "What's my size"
- Type: string
profileAvailableLabel: Override the default profile available label.
- Default: "My Meepl Profile"
- Type: string
backgroundColor: Override the background color of the button.
- Default: "#fff2ec"
- Type: string
foregroundColor: Override the foreground color of the button.
- Default: "#05184f"
- Type: string
fixedWidth: Set a fixed width for the button.
- Default: it takes the maximum width of the label and the profileAvailableLabel
- Type: number
Button action: overlayButton action: overlay
- When the user clicks the button, an overlay will appear.
- If the user is not yet signed in they will be able to sign in to their meepl account or create a new meepl account.
- If the user has no body profile yet, they will be able to create one.
- The user will be asked to share their body features with your company.
ResultResult
When the body features are available the
onBodyProfileReceived-function is called with the bodyFeatures argument. The body features will be available in JSON as you can see here:
{ "gender": "female" // possible value: 'female' | 'male' | 'other', "bodyRepresentationURL": "", // link to the 3D model of the user "measurements": // object with measurement names as keys and their values (all values are numbers in cm (lengths) or degree (angles)) { "bodyHeight": 174, "innerLegLength": 68.85, "waistCircumference": 86.86, "armLength": 71.25, "waistbeltCircumference": 88.3, "chestCircumference": 91.7, "hipCircumference": 95.81, // ... } }
For a full list of available measurements and their definitions please contact [email protected].
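For example, inside the onBodyProfileReceived callback shown earlier you could read individual values from this structure (the field names below are taken from the JSON above):

onBodyProfileReceived: function (bodyFeatures) {
  var height = bodyFeatures.measurements.bodyHeight;        // e.g. 174 (cm)
  var waist  = bodyFeatures.measurements.waistCircumference;
  console.log(bodyFeatures.gender, height, waist);
}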
Example: Custom Tailored Clothing ShopExample: Custom Tailored Clothing Shop
The following is a demo of how this integration will work.
Deprecated: Requesting Body FeaturesDeprecated: Requesting Body Features
To request body features of your user load the body profiler and request body features like this:
FisionSDK.initialize(options).then(function (sdk) {
  return sdk.loadBodyProfiler();
}).then(function (body_profiler) {
  body_profiler.getBodyFeatures()
    .then(function (bodyFeatures) {
      console.log('Received bodyfeatures', bodyFeatures);
    })
    .catch(function (error) {
      console.error('An error happened while requesting the body features: ' + error);
    });
});
getBodyFeatures returns a Promise that resolves once the user has given your website access to their data (this might take a while).
Kirkbymoorside Town Council
Agenda Staffing Committee - 9th February 2016
Issued on 4th February 2016 for a meeting of the Staffing Committee to be held in the Moorside Room, 9 Church Street, Kirkbymoorside on 9th February 2016 at 10am.
- To approve the minutes of the meeting held on 12th March 2015
- To consider matters arising
- Public Session - to allow members of the public to make representations, ask questions and give evidence in respect of any items of business
- To appoint appraisers for the annual appraisal
- To agree to carry out the staffing appraisal immediately following the staffing committee meeting on Tuesday 9th February 2016
- To agree on the date of the next meeting.
AssignmentUpdate
The RadGantt AssignmentUpdate is fired when an assignment's collection is about to be updated through the provider.
AssignmentUpdate event handler receives two parameters:
sender is the RadGantt control instance.
e is an object of type AssignmentEventArgs. It provides access to the updated RadGantt assignments collection.
Example
<telerik:RadGantt</telerik:RadGantt>
protected void RadGantt1_AssignmentUpdate(object sender, Telerik.Web.UI.Gantt.AssignmentEventArgs e)
{
    foreach (var item in e.Assignments)
    {
        //...
    }
}

Protected Sub RadGantt1_AssignmentUpdate(sender As Object, e As Telerik.Web.UI.Gantt.AssignmentEventArgs)
    For Each item In e.Assignments
        '...
    Next
End Sub
Subscribing to only one of the following events: AssignmentInsert, AssignmentUpdate, AssignmentDelete, will cause a postback to be triggered for the other two events, instead of a callback.
Atrium Integrator permissions
To use the Atrium Integrator console, you must be assigned to one of the assigned BMC Remedy AR System permission roles:
- AI Admin Group — To give administrator rights, assign the AI Admin group to the user.
- AI User Group — To give AI user rights, assign the AI user group to the user.
For more information about BMC Remedy AR System users and groups, see Creating users, groups, and roles.
Depending on the role assigned to a user the icons in Atrium Integrator console are enabled.
Note
To use the Atrium Integrator Spoon, you must be assigned AR Admin (Base administrator) role.
The following table lists the permissions for AI Admin and AI User role.
Atrium Integrator roles and permissions
Comments

The correct names are "AI User Group" and "AI Admin Group".
What does the role "AR User (Base Administrator)" refer to?
Hi Jan,
Thank you for your feedback on the documentation.
We have made the changes to the document based on your comment.
For details about AR User (Base Administrator), please refer to
Regards,
Amol
Hi BMC Team
It seems that theese permissions are invalid for AI 8.1 as they NO longer exist.
Kindly update this section to avoid confusion.
Davinder Singh - Will update the information in this section after discussing with the SMEs.
Jelastic & CS CorrespondenceThe table below displays dependencies between Cloud Scripting and the Jelastic Platform versions within the same hosting provider platform.
Note
The Jelastic Platform version can be checked either at your dashboard, or within the Jelastic Cloud Union page, depending on your hosting provider. The Cloud Scripting version can be chosen at the bottom of the page within the present documentation.
The processor is used to simulate the processing of flow items in a model. The process is simply modeled as a forced time delay. The total time is split between a setup time and a process time. The processor can process more than one flow item at a time. Processors may call for operators during their setup and/or processing times. When a processor breaks down, all of the flow items that it is processing will be delayed.
The processor is a fixed resource. It is also a super-class of the combiner and separator 3D objects. It continues to receive flow items until its maximum content is met. Each flow item that enters the processor goes through a setup time followed by a process time. After these two processes have finished, the flow item is released. If the maximum content value is greater than one, then flow items will be processed in parallel.
If the processor is set to use operators during its setup or process time, then at the beginning of the respective operation it will call the user-defined number of operators using the requestoperators command with the processor as the station, and the item as the involved object. This will cause the processor to be stopped until the operators have arrived.
Once all operators have arrived, the processor will resume its operation. Once the operation is finished, the processor will release the operators it called. If the processor is set to use the same operators for both setup and process time, then the processor won't release the operators until both the setup process times are finished.
For information on events, see the Event Listening page.
The processor uses the standard events that are common to all fixed resources. See Fixed Resources - Events for an explanation of these events.
The processor has the following additional events:
This event is fired when the process time has expired. When this event fires it will execute the on process finish trigger where you can execute custom logic using FlexScript or preconfigured pick options.
It has the following parameters:
This event is fired when the setup time has expired, right before the process time event fires. When this event fires it will execute the on setup finish trigger where you can execute custom logic using FlexScript or preconfigured pick options.
It has the following parameters:
The operator reference event will only fire if either the Use Operator(s) for Setup or Use Operator(s) for Process is checked. This event will fire after the item has entered the processor, before the setup or process time has begun. This event will evaluate the Pick Operator field.
If the processor has a setup and process time and the Use Setup Operator(s) for both Setup and Process on the processor's properties window is unchecked, the operator reference event will fire twice. The first event will fire right after the setup time event. The second event will fire after the process time event.
It has the following parameters:
The Pick Operator field should return a reference to a task executer or dispatcher object that will be used to process the item. The processor will dispatch a task sequence to the associated object which will call the task executer to the processor then utilize them until the setup and/or process time is complete.
This event fires after the setup is finished. It will evaluate the Process Time field.
It has the following parameters:
The Process Time field should return a number which is the processing time for the item.
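As an illustration, a typical value for this field is a random sample. One of the standard pick options is roughly the following FlexScript — treat this as a sketch, since the exact default pick-option text and syntax vary by FlexSim version:

return exponential(0, 10, getstream(current));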
This event fires after the item has entered the object. It will evaluate the Setup Time field.
It has the following parameters:
The Setup Time field should return a number which is the setup time for the item.
For statistical purposes, the processor will be in one of the following states at various points during a simulation run. The current state can be viewed by clicking on the object and then viewing the Statistics pane in Quick Properties.
Idle
The object is empty.

Setup
The object is in its modeller-defined setup time.

Processing
The object is in its modeller-defined process time.

Blocked
The object has released its flow item(s), but downstream objects are not ready to receive them yet.

Waiting for Operator
The object is waiting for an operator to arrive, either to repair a breakdown, or to operate on a batch.

Waiting for Transporter
The object has released a flow item and a downstream object is ready to receive it, but a transport object has not picked it up yet.
The processor uses the standard statistics that are common to all fixed resources. See Fixed Resources - Statistics for an explanation of these statistics.
The processor object has six tabs with various properties. The last four tabs are the standard tabs that are common to all fixed resources. For more information about the properties on those tabs, see:
The other two tabs are available for the processor, combiner, separator, and multiprocessor. For more information about the properties on those tabs, see:
Harness Search
Use Harness Manager's global search to rapidly access Harness Applications, Application components, deployment history, and Audit Trails.
To begin, click the Search button (or type
/).
How Search Works
The search terms you enter are matched against these Harness entities:
- Applications
- Services
- Environments
- Workflows
- Pipelines
- Deployments
The Search modal's left pane displays a section for each matching entity (Applications, Services, etc.):
Within the Deployments section at left, each link previews a particular deployment event at right. Click the right-hand links to access the deployment's details page, or its Harness entities (Application, Workflow, etc.):
Selecting Applications or Pipelines at left can preview links to multiple related entities and Audit Trail events. Click Show All at right to access even large numbers of linked results—in this example, 82 Audits:
Search Logic
The search terms you enter are matched against Harness entities' Name and Description fields. You can improve entities' searchability by naming them carefully, and by adding relevant keywords to their Description fields.
Harness search supports substring matching: Typing in a portion of a longer Name or Description will retrieve a matching entity.
Harness does not support fuzzy search logic. It will not display results or suggestions for search terms that are misspelled, or that do not exactly match strings or substrings within Name and Description fields.
To search on multiple terms, separate them with spaces. Each term that you add makes your search more restrictive: For a match, all terms must be present in the Name or Description field of the same entity.
Search History
Harness retains your search history during a single logged-in Harness session, as follows:
- Closing and reopening the search modal restores your previous search terms. (Click Clear to override this.)
- Click Recent Searches to open a stack of (up to) your five most recent searches. Harness maintains this stack only per session. Once you sign out and sign back in, your search history is cleared.
In this example, we send a follow-up push notification to users who added items to their shopping cart but did not complete their purchase within one day.
- Go to the Messaging tab of the dashboard. Click Create Message and name the message. For example: “Shopping cart abandonment push.”
- Choose Push Notification as the message type.
- Under Targets, select “All Users.”
- For Delivery, select “Triggered.” Specify the trigger to be “User triggers event.” Define the event that will trigger the message (for example, “AddToCart”).
- Set a 1 day Delay at optimal time.
- Add an exclusion by selecting “Unless user triggers event” with the event as “Checkout.”
- Customize the message text.
- Click Start in the top right corner to set the message live.
Currency
- Currency Setup
- Defines the base currency and any additional currencies that are accepted as payment. Also establishes the import connection and schedule that is used to update currency rates automatically.
- Currency Symbols
- Defines the currency symbols that appear in product prices and sales documents such as orders and invoices. Magento support currencies from over two hundred countries around the world.
- Updating Currency Rates
- Currency rates can be updated manually or imported into your store as needed, or according to a predefined schedule.
- Currency Chooser
If multiple currencies are available, the currency chooser is available in the header of the store.
CLIENT is the LightWave Client process, which is responsible for relaying messages between a client application and a web service based on the contents of the API Definition. The process is started by running the CLIENT program from TACL or by configuring the program as a Pathway Server Class. This section describes how to start the program and available program options.
Starting CLIENT as Standalone Process
The CLIENT process may be started by running the CLIENT program from TACL.
tacl > run CLIENT / run-options / program-options
The "--api" and "--base-url" command-line-options are required. All others are optional. The run-options are the standard TACL run options. Note that the process does not optn the IN or OUT file. You should be logged-on as a user with sufficient privileges to access the system resources that the process requires.
Configuring CLIENT as a Pathway Server Class
The CLIENT process may be configured as a Pathway Server Class. This is the preferred method, as it allows CLIENT processes to be created and deleted to meet application demand. When configuring a Server Class, program options may be specified with the STARTUP attribute or specified as PARAMs.
Configuration Using the Server Class STARTUP Attribute
Program options may be supplied directly in the STARTUP string or entered into an EDIT file and supplied using a Command File.
reset server
set server program client
set server startup "--api myapi --base-url"
Or
reset server
set server program client
set server startup "@cmdfile"

EDIT file cmdfile contents:
--api myapi --base-url --log logfile info
Using the STARTUP attribute with a command file allows the program options to be modified without re-configuring the Server Class. Note that changes to the command file take effect when a new CLIENT process is started and have no effect on processes that are already running.
Configuration Using Server Class PARAMs
Program options may be specified as individual PARAMs. When options are specified as PARAMs, do not include the leading '--' characters, which would make the PARAM name invalid. The PARAM values should be enclosed in quotes:
set server param api "myapi"
set server param base-url "<url>"
Some program options, such as cert-no-verify, do not require a value. The presence of these options on the command line activates the feature. Because Pathway PARAMs require a value, when specifying these options as PARAMs, specify any value for the PARAM to activate the feature. The value itself is ignored. For example:
set server param cert-no-verify "1"
set server param cert-no-verify "true"
Because the PARAM value is ignored, both of these examples will activate the cert-no-verify feature.
CLIENT Program Options
@<command-file>
Reads command line options from <command-file>. Options specified on the command line override any duplicates specified in the file. At most, one '@' option may be used. The file itself cannot contain an '@' option (i.e., no nesting).
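For instance (file name and option values illustrative), the options could be kept in an EDIT file named CMDFILE and referenced with '@'; a --log option also given on the command line would be expected to override the --log entry in the file:

tacl> run CLIENT / name $lwc, nowait, term $zhome / @cmdfile --log $0 debug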
--api <file-name>
The name of an API definition file. This option is required. The API definition must be exported from the LightWave Client filesystem into an API definition file through the LightWave Client Console or by using the CUTILITY --export-api command. Changes to the API definition can be incorporated while CLIENT is running (i.e., without restarting) if the --monitor api option is used.
--api-param-<param-name> <param-value>
The API parameter value for the parameter <param-name>. This option should be specified for each parameter defined in the API definition. See Working with API Parameters.
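For example, if the API definition declares parameters named username and api-key (the names and values below are illustrative), each parameter is supplied as its own option:

--api-param-username johnsmith
--api-param-api-key 35bddf603d7b4cef9fbaf1689c1cd49e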
--auth <file-name>
The name of an auth config file. For more information on auth configuration and request signing, see Request Authentication and Signing.
--base-url <url>
The base URL of the target web service in the form http[s]://host[:port]/[base-path]. Note that the optional base-path will be concatenated with the API operation path to form the full URL. This option is required.
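For example (values illustrative), given:

--base-url https://host.example.com/services

an API operation with path /employees would be requested at https://host.example.com/services/employees.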
--blob-files [ $vol. ] [ subvol. ] file-name-prefix [ userid,groupid | groupname,username ] [ security-string ] [ extents=<pri>,<sec>,<max> ]
A pattern which specifies the file system location and file name prefix for output BLOB files and optionally, the user id and file security. The file name prefix is limited to 1 to 3 characters with the remaining 5 characters assigned by the CLIENT process. If the volume or subvolume portion of the pattern is omitted, the process default volume and subvolume are used. If the option is omitted, the default pattern is "$current-vol.current-subvol.BLB" and the userid and file security are that of the CLIENT process. Note that client applications are responsible for disposing of the BLOB files once they have been processed.
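For example, the following pattern (volume, subvolume, and extent sizes illustrative) writes BLOB files whose names begin with BLB to $DATA.LWBLOBS:

--blob-files $data.lwblobs.BLB extents=16,64,128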
--ca-root-certs <file-name>
The name of the file containing the certificates of all trusted root certificate authorities. If omitted, this value defaults to the CAROOT file that is provided with the LightWave Client software.
--ca-local-certs <file-name>
The name of a file containing the certificates of local trusted certificate authorities. If omitted, no local CA certificates are loaded.
--cert-no-verify
The presence of this option indicates that the CLIENT process should not validate the server certificate for common name, expiration date, or issuer when a secure connection is established. Note that this option should only be used in a development/test environment where the server may not necessarily pass all of the verification criteria. It should not be used in a production environment.
The client certificate to use for the secure connection. Specify the name of the file and, if required, the pass phrase needed to access the certificate. Specify the pass phrase in plain text, or the name of an existing LightWave Credentials file containing the encrypted pass phrase.
--default-encoding <encoding-name>
Specifies the default encoding to use for character string conversions for the API. Note that encoding settings applied to API method or data type definitions will override this setting. The <encoding-name> must be one of the names listed in Character Encoding Names. If omitted, the default encoding is ISO-8859-1.
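For example, assuming UTF-8 appears in the list of supported encoding names, the default could be changed with:

--default-encoding UTF-8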
--diag-log <config-spec> | +<diag-log-config-file> | <subvolume-name>
Enables diagnostic logging and specifies the subvolume where the logs are stored, the location of a log config file, or a string consisting of diagnostic log configuration options. Log files are named using the format DLnnnnnn where nnnnnn is a sequence number. The logs may be viewed using command line tools or from the LightWave Client Console if it is installed. See Diagnostic Log Configuration for information about diagnostic logging configuration files and config-spec options. Note that the <subvolume-name> option is available for compatibility with releases prior to 1.0.5. Using a config-file or config-spec is recommended.
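For example (configuration file name illustrative), diagnostic logging could be enabled using a log config file:

--diag-log +diagcfg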
--disable-sensitive-data-masking
When present, the sensitive data masking feature is disabled and fields marked as sensitive will be displayed in HTTP and diagnostic logs. This option should only be used during application development when sensitive data is not contained in message payloads.
The credentials required for the HTTP connection to the Web service host when HTTP Basic or Digest authentication is required. Specify either your userid and password (in plain text) or the name of an existing credentials file containing the encrypted userid and password. See Using Credentials Files for information about creating credentials files. Supplying "pre-auth" indicates that pre-authentication should be used. This causes Basic authentication credentials to be sent with every request, without waiting for an HTTP 401 response. Use of this option can improve performance on connections that use HTTP Basic authentication but has serious security implications for non-HTTPS (TLS) connections. The "pre-auth" option should not be used unless the security implications are fully understood.
--http-proxy-host <address[:port]>
The host name or IP address and port of an HTTP proxy that should be used for HTTP/HTTPS connections. If omitted, no proxy is used. If the port value is omitted, port 80 is used.
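For example (host and port illustrative):

--http-proxy-host proxy.example.com:8080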
--http-proxy-credentials { <userid>:<password> | +<credentials-file> }
The credentials required for the HTTP proxy. Specify either your userid and password (in plain text) or the name of an existing credentials file containing the encrypted userid and password. See Using Credentials Files for information about creating credentials files.
--http-request-timeout <milliseconds> [ ! ]
The number of milliseconds to wait for a web service request/response exchange to complete. If omitted, the default value of 60 seconds (60000 milliseconds) is used. If '!' is specified, this value will override any value set by the client application in the rq_timeout field of the LightWave request header. If '!' is not specified, and the client application specifies a timeout value in the rq_timeout field, the value in the rq_timeout field is used.
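For example, to allow 30 seconds per request/response exchange and override any application-supplied rq_timeout value:

--http-request-timeout 30000 !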
--license <file-name>
The name of an existing edit file containing the LightWave Client product license. If this option is omitted, the license file is located according to Product Licensing rules.
--log [ { <destination> | * } [ level [ format ] ] | +<log-config-file> ]
Specifies the process log location, the level, and the log event format, or the location of a log configuration file. The destination value may be a process name, a file name, or the asterisk (*) character. If the asterisk is used then the log output is directed to the home term of the process. The level value may be "error", "warning", "info", or "debug" and controls the type of information that is output to the log destination. The "error" level produces the least output while the "debug" level produces the most output. The format value may be "text", indicating that the log events should be output as text strings, or "event", indicating that the log events should be output in EMS event format. If omitted, the default is "--log * info text". See Using Configuration Files for information about logging configuration files.
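For example (destinations illustrative), log events could be written as text to the home terminal at debug level, or sent as EMS events to the $0 collector:

--log * debug text
--log $0 info event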
--monitor <option>[:<interval>] [ <option>[:<interval>] ] ...
Enables file monitoring and specifies the monitoring interval. If the interval is omitted, the default value is 15 seconds. The following files may be monitored: api, log, diag-log. See Using Configuration Files for information about monitoring log and diag-log configuration files.
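For example, to re-read the API definition every 30 seconds and the log configuration every 15 seconds:

--monitor api:30 log:15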
--standalone
The presence of this option causes the process to ignore close messages. In order to function properly in the Pathway environment, the CLIENT process, by default, matches open and close messages and terminates when all clients have closed the process. This option can be used to prevent the process from terminating when it is run as a standalone process.
--string-padding
Specifies a string padding value that will override the string padding setting in the API definition. The value may be 'zeros', 'spaces', or an integer in the range 0 - 255.
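For example, to pad strings with spaces regardless of the setting in the API definition:

--string-padding spaces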
--tcpip-bind-addr <ipv4-address>
Specifies an IPv4 address to bind to for TCP/IP connections. This option may be used to specify the IP address from which connections will originate when a TCPIP provider is configured with multiple IP addresses. If omitted, the default IP address for the TCPIP provider is used.
--tcpip-process <process name>
Specifies the name of the TCPIP process that the process should use. If omitted, the value of the =TCPIP^PROCESS^NAME MAP define is used if it exists, otherwise $ZTC0 is used.
Remarks
All command-line-option names and values are case-insensitive except where noted. If multiple occurrences of the same command line parameter are encountered, the setting of the last occurrence is used.
When the --log option format is set to event, EMS events will be sent to the output device with the following EMS Subsys ID:
Examples
Standalone Process
tacl> run CLIENT / name $lwc, nowait, term $zhome, cpu 0 / --standalone --api employee --base-url <url> --log $0 info
tacl> run CLIENT / name $lwc, nowait, term $zhome / @cmdfile

EDIT file cmdfile contents:

--standalone --api employee --base-url <url> --log $0 info
tacl> run CLIENT / name $lwc, nowait, term $zhome / --standalone --api employee --base-url <url> --api-param-username johnsmith --api-param-api-key 35bddf603d7b4cef9fbaf1689c1cd49e
tacl> run CLIENT / name $lwc, nowait, term $zhome / --standalone --api employee --base-url <url> --log +logcfg --monitor api:30 log:15
Pathway Server Class Using STARTUP Attribute
== Settings for linkdepth, maxservers, maxlinks, etc are examples, not recommendations.
reset server
set server cpus 0:1
set server createdelay 1 secs
set server deletedelay 120 secs
set server highpin on
set server linkdepth 10
set server maxservers 10
set server maxlinks 10
set server numstatic 0
set server program client
set server tmf off
set server debug off
set server startup "@cmdfile"
add server example

EDIT file cmdfile contents:

--api employee --base-url <url> --log +logcfg --monitor api:30 log:15
Pathway Server Class Using PARAMs
== Settings for linkdepth, maxservers, maxlinks, etc are examples, not recommendations.
reset server
set server cpus 0:1
set server createdelay 0 secs
set server deletedelay 15 secs
set server highpin on
set server linkdepth 5
set server maxservers 1
set server maxlinks 20
set server numstatic 0
set server program client
set server tmf off
set server debug off
set server param api "employee"
set server param base-url "<url>"
set server param diag-log "+logconf"
set server param log "+logconf"
set server param monitor "api:5 diag-log:5 log:5"
set server param tcpip-process "$ztc0"
add server example
Perpetual Protocol is like Bitmex meets Uniswap. It’s a decentralized perpetual contract protocol for every asset, made possible by a novel Virtual Automated Market Maker (vAMM) design. More details
PERP tokens allow community members to engage in governance and staking for the Perpetual Protocol. More details
The main options to get PERP tokens now is either through liquidity mining or Balancer Liquidity Bootstrapping Pool (LBP). We also have a trading competition coming in a few weeks.
For more details:
Liquidity Mining Proposal
Launch Trading Competition Introduction
7.5M PERP tokens will be available on Balancer LBP around Sep 9, 2020, at 6:00 am UTC, and the Balancer LBP will be live for approximately 3 days. Check here to know more.
The Balancer LBP is like a usual Balancer Pool with a very high start price and heavy selling pressure. If you go in too early like the first few hours, you have to take more downward price pressure and might get rekt. The high price at the beginning is to prevent front-runners from grabbing all the profit. Balancer has a detailed blog post about how it works.
Because of the dynamic weight change provide by the Balancer LBP, the weight between the two pools (PERP:USDC) will change from 90:10 to 30:70. Every time the weight changes, the price will be less than the previous price. It creates a downward pressure for the price during the Blanacer LBP period.
The trading experience is just like the usual Balancer Pool. If there are more people buying PERP, the price goes up. Otherwise, the price goes down. The only difference is the Balancer LBP makes it harder for the price to go up.
We will deploy PERP tokens on Balancer in two phases:
Phase 1:
We will create a PERP/USDC Balancer LBP with 7,500,000 PERP tokens and 1,333,333 USDC.
The Balancer LBP starts from block: 10,825,600 and ends at block: 10,846,450 (approximately 3 days). The start time is around Sep 9, 2020, at 6:00 am UTC.
The weights will change gradually from the start (PERP:USDC = 90:10) to the end (PERP:USDC = 30:70) during that period.
At the end of the Balancer LBP, the Balancer LBP will cease and Phase 2 will begin
Phase 2:
Using the last price and part of the proceeds from LBP, a new PERP/cUSDT BSP will be seeded.
Incentivizing LPs to provide liquidity to the BSP, they will receive part of the Perpetual Protocol’s inflation rewards in the beginning. Details will be announced soon.
There will be a step-by-step guide on how to acquire PERP tokens from Balancer LBP soon.
Here is a step by step example that shows how many PERP tokens will Alice and Bob get when they swap USDC at different weights of the pool.
As you can see, if there are fewer buying orders, the price will move downward by time, especially at the beginning. If there are enough buying orders around the same time, the price will go up (Alice's case). But once there are fewer buying orders after, the price will move downward by time again.
For more details, you can check this gsheet for reference. Please remember the chart in the first tab is the price chart without any buy/sell orders.
Yes, that's what we believe. I think the most obvious reason to use a Balancer LBP is that it's very hard for the bots and front-runners to take the profit in the first block.
The 2nd reason is by having weight changing during a period of time and let the price goes down, the prices are more evenly distributed than just putting the tokens on Uniswap. Comparing below two charts:
We use the same amount of PERP and USDC tokens in these two charts and the same buying patterns. By using the Balancer LBP, token prices are more evenly distributed and the ending price is lower.
We’ll release a tutorial before the pool is live, so please stay tuned.
It's impossible to grab all the allocations on a Balancer Pool by design. The more tokens the whale gets, the larger slippage they have to deal with.
In order to prevent people from front-running other participants and speculating on PERP tokens rather than using them for governance and staking, PERP tokens will start at a high price then go down quickly as the weights of the pool change. It helps prevent bots and front-runners.
🚨DO NOT PURCHASE PERP TOKENS TOO EARLY OR FOR ANY REASON OTHER THAN GOVERNANCE OR STAKING, OR YOU WILL GET REKT!🚨
Participants should not purchase PERP tokens when LBP starts, there will be a very large slippage to be dealt with.
If you want to submit a very large order, the best strategy for you is to divide the order into small chunks and spread them out over 3 days to average down the price.
Otherwise, you can just pick the price you want to enter and wait for some time. Don't come in in the first few hours. We have a google sheet in which you can make a copy and run some simulation on it.
Yes, any token supported on Balancer can be used to exchange for our token (but usually, the exchange rate will be worse due to multiple hops between different pools on Balancer).
It’s possible that during the three-day existence of the Balancer’s Liquidity Bootstrapping Pool (LBP), someone who has acquired our token and created a pool for people to buy it. But we expect there will be lots of scam pools on Uniswap. Therefore, it’s recommended only to get PERP from our LBP on Balancer.
Metamask, Portis, or any wallet that can be connected to a desktop Dapp through wallet connect should be fine. You can test if your wallet is supported now by going to Balancer’s website and swapping a token to another.
The goal of the Perpetual Protocol is to build a 1) permissionless protocol with 2) price discovery provided by a vAMM. We think it’s not aligned with our goal to have a whitelist and a fixed price.
Our team members are also very decentralized. Team members and advisors are from Europe, America, and Asia, core members are based in Taiwan, all have solid working experience in the blockchain industry like crypto exchanges, payment solution services, etc.
We have raised $1.8M in a strategic round led by Multicoin Capital with participation from Zee Prime Capital, Three Arrows Capital, CMS Holdings, LLC., and Alameda Research who is strategically partnered with FTX.
Other strategic angels include Binance Labs, Andrew Kang, George Lambeth, Calvin Liu, Tony Sheng, Alex Pack, and Regan Bozman. | https://docs.perp.fi/getting-started/token-distribution-faq | 2020-09-18T09:47:22 | CC-MAIN-2020-40 | 1600400187390.18 | [] | docs.perp.fi |
Installation¶
Compatibility¶
Numba is compatible with Python 3.6 or later, and Numpy versions 1.15 or later.
Our supported platforms are:
- Linux x86 (32-bit and 64-bit)
- Linux ppcle64 (POWER8)
- Windows 7 and later (32-bit and 64-bit)
- OS X 10.9 and later (64-bit)
- NVIDIA GPUs of compute capability 2.0 and later
- AMD ROC dGPUs (linux only and not for AMD Carrizo or Kaveri APU)
- ARMv7 (32-bit little-endian, such as Raspberry Pi 2 and 3)
- ARMv8 (64-bit little-endian, such as the NVIDIA Jetson)
Automatic parallelization with @jit is only available on 64-bit platforms.
Installing using conda on x86/x86_64/POWER Platforms¶
The easiest way to install Numba and get updates is by using
conda,
a cross-platform package manager and software distribution maintained
by Anaconda, Inc. You can either use Anaconda to get the full stack in one download,
or Miniconda which will install
the minimum packages required for a conda environment.
Once you have conda installed, just type:
$ conda install numba
or:
$ conda update numba
Note that Numba, like Anaconda, only supports PPC in 64-bit little-endian mode.
To enable CUDA GPU support for Numba, install the latest graphics drivers from
NVIDIA for your platform.
(Note that the open source Nouveau drivers shipped by default with many Linux
distributions do not support CUDA.) Then install the
cudatoolkit package:
$ conda install cudatoolkit
You do not need to install the CUDA SDK from NVIDIA.
Installing using pip on x86/x86_64 Platforms¶
Binary wheels for Windows, Mac, and Linux are also available from PyPI. You can install Numba using
pip:
$ pip install numba
This will download all of the needed dependencies as well. You do not need to have LLVM installed to use Numba (in fact, Numba will ignore all LLVM versions installed on the system) as the required components are bundled into the llvmlite wheel.
To use CUDA with Numba installed by pip, you need to install the CUDA SDK from NVIDIA. Please refer to Setting CUDA Installation Path for details. Numba can also detect CUDA libraries installed system-wide on Linux.
Enabling AMD ROCm GPU Support¶
The ROCm Platform allows GPU computing with AMD GPUs on Linux. To enable ROCm support in Numba, conda is required, so begin with an Anaconda or Miniconda installation with Numba 0.40 or later installed. Then:
Follow the ROCm installation instructions.
Install
roctoolsconda package from the
numbachannel:
$ conda install -c numba roctools
See the roc-examples repository for sample notebooks.
Installing on Linux ARMv7 Platforms¶
Berryconda is a
conda-based Python distribution for the Raspberry Pi. We are now uploading
packages to the
numba channel on Anaconda Cloud for 32-bit little-endian,
ARMv7-based boards, which currently includes the Raspberry Pi 2 and 3,
but not the Pi 1 or Zero. These can be installed using conda from the
numba channel:
$ conda install -c numba numba
Berryconda and Numba may work on other Linux-based ARMv7 systems, but this has not been tested.
Installing on Linux ARMv8 (AArch64) Platforms¶
We build and test conda packages on the NVIDIA Jetson TX2, but they are likely to work for other AArch64 platforms. (Note that while the Raspberry Pi CPU is 64-bit, Raspbian runs it in 32-bit mode, so look at Installing on Linux ARMv7 Platforms instead.)
Conda-forge support for AArch64 is still quite experimental and packages are limited, but it does work enough for Numba to build and pass tests. To set up the environment:
Install conda4aarch64. This will create a minimal conda environment.
Add the
c4aarch64and
conda-forgechannels to your conda configuration:
$ conda config --add channels c4aarch64 $ conda config --add channels conda-forge
Then you can install Numba from the
numbachannel:
$ conda install -c numba numba
On CUDA-enabled systems, like the Jetson, the CUDA toolkit should be automatically detected in the environment.
Installing from source¶
Installing Numba from source is fairly straightforward (similar to other Python packages), but installing llvmlite can be quite challenging due to the need for a special LLVM build. If you are building from source for the purposes of Numba development, see Build environment for details on how to create a Numba development environment with conda.
If you are building Numba from source for other reasons, first follow the llvmlite installation guide. Once that is completed, you can download the latest Numba source code from Github:
$ git clone git://github.com/numba/numba.git
Source archives of the latest release can also be found on
PyPI. In addition to
llvmlite, you will also need:
- A C compiler compatible with your Python installation. If you are using Anaconda, you can use the following conda packages:
- Linux
x86:
gcc_linux-32and
gxx_linux-32
- Linux
x86_64:
gcc_linux-64and
gxx_linux-64
- Linux
POWER:
gcc_linux-ppc64leand
gxx_linux-ppc64le
- Linux
ARM: no conda packages, use the system compiler
- Mac OSX:
clang_osx-64and
clangxx_osx-64or the system compiler at
/usr/bin/clang(Mojave onwards)
- Windows: a version of Visual Studio appropriate for the Python version in use
- NumPy
Then you can build and install Numba from the top level of the source tree:
$ python setup.py install
Build time environment variables and configuration of optional components¶
Below are environment variables that are applicable to altering how Numba would otherwise build by default along with information on configuration options.
NUMBA_DISABLE_OPENMP (default: not set)¶
To disable compilation of the OpenMP threading backend set this environment variable to a non-empty string when building. If not set (default):
- For Linux and Windows it is necessary to provide OpenMP C headers and runtime libraries compatible with the compiler tool chain mentioned above, and for these to be accessible to the compiler via standard flags.
- For OSX the conda packages
llvm-openmpand
intel-openmpprovide suitable C headers and libraries. If the compilation requirements are not met the OpenMP threading backend will not be compiled
NUMBA_DISABLE_TBB (default: not set)¶
To disable the compilation of the TBB threading backend set this environment variable to a non-empty string when building. If not set (default) the TBB C headers and libraries must be available at compile time. If building with
conda buildthis requirement can be met by installing the
tbb-develpackage. If not building with
conda buildthe requirement can be met via a system installation of TBB or through the use of the
TBBROOTenvironment variable to provide the location of the TBB installation. For more information about setting
TBBROOTsee the Intel documentation.
Dependency List¶
Numba has numerous required and optional dependencies which additionally may vary with target operating system and hardware. The following lists them all (as of July 2020).
Required build time:
setuptools
numpy
llvmlite
- Compiler toolchain mentioned above
Required run time:
setuptools
numpy
llvmlite
Optional build time:
See Build time environment variables and configuration of optional components for more details about additional options for the configuration and specification of these optional components.
llvm-openmp(OSX) - provides headers for compiling OpenMP support into Numba’s threading backend
intel-openmp(OSX) - provides OpenMP library support for Numba’s threading backend.
tbb-devel- provides TBB headers/libraries for compiling TBB support into Numba’s threading backend
Optional runtime are:
scipy- provides cython bindings used in Numba’s
np.linalg.*support
tbb- provides the TBB runtime libraries used by Numba’s TBB threading backend
jinja2- for “pretty” type annotation output (HTML) via the
numbaCLI
cffi- permits use of CFFI bindings in Numba compiled functions
intel-openmp- (OSX) provides OpenMP library support for Numba’s OpenMP threading backend
ipython- if in use, caching will use IPython’s cache directories/caching still works
pyyaml- permits the use of a
.numba_config.yamlfile for storing per project configuration options
colorama- makes error message highlighting work
icc_rt- (numba channel) allows Numba to use Intel SVML for extra performance
pygments- for “pretty” type annotation
gdbas an executable on the
$PATH- if you would like to use the gdb support
- Compiler toolchain mentioned above, if you would like to use
pyccfor Ahead-of-Time (AOT) compilation
r2pipe- required for assembly CFG inspection.
radare2as an executable on the
$PATH- required for assembly CFG inspection. See here for information on obtaining and installing.
graphviz- for some CFG inspection functionality.
pickle5- provides Python 3.8 pickling features for faster pickling in Python 3.6 and 3.7.
To build the documentation:
sphinx
pygments
sphinx_rtd_theme
numpydoc
makeas an executable on the
$PATH
Checking your installation¶
You should be able to import Numba from the Python prompt:
$ python Python 3.8.1 (default, Jan 8 2020, 16:15:59) [Clang 4.0.1 (tags/RELEASE_401/final)] :: Anaconda, Inc. on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import numba >>> numba.__version__ '0.48.0'
You can also try executing the
numba --sysinfo (or
numba -s for short)
command to report information about your system capabilities. See Command line interface for
further information.
$ numba -s System info: -------------------------------------------------------------------------------- __Time Stamp__ 2018-08-28 15:46:24.631054 __Hardware Information__ Machine : x86_64 CPU Name : haswell CPU Features : aes avx avx2 bmi bmi2 cmov cx16 f16c fma fsgsbase lzcnt mmx movbe pclmul popcnt rdrnd sse sse2 sse3 sse4.1 sse4.2 ssse3 xsave xsaveopt __OS Information__ Platform : Darwin-17.6.0-x86_64-i386-64bit Release : 17.6.0 System Name : Darwin Version : Darwin Kernel Version 17.6.0: Tue May 8 15:22:16 PDT 2018; root:xnu-4570.61.1~1/RELEASE_X86_64 OS specific info : 10.13.5 x86_64 __Python Information__ Python Compiler : GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final) Python Implementation : CPython Python Version : 2.7.15 Python Locale : en_US UTF-8 __LLVM information__ LLVM version : 6.0.0 __CUDA Information__ Found 1 CUDA devices id 0 GeForce GT 750M [SUPPORTED] compute capability: 3.0 pci device id: 0 pci bus id: 1
(output truncated due to length) | https://numba.readthedocs.io/en/stable/user/installing.html | 2020-09-18T09:45:45 | CC-MAIN-2020-40 | 1600400187390.18 | [] | numba.readthedocs.io |
1 Introduction
You must be a Company Admin to access this page and these settings.
There are three tabs on the Users page with settings to manage:
2 Users Tab
On the Users tab, you can view, deactivate, and activate users.
You can also create reports about your company’s users by clicking Create Report, view the apps of a user by clicking Show Apps, and reset a user’s password by clicking Reset Password.
2.1 Deactivating Users. For more information, see the Merging Your Accounts of Mendix Profile.
Before deactivating a user, make sure the following points are true for that user:
- They do not have a Company Contact, App Contact, or Technical Contact role
- They are not the only SCRUM Master in an App Team
- They are not involved in unsolved support tickets with Mendix Support
For more information, see How to Manage Company & App Roles and Company & App Roles.
To deactivate a user, follow these steps:
Select the check box for the user(s) you want to deactivate, then click Activate / Deactivate user.:
In the pop-up window that appears, confirm your decision by clicking Deactivate member(s):
The deactivated user will become inactive and will immediately disappear from the list of users on this tab. If you click Filter and select Inactive, you will see the deactivated users.
You can only deactivate a user. It is not possible to delete a user completely.
2.2 Activating Users
To activate an inactive user, follow these steps:
Click Filter and select Inactive to see the list of company users extended with inactive (deactivated) users:
Select the inactive user and click Activate / Deactivate user.
In the pop-up window that appears, click Activate accounts.
2.3 Creating a Report
Click Create Report to create a report about users active in your company. These are users who are either members of the company or members of an app owned by the company.
You have the following report options:
- Export users – this report will return a list of users who are active in your company
- Export permissions – this report will return a list of permissions for users active in your company’s apps
You can export these reports by clicking Export to Excel. Note that the exports will contain further details in addition to those shown on the screen.
3 Security Groups Tab
This tab lists the security groups defined for your company. Users who are assigned to security groups are automatically granted access to specified AppCloud apps.
You can perform the following actions on this tab:
- Add and Remove security groups
Click Details to edit a security group and do the following:
- Under Users, you can Add users to and Remove users from the group
- Under Access To, you can up the security group’s access to specific apps (via the Add and Remove buttons)
- Members of this security group will be granted access to these apps automatically
- It is only possible to create access policies for licensed AppCloud-enabled apps
- Under Select Environment, you can select a specific node environment for the app
- Under Select Role(s), you can select specific user roles for the app
4 Security History Tab
On this tab, you can view all the changes made in the company, such as Password reset requested and Account activated. | https://docs.mendix.com/developerportal/company-app-roles/users | 2019-05-19T10:53:14 | CC-MAIN-2019-22 | 1558232254751.58 | [array(['attachments/user-settings.png', None], dtype=object)] | docs.mendix.com |
available products..
Templates properties
Templates
If the entity that is connected to the list view has specializations, you can optionally specify templates for each specialization. For each row in the list view the most specific template is shown. The different templates can be selected by clicking the extra header that appears when a specialization template is added.
Let us say you have an entity Vehicle and two specializations thereof: Bicycle and Car. And there is a specialization of Car called SportsCar. You create a list view that is connected to Vehicle. With the templates property of the list view you specify what template to show for arbitrary Vehicles. For the specializations Bicycle and Car you create separate templates to show them.
Now if there is a row of type Bicycle the template specific for bicycles will be shown. A row of type Car will be shown in the template for Car. A row of type SportsCar is shown in the template for Car. There is no template specific for sports cars (in this example) and Car is the ‘closest’ generalization for which there is a template.. | https://docs.mendix.com/refguide5/list-view | 2019-05-19T10:24:36 | CC-MAIN-2019-22 | 1558232254751.58 | [] | docs.mendix.com |
What Sqreen detects and protects you from¶ Vulnerabilities¶ Injections (OWASP A1)¶ Sqreen can detect and prevent the execution of the most critical injection based vulnerabilities. SQL injection (SQLi). NoSQL injection (NoSQLi). Command injection. Local File Inclusion (LFI). Cross-site scripting - XSS (OWASP A3)¶ Sqreen can detect and prevent the execution of reflected XSS on the server side. On top of that, Sqreen can help you craft and deploy a Content Security Policy (CSP) and set the X-XSS-Protection browser header. Components with known vulnerabilities (OWASP A9)¶ Sqreen can alert you when the libraries used by the application contain known vulnerabilities. Additionally, Sqreen can detect and block Shellshock based attacks. Client-side (browser)¶ Sqreen enables you to set up various browser security headers, covering the following vulnerabilities: Click jacking (X-Frame-Options) MIME sniffing (Mime-content-type) Attacks targetting users (OWASP A2)¶ Account takeovers¶ Sqreen can detect and block Account Takeovers attacks performed using brute-force or credentials stuffing. Account farming¶ Sqreen can detect and block IPs creating too many accounts at once. Those accounts are often used for fraudulent purposes like phishing, posting fraudulent content, and so on. Suspicious activities¶ Sqreen can detect the following suspicious activities performed by the application's users: DarkNet/TOR or VPNs connections. Suspicious geo-locations. IP & email reputation. Simultaneous locations. | https://docs.sqreen.com/protection/vulnerabilities-attacks-covered/ | 2019-05-19T11:51:21 | CC-MAIN-2019-22 | 1558232254751.58 | [] | docs.sqreen.com |
JsonDataSource Wizard
The JSON Data Source Wizard allows you to create a new or edit an existing JsonDataSource component based on several settings. After the wizard appears you have to perform the following steps:
Choose a JSON Source
Choose between external file or inline string.
Optionally use the data selector to query and filter the JSON data.
The data selector is a JSONPath string which will be used to query the data. For more information please refer to How to: Use JSONPath to filter JSON data.
Preview Data Source Results
Preview the result set returned by the data source. | https://docs.telerik.com/reporting/jsondatasource-wizard | 2019-05-19T11:01:31 | CC-MAIN-2019-22 | 1558232254751.58 | [] | docs.telerik.com |
This chapter describes the standard Teradata-supplied character translation codes that support the use of the corresponding client character sets. Teradata does not supply any character sets for installation on a client.
Note: Some client character sets are not supported with some Teradata client applications and drivers. For information on supported client character sets, see the user guide for the client application or driver you want to use. | https://docs.teradata.com/reader/yKxpuYv1DGjVp_g62SgwBw/tJ6ZoFPXf4oUy7lK2vF__g | 2019-05-19T10:37:37 | CC-MAIN-2019-22 | 1558232254751.58 | [] | docs.teradata.com |
NAME
This..
- gpl
The distribution is distributed under the terms of the GNU General Public License version 2 ().
- lgpl
The distribution is distributed under the terms of the GNU Lesser General Public License version 2 ().
- artistic
The distribution is licensed under the Artistic License version 1, as specified by the Artistic file in the standard perl distribution ().
- bsd
The distribution is licensed under the BSD 3-Clause
modules (which can also mean a collection of modules), but some things are
scripts.
- requires
March 14, 2003 (Pi day) - created version 1.0 of this document.
May 8, 2003 - added the "dynamic_config" field, which was missing from the initial version. | http://docs.activestate.com/activeperl/5.22/perl/lib/CPAN/Meta/History/Meta_1_0.html | 2019-05-19T11:16:34 | CC-MAIN-2019-22 | 1558232254751.58 | [] | docs.activestate.com |
File:Cal calendars used.png
Size of this preview: 337 × 599 pixels. Other resolutions: 135 × 240 pixels | 720 × 1,280 pixels.
Original file (720 × 1,280 pixels, file size: 114 KB, MIME type: image/png)
Select calendars for Callproof to use.
File history
Click on a date/time to view the file as it appeared at that time.
- You cannot overwrite this file.
File usage
The following page links to this file: | http://docs.callproof.com/index.php/File:Cal_calendars_used.png | 2019-05-19T10:27:20 | CC-MAIN-2019-22 | 1558232254751.58 | [array(['/thumb.php?f=Cal_calendars_used.png&width=337',
'File:Cal calendars used.png'], dtype=object) ] | docs.callproof.com |
Building GRUB2 for diferent platfoms¶
Since DRLM version 2, we moved to GRUB2 to provide the netboot images to start ReaR recovery images from network. This movement was the first step to provide support for mulitple platforms for GNU/Linux because GRUB2 supports multiple architerctures.
At this time DRLM built packages include all documented platforms in this guide.
Prepare your build host¶
Note
This document describes the process of building DRLM GRUB2 netboot images for diferent platforms with a debian machine. The process should be the same on other distros, just adjusting package dependecies for target distro and install them with the package management tools provided by each distro should work without problems.
Install required packages¶
$
Start build process¶
Warning
All documented grub2 image builds are included in drlm packages, this document will be a kind of guide for troubleshooting and testing on new GRUB2 versions and also a guide to, contributors of future drlm grub2 images, on new supported platforms to the project.
Provide DRLM branded GRUB2 build¶
$ vi grub-core/normal/main.c .. replace: msg_formatted = grub_xasprintf (_("GNU GRUB version %s"), PACKAGE_VERSION); .. with: msg_formatted = grub_xasprintf (_("DRLM Boot Manager (GNU GRUB2)"), PACKAGE_VERSION);
Prepare your build environment:¶
$ ./autogen.sh
On next steps we will proceed with configuration and build for each platform needed.
For i386-pc:¶
$ ./configure --disable-werror $ make && make install $ /usr/local/bin/grub-mknetdir -d /usr/local/lib/grub/i386-pc --net-directory=/tmp Netboot directory for i386-pc created. Configure your DHCP server to point to /tmp/boot/grub/i386-pc/core.0
For 32-bit EFI:¶
$ ./configure --with-platform=efi --target=i386 --disable-werror $ make && make install $ /usr/local/bin/grub-mknetdir -d /usr/local/lib/grub/i386-efi --net-directory=/tmp Netboot directory for i386-efi created. Configure your DHCP server to point to /tmp/boot/grub/i386-efi/core.efi
For 64-bit (U)EFI:¶
$ ./configure --with-platform=efi --target=x86_64 --disable-werror $ make && make install $ /usr/local/bin/grub-mknetdir -d /usr/local/lib/grub/x86_64-efi --net-directory=/tmp Netboot directory for x86_64-efi created. Configure your DHCP server to point to /tmp/boot/grub/x86_64-efi/core.efi
Create a tarball with targeted platform netboot image¶
$ cd /tmp $ tar -cvzf drlm_grub2_<target>-<platform>.tar.gz boot/
Note
This gzipped tarball can be extracted to DRLM $STORDIR on your DRLM server, for testing purposes or to provide support to new platforms not yet provided by DRLM package builds. | http://docs.drlm.org/en/2.2.1/building_grub2.html | 2019-05-19T11:25:02 | CC-MAIN-2019-22 | 1558232254751.58 | [] | docs.drlm.org |
Testcase to check if the purchase order generation in procurement candidates is not grouped by user.
Note: RfQ in general is documented in Testcase_Fresh-402
Make sure the Set Up for your testing is ok according to the testcase above (emails etc.)
Create an RfQ Topic (Ausschreibungs-Thema), with each of G000X’s users set in tab Subscribers | http://docs.metasfresh.org/tests_collection/testcases/Testcase_FRESH-676.html | 2019-05-19T11:21:59 | CC-MAIN-2019-22 | 1558232254751.58 | [] | docs.metasfresh.org |
The entire structure is setup in inRiver and imported into Litium including field types, templates, relationships and image mapping. The Connector itself only needs configuring.
Everything is setup according to our integration best practice. For example, prices and stockbalance are always imported directly from the ERP.
Please note that the Litium inRiver connector automatically works with the on-premise version of inRiver. As of today, it does not work automatically with the cloud version of inRiver.
Litium Studio Connector exports enriched products from inRiver for publishing in Litium.
Integrating inRiver PIM directly with Litium streamlines the product management processes and optimizes product information timeliness. Using the inRiver Connector decreases development time as well as reduces project risk.
About Litium
Join the Litium team
Support | https://docs.litium.com/add-ons/connectors/inriver-connector-6-1-2 | 2019-05-19T11:11:14 | CC-MAIN-2019-22 | 1558232254751.58 | [] | docs.litium.com |
Do not raise exceptions in filter blocks
Cause
A method contains a filtered exception handler, and the filter raises exceptions.
Rule Description
When an exception filter raises an exception, the exception is caught by the common language runtime, and the filter returns false. This behavior is indistinguishable from the filter executing and returning false intentionally.
Currently, Visual Basic .NET and the Visual C++ support exception filters.
How to Fix Violations
To fix a violation of this rule, do not raise exceptions in an exception filter.
When to Exclude Warnings
Do not exclude a warning from this rule. There are no scenarios under which an exception raised by an exception filter provides a benefit to the executing code.
Example
The following example shows a type that violates this rule. Note that the filter appears to return false when it throws an exception.
Imports System Module ExceptionFilter Public Class FilterTest ' The following function is used as an exception filter. ' Violates rule: DoNotRaiseExceptionsInFilterBlocks. Public Function AlwaysTrueFilter(label as String) as Boolean Console.WriteLine("In filter for {0}.", label) ' The following exception terminates the filter. ' The common language runtime does not return the exception to the user. Throw New ApplicationException("Filter generated exception.") Return True End Function Public Sub ThrowException() Try Throw New ApplicationException("First exception.") ' Because the filter throws an exception, this catch clause fails to catch the exception. Catch e as ApplicationException When Not AlwaysTrueFilter("First") Console.WriteLine("Catching first filtered ApplicationException {0}", e) ' Because the previous catch fails, this catch handles the exception. Catch e as ApplicationException Console.WriteLine("Catching any ApplicationException - {0}", e.Message) End Try ' The behavior is the same when the filter test is reversed. Try Throw New ApplicationException("Second exception.") ' Change the filter test from When Not to When. ' This catch fails. Catch e as ApplicationException When AlwaysTrueFilter("Second") Console.WriteLine("Catching second filtered ApplicationException {0}", e) ' This catch handles the exception. Catch e as ApplicationException Console.WriteLine("Catching any ApplicationException - {0}", e.Message) End Try End Sub End Class Sub Main() Dim test as FilterTest = New FilterTest() test.ThrowException() End Sub End Module
The example produces the following output.
Output
In filter for First. Catching any ApplicationException - First exception. In filter for Second. Catching any ApplicationException - Second exception. | https://docs.microsoft.com/en-us/previous-versions/dotnet/netframework-3.0/ms182337(v=vs.80) | 2019-05-19T10:32:50 | CC-MAIN-2019-22 | 1558232254751.58 | [] | docs.microsoft.com |
public class CoverageIO extends Object
CoverageAccesss and specific
CoverageSources, and performing simple encoding and decoding.
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
public static boolean canConnect(Map<String,Serializable> params)
If this datasource requires a number of parameters then this method should check that they are all present and that they are all valid. If the datasource is a file reading data source then the extensions or mime types of any files specified should be checked. For example, a Shapefile datasource should check that the url param ends with shp, such tests should be case insensitive.
params- The full set of information needed to construct a live data source.
public static CoverageAccess connect(Map<String,Serializable> params, Hints hints, ProgressListener listener) throws IOException
IOException
public static CoverageAccess connect(Map<String,Serializable> params) throws IOException
IOException
public static Set<Driver> getAvailableDrivers()
Driverwhich have registered using the services mechanism, and that have the appropriate libraries on the class-path.
Setof all discovered drivers which have registered factories, and whose available method returns true.
public static void scanForPlugins()
public static Driver[] getAvailableDriversArray()
Driverimplementations.
It can be used together basic information about all the available
GridCoverage
plugins. Note that this method finds all the implemented plugins but returns only the
available one.
A plugin could be implemented but not available due to missing dependencies.
Driverimplementations.
public static Set<Driver> findDrivers(URL url)
url- is the object to search a
Driverthat is able to read
Setcomprising all the
Driverthat can read the
URLurl.
public static Driver findDriver(URL url)
Driverthat is able to read a certain object. If no
Driveris able to read such an
Objectwe return an null object.
url- the object to check for acceptance.
Driverthat has stated to accept this
URLo or
nullin no plugins was able to accept it. | http://docs.geotools.org/stable/javadocs/org/geotools/coverage/io/impl/CoverageIO.html | 2017-03-23T04:23:08 | CC-MAIN-2017-13 | 1490218186774.43 | [] | docs.geotools.org |
.
Installation¶
Available through Python Package Index:
$ pip install neo4jrestclient
Or the old way:
$ easy_install neo4jrestclient
You can also install the development branch:
$ pip install git+))
Authentication¶
Authentication-based services like Heroku_ are also supported by passing extra parameters:
>>>>> gdb = GraphDatabase(url, username="username", password="password")
And when using certificates (both files must be in PEM_ format):
>>> gdb = GraphDatabase(url, username="username", password="password", cert_file='path/to/file.cert', key_file='path/to/file.key') | http://neo4j-rest-client.readthedocs.io/en/latest/info.html | 2017-03-23T04:13:35 | CC-MAIN-2017-13 | 1490218186774.43 | [] | neo4j-rest-client.readthedocs.io |
4. OpenQuake Platform connection settings¶
Some of the functionalities provided by the plugin, such as the ability to work with GEM data, require the interaction between the plugin itself and the OpenQuake Platform (OQ-Platform). The OQ-Platform is a web-based portal to visualize, explore and share GEM’s datasets, tools and models. In the Platform Settings dialog displayed in Fig. 4.1, credentials must be inserted to authenticate the user and to allow the user to log into the OQ-Platform. In the Host field insert the URL of GEM’s production installation of the OQ-Platform or a different installation if you have URL access. If you still haven’t registered to use the OQ-Platform, you can do so by clicking Register to the OQ-Platform. This will open a new web browser and a sign up page. The checkbox labeled Developer mode (requires restart) can be used to increase the verbosity of logging. The latter is useful for developers or advanced users because logging is critical for troubleshooting, but it is not recommended for standard users. | http://docs.openquake.org/oq-irmt-qgis/v1.7.9/04_connection_settings.html | 2017-03-23T04:12:55 | CC-MAIN-2017-13 | 1490218186774.43 | [] | docs.openquake.org |
General requires correct cluster-safe caching in Confluence 5.4 will need to find an alternate solution to that problem. | https://docs.atlassian.com/atlassian-cache-compat-tests/1.1/atlassian-cache-compat-tests/ | 2017-03-23T04:49:40 | CC-MAIN-2017-13 | 1490218186774.43 | [] | docs.atlassian.com |
Adding Covers
THIS EDIT IS IN PROGRESS - STILL NEEDS IMAGES AND ADDITIONAL TEXT
For more information on covers, please see the Covers FAQ.
An Illustrated OI Tutorial
Contents
Logging in
Log into the system on the front page by using the username/email and password...
Adding Covers Step-by-step
1. Choose an issue to add a cover to Search at the website for the comic you want to submit a cover for. Each series has a section called "Cover Status". If the cover you want to submit isn't on the website, the Cover Status should show that issue as white. If you want to replace an existing cover, please see Replacing Covers Step-by-step below. If you want to add an additional cover to an issue that currently has a cover, see Adding Additional Covers Step-by-step below.
2. Click the number of the issue You will be taken to a page asking for the name of the file to upload. Click the "Browse" button to find the cover on your local computer.
3. Source If you have gotten the cover from an external source (please ask if this is OK) mention it in 'Source'.
4. Remember the source
5. Mark cover for replacement If you believe the cover is in poor condition and does not meet the standards listed on the upload page, tick the mark (such as in case of a bad scan for a rare comic).
6. Wraparound cover If the cover is a single image that wraps around to both sides of the comic, tick the mark.
7. Gatefold cover If the cover is a gatefold cover, folded into the comic, tick the mark.
Per a vote of 2010-11-11, when a gatefold or wraparound cover exists for a publication, only the full version of that cover is considered a proper or 'best' cover image. If the contribution of a full gatefold or wraparound cover is not possible due to technical limitations on the contributor's end, the contributor MUST [1] note in the Comments field that only the front portion of an extended cover is being uploaded and [2] mark the cover for replacement. Gatefold or wraparound covers that are currently in the database in miniature, as well as front-only portions of such covers, should be marked for replacement by users when noticed. This policy only applies to publications ONLY with sequences (most commonly illustrations) that span front and back portions of a cover, including fold-out portions in the case of gatefold covers -- NOT to publications with discrete content on what are commonly separated as front and back covers.
8. Comments Add any information you wish to communicate to the editor. Examples are details about the source or condition, or questions you want to ask the editor.
9. Upload When you have all the information ready, click the "Upload" button.
If for some reason you can't submit covers through this page, feel free to contact the editors via the Error Tracker at . Covers can be attached to the error report after it is submitted or, alternatively, one of the editors could contact you with an email address that the cover can be sent to.
Adding Additional Covers Step-by-step
Variant covers can be submitted via the 'Edit cover' link present on the issue page, the 'Edit covers' link on the large cover view, and the 'Add/replace cover' link under each cover on a series cover gallery page.
For adding a flip cover, you simply hit the "Add additional cover (dust jacket, flip cover)" button from the "Edit Covers" page you find at the bottom of a cover view page.
Moving Covers Step-by-step
For moving a cover from one issue to another use the "Edit with another issue" button instead of the "Edit" button. On the following page one can select/find the issue to/from which the cover should be moved. The selection works by searching for an issue, or choosing the issue which was 'remembered' before, or entering the issue ID (the final digits in the URL of an issue, for example from you will need "154638"). Besides moving covers by pressing the button "Move covers between both issues", you can make other changes to the issues before submitting as usual, if desired. | http://docs.comics.org/wiki/Adding_Covers | 2017-03-23T04:19:38 | CC-MAIN-2017-13 | 1490218186774.43 | [array(['/images/thumb/5/5e/Login_box.jpg/150px-Login_box.jpg',
'Login box.jpg'], dtype=object)
array(['/images/thumb/0/0f/Searchbar.jpg/700px-Searchbar.jpg',
'Searchbar.jpg'], dtype=object) ] | docs.comics.org |
Installation & Setup
Thank you for choosing Savoy for your new project! We have created a short video to show you how to install the theme and (optionally) import its demo content:
Note: If the video is blurry, click the cogwheel icon and select a higher resolution.
To start the theme's setup wizard manually, navigate to the Appearance → Theme Setup page in the WordPress admin.
Troubleshooting
Having trouble installing the theme? The links below might be helpful: | http://docs.nordicmade.com/savoy/ | 2019-01-16T06:59:52 | CC-MAIN-2019-04 | 1547583656897.10 | [] | docs.nordicmade.com |
Being able to manage people and groups remotely is useful in different test scenarios, development environments and, for example, when group permissions should be set up on folders.
However, it should be noted that in a production environment Alfresco is usually connected to an LDAP system and users (people) and groups are then synchronized (imported) from the LDAP system, including group memberships. | https://docs.alfresco.com/6.2/concepts/dev-api-by-language-alf-rest-manage-people-groups-intro.html | 2020-11-24T04:33:12 | CC-MAIN-2020-50 | 1606141171077.4 | [] | docs.alfresco.com |
Controlling Access to Objects
Each object class in Alloy Navigator
Create — allows role members to create objects.
Delete — allows role members to delete objects.
NOTE: Several object classes in Alloy Navigator
Expresscan have approval steps in their lifecycle. In order to enable technicians to delete or modify Approval Stages and Approval Requests you must also grant them the Modify permission on such objects.
Modify — allows role members to modify objects.
NOTE: Granting the Modify permission on Products will also enable a technician to create, modify, and delete Vendor Products.
IMPORTANT: We recommend that all modifications of objects in Alloy Navigator
Expressare always implemented through Actions. The Modify permission should be granted only to administrators who have a good understanding of how direct modifications may affect the system. For details, see Controlling Availability of Actions.
View — allows role members to browse and view objects. The View permission also controls the ability to view commands for accessing the module that houses those objects. For example, technicians without the View permission on
Ticketswill see neither the link for accessing Ticketsin the Sidebar nor the Ticketscommand in the Go menu in their Desktop App and Web App, and will be unable to configure My Calendar to view Tickets.
NOTE: In order to enable technicians to view Approval Stages and Approval Requests, you must also grant the View permission on the primary object class in the approval workflow, e.g. on Change Requests.
NOTE: Granting the View permission on Products will also enable technicians to view Vendor Products.
Service Desk > Ticket >Manage Activities— allows role members to modify and delete activity records for Tickets.
Service Desk > Announcement > Announcement Management— a special permission for Announcements. The Announcement Management access permission implicitly includes the Create, Delete, Modify, and View permissions for viewing and managing Announcements.
IT Assets > Consumable > Manage Rules— a special permission for Threshold Notification Rules (their lifecycle is not controlled through workflow). The Manage Rules access permission grants access to the Consumable Management module and implicitly includes the Create, Delete, Modify, and View permissions for viewing and managing Threshold Notification Rules.
Some special user access permissions are grouped under Miscellaneous:
Report — the Create, Delete, Modify, and View permissions for Reports allow role members to create, delete, modify reports and report folders, view the list of reports and generate (run) reports.
IMPORTANT: In order to enable technicians to generate reports, you must additionally grant the View permission on objects contained in those reports (on
Tickets, Computers, Consumables, etc.). Otherwise, these reports will be unavailable for users. For details on reports, see Help: Reports.
Customer Satisfaction Rating — these permissions control access to rating information for
Ticketscollected from Self Service Portal customers. The View All Ratings permission allows role provides the ability to view star ratings and comments for all Tickets. The View Own Ratings permission works similarly, however the scope of visible ratings and comments is limited to Tickets where the person is the Assignee.
NOTE: In order to collect rating information from customers, you must create and maintain a customer satisfaction survey. For details, see The page has been moved to here.
Reference Tables — this is a special group for the Management permission for objects whose lifecycle is not controlled through workflow, i.e.
Manufacturers, Networks,and Company Addresses. The Management access permission implicitly includes View, Add, Modify, and Delete permissions for viewing and managing Manufacturers, Networks,and Company Addresses. | https://docs.alloysoftware.com/alloynavigatorexpress/8/docs/adminguide/adminguide/account-administration/understanding-security-roles/controlling-access-to-objects.htm | 2020-11-24T04:19:41 | CC-MAIN-2020-50 | 1606141171077.4 | [] | docs.alloysoftware.com |
, you should ensure the cluster is healthy. To check the health of your cluster:
Using the Cloud Foundry Command Line Interface (cf CLI), target the API endpoint of your Ops Manager Ops Manager
UAA-ACCESS-TOKENis the access token you recorded in the previous step.
In the response to the above request, locate the product with an
installation_namestarting with
cf-and copy its
guid.
Run:
curl "" \ -X GET \ -H "Authorization: Bearer UAA-ACCESS-TOKEN"
Where:
OPS. Ops Manager Examples
This section describes two sizing examples for internal MySQL in PAS. Use this data as guidance to ensure your MySQL clusters are scaled to handle the number of app instances running on your deployment.
Example 1: Pivotal Web Services Production Environment
The information in this section comes from Pivotal Web Services (PWS).
Note: This deployment differs from most Ops Manager: Diego Test Environment
The information in this section comes from an environment used: | https://docs.pivotal.io/application-service/2-10/operating/internal-databases.html | 2020-11-24T04:31:14 | CC-MAIN-2020-50 | 1606141171077.4 | [] | docs.pivotal.io |
Our Pop-Ups Pro widget is the complete solution for creating custom and powerful popup modals. Great for engaging site visitors to promote sales, newsletter signups, event announcements and more.
Several initialization modes allow pop-ups to be launched automatically (with custom timing), opened via custom button, or triggered by scroll position.
An extensive feature set provides loads of options for creating the ultimate popup. Additional features include pop-up animation defaults, visit tracking, and full custom control of pop-up content via graphic styles.
CREATING A POP-UP
POP-UP CONFIGURATION
If you find that the Disable Pop-Up On Mobile setting is grayed out, enable the Trigger Pop-Up On Page Load setting in the Pop-Up Trigger section. Mobile pop-ups may only be disabled in this mode.
No commonly asked questions
No known issues or conflicts | http://docs.muse-themes.com/widgets/pop-ups-pro | 2020-11-24T03:48:35 | CC-MAIN-2020-50 | 1606141171077.4 | [] | docs.muse-themes.com |
Monitoring server usage
You can monitor activity in your server using Amazon CloudWatch and AWS CloudTrail. For further analysis, you can also record server activity as readable, near real-time metrics.
Topics
Enable AWS CloudTrail logging
You can monitor AWS Transfer Family API calls using AWS CloudTrail. By monitoring API calls, you can get useful security and operational information. For more information about how to work with CloudTrail and AWS Transfer Family, see Logging and monitoring in AWS Transfer Family.
If you have Amazon S3 object level
logging enabled,
RoleSessionName is contained in
principalId as
[AWS:Role Unique
Identifier]:username.sessionid@server-id. For more information about
AWS Identity and Access Management (IAM) role unique identifiers, see Unique
identifiers in the AWS Identity and Access Management User Guide.
The maximum length of the
RoleSessionName is 64 characters. If the
RoleSessionName is longer, the
will be truncated.
server-id
Logging Amazon S3 API calls to S3 access logs
If you are using Amazon S3
access logs to identify S3 requests made on behalf of your file transfer
users,
RoleSessionName is used to display which IAM role was assumed to
service the file transfers. It also displays additional information such as the user
name, session id, and server-id used for the transfers. The format is
[AWS:Role
Unique Identifier]:username.sessionid@server-id and is contained in
principalId. For more information about IAM role unique identifiers,
see Unique
identifiers in the AWS Identity and Access Management User Guide.
Log activity with CloudWatch
To set access, you create a resource-based IAM policy and an IAM role that provides that access information.
To enable Amazon CloudWatch logging, you start by creating an IAM policy that enables CloudWatch logging. You then create an IAM role and attach the policy to it. You can do this when you are creating a server or by editing an existing server. For more information about CloudWatch, see What Is Amazon CloudWatch? and What is Amazon CloudWatch Logs? in the Amazon CloudWatch User Guide.
To create an IAM policy
Use the following example policy to create your own IAM policy that allows CloudWatch logging. For information about how to create a policy for AWS Transfer Family, see Create an IAM role and policy.
{ "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": [ "logs:CreateLogStream", "logs:DescribeLogStreams", "logs:CreateLogGroup", "logs:PutLogEvents" ], "Resource": "arn:aws:logs:*:*:log-group:/aws/transfer/*" } ] }
You then create a role and attach the CloudWatch Logs policy that you created.
To create an IAM role and attach a policy
In the navigation pane, choose Roles, and then choose Create role.
On the Create role page, make sure that AWS service is chosen.
Choose Transfer from the service list, and then choose Next: Permissions. This establishes a trust relationship between AWS Transfer Family and the IAM role.
In the Attach permissions policies section, locate and choose the CloudWatch Logs policy that you just created, and choose Next: Tags.
(Optional) Enter a key and value for a tag, and choose Next: Review.
On the Review page, enter a name and description for your new role, and then choose Create role.
To view the logs, choose the Server ID to open the server configuration page, and choose View logs. You are redirected to the CloudWatch console where you can see your log streams.
On the CloudWatch page for your server, you can see records of user authentication
(success
and failure), data uploads (
PUT operations), and data downloads
(
GET operations).
Using CloudWatch metrics for Transfer Family
You can get information about your server using CloudWatch metrics. A metric represents a time-ordered set of data points that are published to CloudWatch. When using metrics, you must specify the Transfer Family namespace, metric name, and dimension. For more information about metrics, see Metrics in the Amazon CloudWatch User Guide.
The following table describes the CloudWatch metrics for Transfer Family. These metrics are measured in 5-minute intervals.
Transfer Family dimensions
A dimension is a name/value pair that is part of the identity of a metric. For more information about dimensions, see Dimensions in the Amazon CloudWatch User Guide.
The following table describes the CloudWatch dimension for Transfer Family. | https://docs.aws.amazon.com/transfer/latest/userguide/monitoring.html | 2020-11-24T04:32:27 | CC-MAIN-2020-50 | 1606141171077.4 | [] | docs.aws.amazon.com |
Session cards
From Genesys Documentation
This topic is part of the manual Genesys Predictive Engagement Administrator's Guide for version Current of Genesys Predictive Engagement.
Contents
Feature coming soon!Learn how to configure the sessions data that agents can see.
Related pages:
Prerequisites
- Configure the following permissions in Genesys Cloud:
- Journey > Session Type > View
- Journey > Session Type > Add
- Journey > Session Type > Edit
- Journey > Session Type >
- Journey > Event Type > View
- Journey > Event Type > Add
- Journey > Event Type > Edit
- Journey > Event Type >
Overview
When agents interact with customers, they see session cards that present the history of session. Each session type has its own session card layout.
You can configure the appearance of session cards in the following ways:
- Custom session cards
- Web session cards
- Conversation session cards
- No configuration options
Preview the layout of a session card
- In Admin in Genesys Cloud, open the Session Library page.
- Click session name and then click the Session card tab.
- Click the session card to see how the full journey context map will appear.
Layout
- Train tracks: Best for web-based activity. Shows a user's path through your website and the events that occur at each step in their journey.
- List: Displays activity in list form.
- Linear: Best for conversational activity.
Icon
Title
For custom sessions and web sessions, you can set the session card title. The session card title is for all sessions of the given session type.
- All web sessions have the same type. By default, the title for all web sessions is Card title. You can change the title to Web Visit to clarify the customer's interaction.
- Each custom session has its own type. In the Bike delivery scenario, the session type is delivery. The card title for all delivery sessions is Delivery. You can set a different session card name for each custom session type.
Segments matched and outcomes achieved
Session attributes
For custom events, you can select which session attributes appear across the top of session cards. You can select up to three session attributes. Select the session attributes that appear on the card and then save your changes. The session attributes appear in order, from left to right, across the top of the card.
Make a session card visible to agents
Set the Display to agents toggle for the session card to Yes. This toggle appears at the top of every page in the Session Library. For more information, see Display session cards to agents. | https://all.docs.genesys.com/ATC/Current/AdminGuide/Session_card | 2020-11-24T03:44:52 | CC-MAIN-2020-50 | 1606141171077.4 | [] | all.docs.genesys.com |
This section of the documentation provides some hands-on tasks to help you to build your first applications with the SDK.
This section of the documentation provides hands-on tasks that help you develop simple applications using the SDK. To keep things simple modifications are made to a standard Xcode template, Single View Application. The first tutorial in the series can be used to test you have everything set up correctly.
Note: These tutorials generally assume you have downloaded the Mobile SDK source code into the directory mobile_sdk/iOS. If you have not already done so, you can find instructions on how to do this in the task Obtaining the source code. | https://docs.alfresco.com/mobile_sdk/ios/concepts/tutorials.html | 2020-11-24T04:16:50 | CC-MAIN-2020-50 | 1606141171077.4 | [] | docs.alfresco.com |
Use the Approve & Manage link under Comment Control to approve and manage comments submitted by end users about Knowledge Base articles.
The Approve & Manage link uses the Bamboo KB Rating and Comments List. To edit the columns used in the list, edit the list settings. To do this, navigate to the list by selecting Site Actions > View All Site Content > Lists > Bamboo KB Ratings and Comments. Use the ribbon to edit the list settings by selecting List > List Settings.
Warning: If you are using workflows to approve and manage comments, do not manually approve comments. Doing so will stop the workflow task from approving comments, and the Workflow Task List will display open tasks. Choose to approve comments using only one method (workflows or manual approval). | https://docs.bamboosolutions.com/document/comment_control/ | 2020-11-24T04:05:19 | CC-MAIN-2020-50 | 1606141171077.4 | [array(['/wp-content/uploads/2017/06/sa05-2010-usingadmin20.jpg',
'sa05-2010-usingadmin20.jpg'], dtype=object) ] | docs.bamboosolutions.com |
Universal Store: Journey to Continuous Delivery and DevOps
By: Sam Guckenheimer
Overview
The “Microsoft Universal Store Team” (UST) is the main commercial engine of Microsoft with the mission to bring One Universal Store for all commerce at Microsoft. The UST encompasses everything Microsoft sells and everything others sell through the company, consumer and commercial, digital and physical, subscription and transaction, via all channels and storefronts.
To achieve the objective of one Universal Store, UST had to bring together multiple teams from across Microsoft with different engineering systems, engineering culture, and processes into one streamlined delivery cadence catering to cost, agility and quality needs of the business. Continuous Delivery is a key part of the UST transformation. Azure DevOps in the Microsoft One Engineering System (1ES) made continuous delivery practical for the UST. This case study provides an overview of the approach, the challenges and the process adopted to help achieve the objective.
DevOps Improvements at a Glance
The UST piloted Azure DevOps in December 2015 through Feb 2016 and went into production in March 2016. The fastest way to see the changes is to contrast a few key metrics, showing improvement of 40x to 8000x year over year.
Before DevOps
Before moving to DevOps, the UST had seven disparate engineering systems, with source code distributed across multiple TFS instances, many custom build, test and deployment solutions, with different release workflows, policy requirements and compliance needs.
Culturally, teams had varying levels of delivery maturity and engineering processes. Business problems came first, and engineering debt accumulated on different legacy systems over a period of time. Teams also had been reorganized and felt fatigue migrating from one custom solution to another. On top of that, the 1ES was not mature enough to handle UST’s scale. No-one was excited about continuous delivery since it involved yet another ES migration.
Guiding Principles Moving to DevOps
The UST put a set of principles in place to make the transformation to DevOps a “north star”.
1ES Alignment. Leverage the investment Microsoft was making more broadly in a common engineering system. Contribute back to improve the 1ES when appropriate.
Build First, Standardize Next. Prove the best in class systems and tools in practice first, then drive to standardize on them to shorten the learning curve.
Extend to 3rd party. Use the opportunity of internal solutions for 1ES to identify what can work for external customers (“3rd party”) where possible.
Perform While Transform. Reduce cost and cycle time and improve code velocity. Consistently deliver and show incremental value with each iteration.
Continuous learning. Gather constant user feedback to learn and improve continuously.
Self-service. Create a lean operational footprint, with better scale, reliability and performance by allowing engineers to set up their own environments.
Results: Continuous Integration and Continuous Delivery
Git
Git under Azure DevOps is now the standard version control for UST, with more than 2000 Git repos in use among 4000 monthly active users. The UST transitioned from centralized version control to Git in the beginning of 2016 and currently more than 70% of the code bases in UST are in Git. As part of the process, we were also able to drive productivity improvements by consolidating or retiring several legacy services and their code bases.
Package Management
UST retired its custom NuGet servers and moved to Azure DevOps native package management. This has resulted a trusted package feed with higher reliability. Teams are now empowered to manage their own feeds helping with version control and standardization.
Deployment
UST now has more than 10,000 monthly deployments through Azure DevOps Release Management, up from 0 at the beginning of 2016. Most UST services use an experimentation service called AutoPilot. Teams therefore required help in moving their workflows into Azure DevOps with deployment into AutoPilot. We had a-la-carte of tools to help this including a custom solution for E2E delivery (from modeling to release) and Azure DevOps tasks for deploying binaries to AutoPilot. This resulted in a 72% MoM growth of Azure DevOps deployments within the first 6 months.
Agile Work Management
The UST leverages the work management features heavily to track stories, scenarios and work items at the team level. At the same time, the UST wanted to consolidate portfolio views integrating multiple data sources by organizational structure. To help with this, the UST created a Team Map to align organizational data to other data domains such as Azure DevOps Area Paths.
Continuous Testing in DevOps
Testing practices had to change radically to enable this level of automation. In UST, teams had set of tests which they wanted to reuse. UST provided them with plug-ins for running existing Selenium and JavaScript test cases to unblock them with integration issues. Currently 30% of our builds have automated tests as part of the build definitions.
Microservices
Moving to DevOps has enabled an architectural move to refactor previous monoliths into microservices. The UST uses a declarative service model to decouple its functions into discrete services.
Internal Open Source: Contributions Back to Azure DevOps
When UST needed more capabilities in Azure DevOps to accelerate the adoption of continuous delivery, UST contributed to the Azure DevOps project directly. In this way, UST could make Azure DevOps better for everyone rather than create custom tooling. The following table lists the Azure DevOps features contributed to by UST.
Key learnings
Continuous Delivery is the Goal
Moving to Continuous Delivery requires changes in process and culture, not just tools. To get the business results, it is not enough to move code and process. You have to see the value in faster delivery to end-users.
Focus on small wins
The UST split its transformation into several phases – a phase each for work items, code, build, test and release / deployment. The pace and timing varied from one team to the next. Teams could choose to onboard to any phase in any order, although the typical sequence was to migrate code first, and then build, test and deployment. The teams would plan the work for onboarding for each phase and create a set of actions and a timeline based on their individual needs.
Kickstart the effort with coaching
Embedded SMEs partnered with teams to migrate their first service and coach users through onbarding their first project and overcoming the adoption barriers.
Enable developer self service
A principle was to help engineers and teams move as fast as they wanted. UST avoided central processes that hinder agility. For tooling, the UST created extensions in Azure DevOps extensions to automate release workflows. Self-help tools facilitated repetitive management tasks. Individual teams could determine their own policy and compliance tasks.
Summary
The adoption of retail Azure DevOps and 1ES tools have been critical to the success of UST. They enable a high degree of collaboration, a consistent way of managing code and releases across several services and teams and a means to share knowledge and best practices in a diverse environment. | https://docs.microsoft.com/en-us/azure/devops/learn/devops-at-microsoft/universal-store-journey-continuous-delivery-devops | 2020-11-24T04:29:30 | CC-MAIN-2020-50 | 1606141171077.4 | [array(['../_img/ust-cd-1.png', 'Before DevOps Figure 1'], dtype=object)
array(['../_img/ust-cd-2.png', 'Git under Azure DevOps Figure 2'],
dtype=object)
array(['../_img/ust-cd-3.png', 'Agile Work Management Figure 3'],
dtype=object)
array(['../_img/ust-cd-4.png',
'Contributions Back to Azure DevOps Figure 4'], dtype=object)] | docs.microsoft.com |
Troubleshooting USB Passthrough Devices documentation.
-.
- If you resume a suspended virtual machine that has a Linux guest operating system, the resume process might mount the USB devices at a different location on the file system.
-.
-. | https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.vm_admin.doc/GUID-6D2A3FFF-8913-4BB4-9F41-2E1B3B78FEAC.html | 2020-11-24T04:07:47 | CC-MAIN-2020-50 | 1606141171077.4 | [] | docs.vmware.com |
Microsoft SQL Database Audit Logs
You can track database administrative activity via Microsoft SQL Server for log search and custom alerts on Windows machines.
Before You Begin
In order to collect database audit logs, you must enable auditing of the SQL server logs. You can read more about auditing a database here:.
Use an account that has access to the Windows Security or Application Log.
To accomplish this, add a service account to the local Event Log Readers group.
To enable auditing of the SQL server database:
- Open a command window to configure the audit object access setting.
- Run the following command as an administrator:
auditpol /set /subcategory:"application generated" /success:enable /failure:enable
- Run the following command to grant the generate security audits permission to an account:
secpol.msc
- Go to the Local Security Policy tool and open Security Settings > Local Policies >.
- To create a server audit, open SQL Server Management Studio.
- In "Object Explorer," expand the Security folder.
- Right-click the Audits folder and select New Audit.
- Fill in the fields and choose either Windows Application log or Windows Security log as the audit destination.
In order to audit the Windows Security log, you must have access to the Event Log Readers on your local machine.
- When you are finished, click OK.
- Right click the newly created Audit and select Enable Audit.
- To create a server audit specification, go to "Object Explorer" and click the plus sign to expand the "Security" folder.
- Right-click the Server Audit Specifications folder and select New Server Audit Specification.
- Enter a name, choose the server audit created above, and configure the audit action types you want to log.
- For example, you could log the following:
- When you are finished, click OK.
- Right click the newly created Audit Specification and select Enable Audit Specification.
How to Configure This Event Source
- From your dashboard, select Data Collection on the left hand menu.
- When the Data Collection page appears, click the Setup Event Source dropdown and choose Add Event Source.
- From the “Raw Logs” section, click the Database Audit Logs icon. The “Add Event Source” panel appears.
- Choose your collector and event source. You can also name your event source if you want.
- Choose the timezone that matches the location of your event source logs.
- In the "Server" field, enter the IP address or the machine name of the server.
- In the "User Domain" field, enter the the domain of your credentials.
- Select existing credentials or create a new credential.
- In the "Password" field, enter the password for the SQL server.
- Click Save.
No Default Alerts
Please note that database audit logs do not have alerts built-in by default. You must create your own alerts.
Did this page help you? | https://docs.rapid7.com/insightidr/database-audit-logs/ | 2020-11-24T03:01:05 | CC-MAIN-2020-50 | 1606141171077.4 | [array(['/areas/docs/_repos//product-documentation__master/f9d3e5e101c2b1c3b68bc988ad2b8944b48ae896/insightidr/images/audit action types.jpg',
None], dtype=object) ] | docs.rapid7.com |
Jobs API - Getting information about jobs¶
GET
/v0/jobs/(.+)¶
There’re some operations that can start a job in Tinybird:
Import data via URL
Run a query
Populate data to a data source
When any of these operations start, the response contains a
job_idfield. We can get the information of this job as follows:
curl \ -H "Authorization: Bearer <token>" \ "" \
Depending on the job
kind(
import,
query, or
populate), it will return certain information related with the specific job, along with the status of the job.
Job
statuscan be one of the following:
waiting: The initial status of a job. When creating a job, it has to wait if there’re other jobs running
working: Once the job operation has started
done: The job has finished successfully
error: The job has finished with an error
{ "id": "c8ae13ef-e739-40b6-8bd5-b1e07c8671c2", "job_id": "c8ae13ef-e739-40b6-8bd5-b1e07c8671c2", "kind": "import", "status": "done", "statistics": { "bytes": 1913, "row_count": 2 }, "datasource": { "id": "t_0ab7a11969fa4f67985cec481f71a5c2", "name": "your_datasource_name", "cluster": null, "tags": {}, "created_at": "2020-07-15 10:52:21.900886", "updated_at": "2020-07-15 10:52:22.335639", "statistics": { "bytes": 1913, "row_count": 2 }, "replicated": false, "version": 0, "project": null, "used_by": [] } }
If there’s been an error in the import operation, the job response will also include a detailed error:
{ "id": "1f6a5a3d-cfcb-4244-ba0b-0bfa1d1752fb", "job_id": "1f6a5a3d-cfcb-4244-ba0b-0bfa1d1752fb", "kind": "import", "status": "error", "statistics": null, "datasource": { "id": "t_02043945875b4070ae975f3812444b76", "name": "your_datasource_name", "cluster": null, "tags": {}, "created_at": "2020-07-15 10:55:12.427269", "updated_at": "2020-07-15 10:55:12.427270", "statistics": null, "replicated": false, "version": 0, "project": null, "used_by": [] }, "quarantine_rows": 0, "invalid_lines": 0, "error": "There was an error with file contents", "errors": [ "There are blocks with errors", "failed to normalize the CSV chunk: [DB error] Cannot read DateTime: unexpected number of decimal digits for time zone offset: 6" ] } | https://docs.tinybird.co/api-reference/jobs-api.html | 2020-11-24T02:55:16 | CC-MAIN-2020-50 | 1606141171077.4 | [] | docs.tinybird.co |
The Genesis Portfolio Pro plugin allows you to easily change the number of Portfolio Items shown on the Portfolio Archive page.
- From the admin, navigate to Portfolio Items → Archive Settings → Items Per Page
- Enter the number you want to display into the Archives show at most field
- Click Save Changes
| https://docs.bizbudding.com/classic-docs/change-the-number-of-portfolio-items-per-page/ | 2020-11-24T04:15:12 | CC-MAIN-2020-50 | 1606141171077.4 | [array(['https://s3.amazonaws.com/helpscout.net/docs/assets/5a03f50f2c7d3a272c0d866f/images/5a3b216c2c7d3a1943676ba0/file-7WqGrkhszw.png',
None], dtype=object) ] | docs.bizbudding.com |
MapControl for Windows Forms and WPF
The MapControl class enables you to display a symbolic or photorealistic map).
This control shows rich and customizable map data including road maps, aerial, 3D, views, directions, search results, and traffic. You can also display the user's location, directions, and points of interest. MapControl
This control wraps an instance of the UWP Windows.UI.Xaml.Controls.Maps.MapControl class..
Requirements
Before you can use this control, you must follow these instructions to configure your project to support XAML Islands.
Known issues and limitations
See our list of known issues for WPF and Windows Forms controls in the Windows Community Toolkit repo.
Syntax
<Window x:
Code example
private async void MapControl_Loaded(object sender, RoutedEventArgs e) { // Specify a known location. BasicGeoposition cityPosition = new BasicGeoposition() { Latitude = 47.604, Longitude = -122.329 }; var cityCenter = new Geopoint(cityPosition); // Set the map location. await (sender as MapControl).TrySetViewAsync(cityCenter, 12); }
Private Async Sub MapControl_Loaded(sender As Object, e As RoutedEventArgs) Dim cityPosition As BasicGeoposition = New BasicGeoposition() With { .Latitude = 47.604, .Longitude = -122.329 } Dim cityCenter = New Geopoint(cityPosition) Await (TryCast(sender, MapControl)).TrySetViewAsync(cityCenter, 12) End Sub
Properties
The following properties wrap corresponding properties of the wrapped UWP Windows.UI.Xaml.Controls.Maps.MapControl object. See the links in this table for more information about each property.
Methods
The following methods wrap corresponding methods of the wrapped UWP Windows.UI.Xaml.Controls.Maps.MapControl object. See the links in this table for more information about each method.
Events
The following events wrap corresponding events of the wrapped UWP Windows.UI.Xaml.Controls.Maps.MapControl object. See the links in this table for more information about each event. | https://docs.microsoft.com/fr-fr/windows/communitytoolkit/controls/wpf-winforms/mapcontrol | 2020-11-24T04:47:22 | CC-MAIN-2020-50 | 1606141171077.4 | [array(['../../resources/images/controls/mapcontrol.png',
'MapControl example'], dtype=object) ] | docs.microsoft.com |
Created.
Note: Many of the resources referenced here redirect to the core Unity Ads documentation.
Unity recommends always using the latest Ads SDK. The APIs discussed in many of these articles are only available in SDK versions 3.0 and higher.
If you are new to Unity Ads, follow these basic steps before implementation:
The following implementation resources are for games that are made with Unity:
Beyond implementation, Unity empowers you to fine-tune your strategy:
Have questions? We’re here to help! The following resources can assist in addressing your issue: | https://docs.unity3d.com/cn/2018.3/Manual/UnityAds.html | 2020-11-24T04:32:31 | CC-MAIN-2020-50 | 1606141171077.4 | [] | docs.unity3d.com |
Godot Docs – 3.0 branch¶
Tip
This is the documentation for the stable 3.0 branch. Looking for the documentation of the current development branch? Have a look here. For the stable 2.1 branch, it’s here.. translate present documentation into your
language, or talk to us on either the
#documentation
channel on Discord, or the
#godotengine-doc channel on irc.freenode.net!
The main documentation for the site is organized into the following sections: | https://godot-es-docs.readthedocs.io/en/stable/ | 2020-11-24T04:33:29 | CC-MAIN-2020-50 | 1606141171077.4 | [array(['https://hosted.weblate.org/widgets/godot-engine/-/godot-docs/287x66-white.png',
'Translation state'], dtype=object) ] | godot-es-docs.readthedocs.io |
Can Phoenix perform a File level backup of Windows drives A:\ and B:\
This article applies to:
- OS: Windows
- Product edition: Phoenix
Can Phoenix perform a File level backup of the Windows drives A:\ and B:\?
A:\ and B:\ drives (or drive letters) are usually reserved for floppy disk drives. If the device does not have a floppy disk drive, the drive letters A and B can be assigned to volumes.
Phoenix agent checks whether a drive is fixed or removable and ignores all the removable drives. Since A:\ and B:\ are removable drives by default, Phoenix ignores their backup. | https://docs.druva.com/Knowledge_Base/Phoenix/FAQs_and_Reference_Reads/Can_Phoenix_perform_a_File_level_backup_of_Windows_drives_A%3A%5C_and_B%3A%5C | 2020-11-24T03:59:33 | CC-MAIN-2020-50 | 1606141171077.4 | [] | docs.druva.com |
Troubleshooting
Unable to start the NFS server on Phoenix Backup Store
This error can occur if the NFS server cannot find the backup mount path on the Phoenix Backup Store.
Resolution
Check if the backup mount path is present in the /etc/exports file. If the path is missing, add the path to the backup mount in the /etc/exports file manually.
Unable to start the Phoenix Backup Store service
This issue can occur due to multiple reasons.
Resolution
- If the NFS service is not running, start the NFS service.
- For other issues, contact Druva Support.
Disabled a disconnected Phoenix Backup Store but the Phoenix Backup Store is in "waiting to disable" state.
This happens when the Phoenix Cloud waits for the Phoenix Backup Store to get connected so that it can disable it. However, if the Phoenix Backup Store is decommissioned or there is no way it can connect to the Phoenix Cloud, it stays in the waiting to disable state on the Phoenix Management Console.
Resolution
- Re-register the Phoenix Backup Store
- Contact Druva Support to get it disabled on the Phoenix Management Console
Replaced an old Phoenix Backup Store with a new Phoenix Backup Store and then disabled the new Phoenix Backup Store. However, it didn't get disabled and it is in "waiting to disable" state.
This happens when the PhoenixBackupStore service is still running on the old Phoenix Backup Store.
Resolution
Contact Druva Support to disable the new Phoenix Backup Store and stop the PhoenixBackupStore service on the old Phoenix Backup Store. For more information, see prerequisites to re-registering a Phoenix Backup Store.
Unable to deploy a Phoenix Backup Store on a VMware setup.
The Phoenix Backup Store deployment fails when the MD5 checksum of the downloaded OVA package does not match with the MD5 checksum of the package mentioned on the downloads page. This can occur if there was a problem with the download.
Resolution
Ensure that the MD5 checksum of the downloaded OVA package matches with the MD5 checksum on the downloads page before deploying the Phoenix Backup Store OVA package. The MD5 checksum is mentioned below the Phoenix Backup Store package. | https://docs.druva.com/Phoenix/030_Configure_Phoenix_for_Backup/060_Backup_and_Restore_Oracle_Databases/060_Troubleshooting_and_FAQs/010_Troubleshooting | 2020-11-24T04:23:00 | CC-MAIN-2020-50 | 1606141171077.4 | [array(['https://docs.druva.com/@api/deki/files/3644/tick.png?revision=2',
'File:/tick.png'], dtype=object)
array(['https://docs.druva.com/@api/deki/files/3644/tick.png?revision=2',
'File:/cross.png'], dtype=object)
array(['https://docs.druva.com/@api/deki/files/3644/tick.png?revision=2',
'File:/tick.png'], dtype=object) ] | docs.druva.com |
Adding and Binding Services Using Apps Manager
Page last updated:
This topic describes how to use Apps Manager to add and bind service instances through the Marketplace. The Marketplace provides users with self-service, on-demand provisioning of add-on services.
For more information about how to add Managed Services to your Cloud Foundry deployment, refer to the Services topics.
To use a service with your application, you must access the Services Marketplace, create and configure an instance of the service, then bind the service instance to your application.
Step 1: Access the Marketplace
Follow the steps below to access the Marketplace.
Log in to Apps Manager for your Cloud Foundry deployment.
In the left navigation panel, click Marketplace.
Note: You can also access the Marketplace from a Space page or from an App Dashboard.
Step 2: Create and Configure a Service Instance
Follow the steps below to create and configure an instance of a service.
In the. | https://docs.pivotal.io/application-service/2-9/appsman-services/adding-services-marketplace.html | 2020-11-24T04:27:02 | CC-MAIN-2020-50 | 1606141171077.4 | [] | docs.pivotal.io |
- Add the WebView dashlet to the default dashboard for users.
The WebView dashlet is not part of the default dashboard for users so we need to add it in order to be able to work with it when implementing this customization.
The easiest way to add a dashlet permanently to the user dashboard is to define a new preset for the dashboard layout with id user-dashboard. Create a new presets directory under the aio/aio-share-jar/src/main/resources/alfresco/web-extension/site-data directory.Now, add a file called presets.xml to the new presets directory:
<?xml version='1.0' encoding='UTF-8'?> <presets> <!-- Override wll known preset used to generate the default User dashboard. Add the Web View Dashlet so we can check if customization works. --> <preset id="user-dashboard"> <components> <!-- title <component> <scope>page</scope> <region-id>title</region-id> <source-id>user/${userid}/dashboard</source-id> <url>/components/title/user-dashboard-title</url> </component> --> <!-- dashboard components --> <component> <scope>page</scope> <region-id>full-width-dashlet</region-id> <source-id>user/${userid}/dashboard</source-id> <url>/components/dashlets/dynamic-welcome</url> <properties> <dashboardType>user</dashboardType> </properties> </component> <component> <scope>page</scope> <region-id>component-1-1</region-id> <source-id>user/${userid}/dashboard</source-id> <url>/components/dashlets/my-sites</url> </component> <component> <scope>page</scope> <region-id>component-1-2</region-id> <source-id>user/${userid}/dashboard</source-id> <url>/components/dashlets/my-tasks</url> </component> <component> <scope>page</scope> <region-id>component-2-1</region-id> <source-id>user/${userid}/dashboard</source-id> <url>/components/dashlets/my-activities</url> </component> <component> <scope>page</scope> <region-id>component-2-2</region-id> <source-id>user/${userid}/dashboard</source-id> <url>/components/dashlets/webview</url> </component> <component> <scope>page</scope> <region-id>component-2-3</region-id> <source-id>user/${userid}/dashboard</source-id> <url>/components/dashlets/my-documents</url> <properties> <height>240</height> </properties> </component> </components> <pages> <page id="user/${userid}/dashboard"> <title>User Dashboard</title> <title-id>page.userDashboard.title</title-id> <description>Users dashboard page</description> <description-id>page.userDashboard.description</description-id> <template-instance>dashboard-2-columns-wide-right</template-instance> <authentication>user</authentication> </page> </pages> </preset> </presets>Here we have included the WebView dashlet as component-2-2, so it will be displayed in column 2 and row 2 in the Dashboard layout. If you do not know the url for the dashlet, then just add it manually to the Dashboard and use SurfBug to identify what url that is used.
- Identify the web script that delivers the content that should be customized.
For this we use the SurfBug tool. Once the tool is activated (from) we can identify the web script as follows:
Here we have scrolled down a bit on the Dashboard page so we have the WebView dashlet in front of us. Then we have clicked on the WebView dashlet. This brings up the above black box that contains information about what web script that is delivering the content for the dashlet. In this case it is the webview.get.* web script in package org.alfresco.components.dashlets that we need to target. You can also identify the web script via the URL (that is, /components/dashlets/webview).
- In the Share JAR project create a new web script override package org.alfresco.tutorials.customization.webview.controller.
The directory path that needs to be created is: aio/aio-share-jar/src/main/resources/alfresco/web-extension/site-webscripts/org/alfresco/tutorials/customization/webview/controller.
We can choose any package path we want and then specify it in the Surf Extension Module, we will see this in a bit. However, it is important that we use a package path that will not clash with another Extension Module, deployed by some other JAR.
For example, if we just used org.alfresco.tutorials.customization.webview and then another JAR was deployed with some other customization to the WebView dashlet, using the same package path. Then if one extension module is undeployed its customizations will still be picked up if the other module is active. This is because both modules are using the same package path.
- Add our version of the web script controller file called webview.get.js to the /tutorials/customization/webview/controller directory:
if (model.isDefault == true) { model.widgets[0].options.webviewTitle = "Alfresco!"; model.widgets[0].options.webviewURI = ""; model.widgets[0].options.isDefault = false; }This controller will be processed after the out-of-the-box WebView controller. So what we are doing is just adding some stuff to the model widgets to tell the dashlet to load the Alfresco home page by default.
By inspecting the source of both the out-of-the-box controller and the template, you can work out what model properties the template is using (see tomcat/webapps/share/WEB-INF/classes/alfresco/site-webscripts/org/alfresco/components/dashlets). This allows you to determine whether or not you can update the model after the base controller but before the template to create the required result.
- Add a new Surf Extension Modules file called customize-webscript-controller-extension-modules.xml to the aio/aio-share-jar/src/main/resources/alfresco/web-extension/site-data/extensions directory (note. it is important to give this file a unique name when several Share JARs are installed, otherwise the last one wins):
<extension> <modules> <module> <id>Customize Web Script Controller for Web View Dashlet</id> <version>1.0</version> <auto-deploy>true</auto-deploy> <customizations> <customization> <targetPackageRoot>org.alfresco.components.dashlets</targetPackageRoot> <sourcePackageRoot>org.alfresco.tutorials.customization.webview.controller</sourcePackageRoot> </customization> </customizations> </module> </modules> </extension>
This extension module identifies the package with the web script that we want to override by setting the targetPackageRoot property. When we have set what web script to override we use the sourcePackageRoot property to tell Alfresco where to pick up the customized web script files.
This module will be deployed automatically when the application server is started as we have the auto-deploy property set to true.
- The implementation of this sample is now done, build and start the application server as follows:
/all-in-one$ ./run.sh build_startNote. when defining presets for sites and users know that they are stored in the database after first time usage. In this tutorial we defined a new user preset to display a slightly different user dashboard. If a user, such as admin that we most likely use with the SDK, has been logging in before this customization was applied, then that user will already have a user dashboard preset in the database. So the customization will not appear to work. But wipe out alf_data_dev, if you can, and restart and you will see that it works.
- Now, log in to Share () and you will see the WebView dashlet loaded with the home page:Note: A Surf Extension module like this can be deployed and undeployed during runtime. And this means that an Administrator can control when different customizations should be visible or hidden. This is managed via the Module deployment page that can be found at:.
The custom JavaScript is executed after the original. The original JavaScript sets up an initial model object, which the default FreeMarker template can use to render content. Controller extensions then have the opportunity to change that model and thus change the rendered output. Using this approach is dependent upon the template making use of the changed model content - just adding content to the model will have no effect unless the template is also updated to make use of the additional model data.
It is not always possible to use this approach to customize existing components, as it depends on how the JavaScript controller and template are implemented, but the approach is worth exploring. | https://docs.alfresco.com/6.2/tasks/dev-extensions-share-tutorials-js-customize.html | 2020-11-24T03:56:49 | CC-MAIN-2020-50 | 1606141171077.4 | [] | docs.alfresco.com |
Master Views
Master views are alternate versions of masters designed for specific contexts. They allow you to create a master once and then rearrange, resize, and restyle its widgets to fit each context you intend to use it in. When you add an instance of a master to the canvas, you can choose which of its views to display.
Adding Views to a MasterAdding Views to a Master
To add master views to a master, start by opening the master on the canvas for editing. Click the canvas to focus the master itself, and then click Add Master Views in the Style pane to open the Master Views dialog, where you'll create and manage your views.
To remove master views from a master, click Remove Views at the top-right of the Style pane.
Master View InheritanceMaster View Inheritance
Master views are organized into chains of inheritance. The first link in the chain, the view from which all others inherit, is the Base view. Each view you add inherits its widgets and widget properties either directly from the Base view or from another view in the chain.
For example, the chain of inheritance for a button master might looks like this:
Primary Button (Base) > Secondary Button > Text Link Button
Edits made in the Primary Button view would be reflected in both the Secondary Button and Text Link Button views as well.
Edits made in the Secondary Button view would be reflected in the Text Link Button view but not in the Primary Button view.
Edits made in the Text Link Button view would only affect that view.
Editing Diagrams in Master ViewsEditing Diagrams in Master Views
Once you've added master views to a master, you can access each view by clicking its name at the top of the canvas. The color of a view's name indicates whether or not it will be affected by edits you make on the canvas:
- master view edit inheritance, we suggest you take a top-down approach to editing your diagrams, starting in the Base view and then working your way down the chain.
Cross-View Widget PropertiesCross-View Widget Properties
You can change the visual styling, size, and position of widgets freely across master shared across views.
If you need a cross-view widget property to vary across your master views, create an additional copy of the widget for each variation and use the "unplace" feature to choose which version of the widget appears in each view.
Unplaced WidgetsUnplaced Widgets
"Unplaced" widgets are widgets that are included in some of a master's views but not in others. Any widgets that have been unplaced from the current view are listed in red in the Outline pane.
Note
To ensure that you see both placed and unplaced widgets in the Outline pane, click the Sort and Filter icon at the top-right of the pane and select Placed or Unplaced.
master instead of just unplacing it.
Choosing a Master View on the CanvasChoosing a Master View on the Canvas
When you add an instance of a master to the canvas, use the Master Views dropdown in the Style pane to choose which of its views to display.
Master Views and Adaptive ViewsMaster Views and Adaptive Views
You can set up your master views to work in conjunction with your adaptive view sets. Make sure that your master views have the same names and inheritance structure as your adaptive views, and your master views will switch automatically in the browser along with your adaptive views. | https://docs.axure.com/axure-rp/reference/master-views/ | 2020-11-24T04:13:14 | CC-MAIN-2020-50 | 1606141171077.4 | [array(['/assets/screenshots/axure-rp/master-views1.png', None],
dtype=object)
array(['/assets/screenshots/axure-rp/master-views2.png', None],
dtype=object)
array(['/assets/screenshots/axure-rp/master-views3.png', None],
dtype=object)
array(['/assets/screenshots/axure-rp/master-views4.png', None],
dtype=object)
array(['/assets/screenshots/axure-rp/master-views5.png', None],
dtype=object)
array(['/assets/screenshots/axure-rp/master-views6.png', None],
dtype=object)
array(['/assets/screenshots/axure-rp/master-views7.png', None],
dtype=object)
array(['/assets/screenshots/axure-rp/adaptive-views10.png', None],
dtype=object)
array(['/assets/screenshots/axure-rp/master-views8.png', None],
dtype=object) ] | docs.axure.com |
Why can't I see or add my organization's repositories?¶
If you can't see or add your organization's repositories, or have any problems regarding metrics (for example, you can't see any issues and your pull requests aren't analyzed), please check if your user account has a duplicated copy of the repository on the organization.
The ideal scenario for your organization's repositories is to have a unique copy of it added to your Codacy organization, by someone with write permissions on the repository.
In case you have a duplicated repository on your account, please delete it and use only the one available in your organization.
In the unlikely event of not seeing repositories for one or multiple organizations, please go to your GitHub settings and revoke the Codacy OAuth application.
After revoking Codacy from the GitHub Authorized OAuth Apps, go back to Codacy and add a repository to see the Authorize Codacy menu. You may have to click GitHub on the sidebar to request Codacys's permission on GitHub's side.
Click "Grant" on each organization, to see their repositories on Codacy.
If this didn't solve your issue, be sure to also check out the following pages: | https://docs.codacy.com/faq/repositories/why-cant-i-see-or-add-my-organizations-repositories/ | 2020-11-24T03:41:44 | CC-MAIN-2020-50 | 1606141171077.4 | [array(['../images/github-revoke-codacy.png',
'Revoking the Codacy OAuth application'], dtype=object)
array(['../images/github-authorize-codacy.png', 'Authorize Codacy'],
dtype=object) ] | docs.codacy.com |
.gitignore API
In GitLab, there is an API endpoint available for
.gitignore. For more
information on
gitignore, see the
Git documentation.
List
.gitignore templates
Get all
.gitignore templates.
GET /templates/gitignores
Example request:
curl
Example response:
[ { "key": "Actionscript", "name": "Actionscript" }, { "key": "Ada", "name": "Ada" }, { "key": "Agda", "name": "Agda" }, { "key": "Android", "name": "Android" }, { "key": "AppEngine", "name": "AppEngine" }, { "key": "AppceleratorTitanium", "name": "AppceleratorTitanium" }, { "key": "ArchLinuxPackages", "name": "ArchLinuxPackages" }, { "key": "Autotools", "name": "Autotools" }, { "key": "C", "name": "C" }, { "key": "C++", "name": "C++" }, { "key": "CFWheels", "name": "CFWheels" }, { "key": "CMake", "name": "CMake" }, { "key": "CUDA", "name": "CUDA" }, { "key": "CakePHP", "name": "CakePHP" }, { "key": "ChefCookbook", "name": "ChefCookbook" }, { "key": "Clojure", "name": "Clojure" }, { "key": "CodeIgniter", "name": "CodeIgniter" }, { "key": "CommonLisp", "name": "CommonLisp" }, { "key": "Composer", "name": "Composer" }, { "key": "Concrete5", "name": "Concrete5" } ]
Single
.gitignore template
Get a single
.gitignore template.
GET /templates/gitignores/:key
Example request:
curl
Example response:
{ "name": "Ruby", "content": "*.gem\n*.rbc\n/.config\n/coverage/\n/InstalledFiles\n/pkg/\n/spec/reports/\n/spec/examples.txt\n/test/tmp/\n/test/version_tmp/\n/tmp/\n\n# Used by dotenv library to load environment variables.\n# .env\n\n## Specific to RubyMotion:\n.dat*\n.repl_history\nbuild/\n*.bridgesupport\nbuild-iPhoneOS/\nbuild-iPhoneSimulator/\n\n## Specific to RubyMotion (use of CocoaPods):\n#\n# We recommend against adding the Pods directory to your .gitignore. However\n# you should judge for yourself, the pros and cons are mentioned at:\n#\n#\n# vendor/Pods/\n\n## Documentation cache and generated files:\n/.yardoc/\n/_yardoc/\n/doc/\n/rdoc/\n\n## Environment normalization:\n/.bundle/\n/vendor/bundle\n/lib/bundler/man/\n\n# for a library or gem, you might want to ignore these files since the code is\n# intended to run in multiple environments; otherwise, check them in:\n# Gemfile.lock\n# .ruby-version\n# .ruby-gemset\n\n# unless supporting rvm < 1.11.0 or doing something fancy, ignore this:\n.rvmrc | https://docs.gitlab.com/12.10/ee/api/templates/gitignores.html | 2020-11-24T03:43:53 | CC-MAIN-2020-50 | 1606141171077.4 | [] | docs.gitlab.com |
Dynamically Adding Rows to a Repeater
In this tutorial, you'll learn how to dynamically add rows to repeater widgets using the Add Rows action.
Note
Click here to download the completed RP file for this tutorial.
1. Widget Setup1. Widget Setup
Open a new RP file and open Page 1 on the canvas.
Drag a repeater widget, a text field widget, and a button widget onto the canvas.
Set the button's text to
Add New Row.
2. Add a Row to the Repeater When the Button Is Clicked2. Add a Row to the Repeater When the Button Is Clicked
Select the button widget and click New Interaction in the Interactions pane.
Select the Click or Tap event in the list that appears, and then select the Add Rows action.
Select the repeater widget in the Target dropdown.
Click the Add Rows button. In the Add Rows to Repeater dialog that appears, click the fx icon to open the Edit Value dialog.
At the bottom of the dialog, click Add Local Variable.
In the third field of the new local variable row, select the text field widget. This local variable will capture the text field's text in the web browser.
In the upper field of the dialog, enter the local variable's name in brackets:
[[LVAR1]]
Click OK to close the Edit Value dialog and then click OK again to close the Add Rows to Repeater dialog.
Click OK in the Interactions pane to save the Add Rows action.
3. Preview the Interaction3. Preview the Interaction
Preview the page and enter some text in the text field.
Click the Add New Row button to add a new row to the repeater. The new row's rectangle widget should display the text from the text field. | https://docs.axure.com/axure-rp/reference/adding-repeater-rows/ | 2020-11-24T03:19:18 | CC-MAIN-2020-50 | 1606141171077.4 | [array(['/assets/screenshots/tutorials/repeaters-adding-rows-setup.png',
None], dtype=object)
array(['/assets/screenshots/tutorials/repeaters-adding-rows-onclick.png',
None], dtype=object) ] | docs.axure.com |
-
Export Data From a Card
When you want to share information from a usage analytics card, for example to share your top documents or content gap results, you can export a light-weighted data report (see Usage Analytics Data Exports). This data can be extracted from any card on your dashboard with the Data Explorer panel, so you can analyze and visualize it.
The data is exported in a CSV file (spreadsheet), which can be read by third-party tools such as Microsoft Excel™.
While you can extract all the data from your reports (see Manage Data Exports), you’re encouraged to use the Data Explorer only for a limited period, so you will have a shorter report.
To export data from a card
Log in to the Coveo Platform as a member of a group that has been granted at least the following privileges of the Analytics service (see Manage Privileges and Analytics Service):
In the navigation bar on the left, under Analytics, select Reports.
Double-click the report containing the card from which you want to extract data.
Hover over the desired card, and then click
at the top right of the card.
In the Data Explorer panel, at the bottom right, click
to download a CSV file with all the card data.
You can also filter the data in the card you want to export by using the Add filter icon
. | https://docs.coveo.com/en/1751/ | 2020-11-24T04:22:55 | CC-MAIN-2020-50 | 1606141171077.4 | [] | docs.coveo.com |
Groovy-Eclipse is the Eclipse tooling support for the programming language. The Groovy-Eclipse allows you to edit, compile, run, and debug both groovy scripts and classes from the Eclipse SDK.
To contribute to the discussion about Groovy-Eclipse, please join our mailing lists: for plugin users and for those interested in the development of the plugin. Bugs can be raised on our jira server.:
- Go to: Help -> Software Updates
- Change to the Available Software tab
- Click on Add Site
- Paste the update site URL appropriate for your version of Eclipse and click OK
- You should now see a Groovy Update Site entry in the list of update sites. Expand and select the Groovy-Eclipse plugin feature and the JDT Core Patch feature. Optionally, you can choose to include the sources.
- Click Install and follow the prompts
- Restart when asked
- Rejoice! You now have the Groovy-Eclipse plugin installed.
Archived snapshots of the plugin
Archived snapshots of the plugin are available as zip files. You can find them here:: this plugin will only install on Eclipse 3.4.2 or Eclipse 3.5 or 3.5.1..
Building the Update Site
Instructions to come later. If you are interested in doing this, please contact the mailing list..
Open issues
We are hard at work, but there are always new issues being raised.com.atlassian.confluence.macro.MacroExecutionException: The URL filter is not available to you, perhaps it has been deleted or had its permissions changed | http://docs.codehaus.org/pages/viewpage.action?pageId=133922863 | 2014-04-16T08:07:26 | CC-MAIN-2014-15 | 1397609521558.37 | [] | docs.codehaus.org |
March 4, 1939
My dear Mr. President:
In accordance with the message from you which Steve Early
gave me this morning, I have informed the British Ambassador
that you will receive him at the Whit House tomorrow, Sunday,
at 9:30 p.m.
I am enclosing herewith a secret memorandum, which the Ambassador
left with me, when I saw him a couple of days ago, as well as
a memorandum of my conversation with the Ambassador. I believe
you may wish to read these two papers before you talk with the
Ambassador.
Believe me
Faithfully yours,
The President,
The White House. | http://docs.fdrlibrary.marist.edu/psf/box32/t304j01.html | 2014-04-16T08:31:32 | CC-MAIN-2014-15 | 1397609521558.37 | [] | docs.fdrlibrary.marist.edu |
- Replication >
- Replica Set Tutorials >
- Replica Set Maintenance Tutorials >
- Change the Size of the Oplog
Change the Size of the Oplog¶
The oplog exists internally as a capped collection, so you cannot modify its size in the course of normal operations. In most cases the default oplog size is an acceptable size; however, in some situations you may need a larger or smaller oplog. For example, you might need to change the oplog size if your applications perform large numbers of multi-updates or deletes in short periods of time.
This tutorial describes how to resize the oplog. For a detailed explanation of oplog sizing, see Oplog Size. For details how oplog size affects delayed members and affects replication lag, see Delayed Replica Set Members.
Overview¶
To change the size of the oplog, you must perform maintenance on each member of the replica set in turn. The procedure requires: stopping the mongod instance and starting as a standalone instance, modifying the oplog size, and restarting the member.
Important
Always start rolling replica set maintenance with the secondaries, and finish with the maintenance on primary member.
Procedure¶
Restart the member in standalone mode.
Tip
Always use rs.stepDown() to force the primary to become a secondary, before stopping the server. This facilitates a more efficient election process.
Recreate the oplog with the new size and with an old oplog entry as a seed.
Restart the mongod instance as a member of the replica set.
Restart a Secondary in Standalone Mode on a Different Port¶
Shut down the mongod instance for one of the non-primary members of your replica set. For example, to shut down, use the db.shutdownServer() method:
db.shutdownServer()
Restart this mongod as a standalone instance running on a different port and without the --replSet parameter. Use a command similar to the following:
mongod --port 37017 --dbpath /srv/mongodb
Create a Backup of the Oplog (Optional)¶
Optionally, backup the existing oplog on the standalone instance, as in the following example:
mongodump --db local --collection 'oplog.rs' --port 37017
Recreate the Oplog with a New Size and a Seed Entry¶
Save the last entry from the oplog. For example, connect to the instance using the mongo shell, and enter the following command to switch to the local database:
use local
In mongo shell scripts you can use the following operation to set the db object:
db = db.getSiblingDB('local')
Use the db.collection.save() method and a sort on reverse natural order to find the last entry and save it to a temporary collection:
db.temp.save( db.oplog.rs.find( { }, { ts: 1, h: 1 } ).sort( {$natural : -1} ).limit(1).next() )
To see this oplog entry, use the following operation:
db.temp.find()
Remove the Existing Oplog Collection¶
Drop the old oplog.rs collection in the local database. Use the following command:
db = db.getSiblingDB('local') db.oplog.rs.drop()
This returns true in the shell.
Create a New Oplog¶
Use the create command to create a new oplog of a different size. Specify the size argument in bytes. A value of 2 * 1024 * 1024 * 1024 will create a new oplog that’s 2 gigabytes:
db.runCommand( { create: "oplog.rs", capped: true, size: (2 * 1024 * 1024 * 1024) } )
Upon success, this command returns the following status:
{ "ok" : 1 }
Insert the Last Entry of the Old Oplog into the New Oplog¶
Insert the previously saved last entry from the old oplog into the new oplog. For example:
db.oplog.rs.save( db.temp.findOne() )
To confirm the entry is in the new oplog, use the following operation:
db.oplog.rs.find()
Restart the Member¶
Restart the mongod as a member of the replica set on its usual port. For example:
db.shutdownServer() mongod --replSet rs0 --dbpath /srv/mongodb
The replica set member will recover and “catch up” before it is eligible for election to primary.
Repeat Process for all Members that may become Primary¶
Repeat this procedure for all members you want to change the size of the oplog. Repeat the procedure for the primary as part of the following step.
Change the Size of the Oplog on the Primary¶
To finish the rolling maintenance operation, step down the primary with the rs.stepDown() method and repeat the oplog resizing procedure above. | http://docs.mongodb.org/manual/tutorial/change-oplog-size/ | 2014-04-16T07:14:28 | CC-MAIN-2014-15 | 1397609521558.37 | [] | docs.mongodb.org |
{"__v":22,"_id":"54333238a807e208003e72d-10-07T00:22:16.153Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"basic_auth":false,"results":{"codes":[]},"settings":"","try":true,"auth":"never","params":[],"url":""},"isReference":false,"order":2,"body":"The most important part! Details on when and how you get paid your earnings...\n[block:callout]\n{\n \"type\": \"success\",\n \"title\": \"Important: Minimum Activity Requirement\",\n \"body\": \"You must have at least 30-days of activity on your integration, and you must have a positive balance of at least $100 USD after chargebacks, before we send a payout.\\n\\n*Your first payment is issued in the pay period for which you quality by earning more than $100 in total.*\"\n}\n[/block]\n\n[block:callout]\n{\n \"type\": \"info\",\n \"title\": \"Currency\",\n \"body\": \"Currently all amounts are reported in US Dollars in our system and stats. You will be paid in US Dollars.\"\n}\n[/block]\n\n[block:html]\n{\n \"html\": \"<a name='header-requirements'></a>\"\n}\n[/block]\n\n[block:api-header]\n{\n \"type\": \"basic\",\n \"title\": \"Requirements\"\n}\n[/block]\n1. __Validated email__: when you sign up, we send a validation link to your email address. Click it, or email [info:::at:::superrewards.com](mailto:[email protected]) to get a new one sent.\n2. __Tax Forms__: complete and upload the right tax form:\n * US citizens (and companies) must upload a completed Form W9 ([PDF link]())\n * Non-US citizens (individuals) must upload a completed Form W8BEN ([PDF link]())\n * Non-US entities (businesses) must upload a completed Form W8BEN ([PDF link]())\n3. __Name and address__: fill in the name and address we will be paying on your Account Settings page.\n4. __Select payout method__: choose the way you want to be paid (eg. wire transfer)\n5. __Method details__: each payout method has other details required, which you can see below in the [Payment Methods](#header-methods) section.\n \nTax forms and payment info are all handled in your [Account Payment Settings]() page in the [Dashboard](). \n[block:api-header]\n{\n \"type\": \"basic\",\n \"title\": \"Tax Forms Required\"\n}\n[/block]\nYou **must** submit one of the following forms to us to receive payment:\n\n * US citizens (and companies) must upload a completed Form W9 ([PDF link]())\n * Non-US citizens (individuals) must upload a completed Form W8BEN ([PDF link]())\n * Non-US entities (businesses) must upload a completed Form W8BEN-E ([PDF link]())\n[block:api-header]\n{\n \"type\": \"basic\",\n \"title\": \"Instructions: W8BEN for non-US individuals\"\n}\n[/block]\nThis form can be overwhelming. There are detailed instructions at [this link](). Below is what we typically need from you.\n\n**Part I**, fill out all fields. Line 5 and 7 can often be left blank, if applicable.\n\n**Part 2** - Skip\n\n**Part 3** - Sign and date the form.\n[block:api-header]\n{\n \"type\": \"basic\",\n \"title\": \"Instructions: W8BEN-E for non-US entities\"\n}\n[/block]\nThis form can be overwhelming. Detailed instructions can be found at [this link](). Below is what we typically need from you.\n\n**Part I**, fill out all fields.\n\n**Part I #4** - All will be \"Corporation\".\n\n**Part I #5** - All will check \"Active NFFE\" unless you're publicly traded.\n\n**Part I #8/#9** - Enter your Tax ID in Section 9b.\n\nSkip everything else... 
until **Part XXV**, check to certify it.\n\n**Sign form.** \n[block:api-header]\n{\n \"type\": \"basic\",\n \"title\": \"When do payouts happen?\"\n}\n[/block]\nSuperRewards pays out every two weeks, for the two-week period a month prior. \n\nAs an example, earnings for June 1-15, 2014 are paid out on July 15th, 2014. June 16-30th earnings get paid out July 31st. \n\nPayments are usually processed within 3 days of the end of the period, depending on weekends or public holidays.\n[block:api-header]\n{\n \"type\": \"basic\",\n \"title\": \"Adjustments & Holdbacks\"\n}\n[/block]\nIf there are significant chargebacks or suspicious behavior on an application or account, we reserve the right to hold reserves until leads have been approved by advertising partners. If offer completions are cancelled by our advertisers, or if payments are rejected by our payment partners, you will not be paid on those leads.\n\nIf your app stops generating activity with us, we will hold funds until we're paid on the last activity on your application by our payment providers and advertisers. \n[block:html]\n{\n \"html\": \"<a name='header-methods'></a>\"\n}\n[/block]\n\n[block:api-header]\n{\n \"type\": \"basic\",\n \"title\": \"Payment Methods\"\n}\n[/block]\nWe can transfer your earnings by:\n* Wire transfer\n* ACH (direct deposit) (US only)\n* PayPal\n* Skrill \n\n**Required Details**\nIn addition to the list in the [Requirements](#header-requirements) section above, each method has other information required:\n[block:parameters]\n{\n \"data\": {\n \"h-0\": \"Payout Method\",\n \"h-1\": \"Details Required\",\n \"0-0\": \"Wire Transfer\",\n \"1-0\": \"ACH Transfer\",\n \"2-0\": \"PayPal\",\n \"0-1\": \"_If you don't know any of these, just contact your bank ask them what details you need to receive an incoming wire._\\n* Bank Name\\n* Bank Address\\n* Account Number\\n* IBAN number (if European)\\n* SWIFT Code\\n\\n_Bank address & home address countries must match._\",\n \"1-1\": \"_Requires a US bank account & US home address_\\n* Bank Name\\n* Bank Address\\n* Account Number\\n* Routing Number\",\n \"2-1\": \"* PayPal email address\",\n \"3-0\": \"Skrill\",\n \"3-1\": \"* Skrill email address\"\n },\n \"cols\": 2,\n \"rows\": 4\n}\n[/block]","excerpt":"Everything you need to know about getting money from SuperRewards.","slug":"basics-getting-paid","type":"basic","title":"Basics: Getting Paid"}
Basics: Getting Paid
Everything you need to know about getting money from SuperRewards. | http://docs.superrewards.com/docs/basics-getting-paid | 2017-03-23T02:09:52 | CC-MAIN-2017-13 | 1490218186608.9 | [] | docs.superrewards.com |
Introduction to iCloud
- PDF for offline use
-
- Sample Code:
-
- Related Articles:
-
- Related SDKs:
-
Let us know how you feel about this
0/250
last updated: 2016-06.
Overview
The iCloud storage API in iOS 5 allows applications to save user documents and application-specific data to a central location and access those items from all the user's devices.
There are four types of storage available:
Key-Value storage - to share small amounts of data with your application on a user's other devices.
UIDocument storage - to store documents and other data in the user's iCloud account using a subclass of UIDocument.
CoreData - SQLite database storage.
Individual files and directories - for managing lots of different files directly in the file system.
This document discusses the first two types - Key-Value pairs and UIDocument subclasses - and how to use those features in Xamarin.iOS.
Requirements
- The leatest stable version of Xamarin.iOS
- Xcode 7 and above
Xamarin Studio 5 or Visual Studio 2013 and newer.
Preparing for iCloud development
Applications must be configured to use iCloud both in the Apple Provisioning Portal and the project itself. Before developing for iCloud (or trying out the samples) follow the steps below.
To correctly configure an application to access iCloud:
Find your TeamID - login to developer.apple.com and visit the Member Center > Your Account > Developer Account Summary to get your Team ID (or Individual ID for single developers). It will be a 10 character string ( A93A5CM278 for example) - this forms part of the "container identifier".
Create a new App ID - To create an App ID, follow the steps outlined in the Provisioning for Store Technologies section of the Device Provisioning guide, and be sure to check iCloud as an allowed service:
Create a new Provisioning Profile - To create a Provisioning Profile, follow the steps outlined in the Provisioning for Store Technologies section of the Device Provisioning guide .
Add your "container identifier" to Entitlements.plist - the container
identifier format is TeamID.BundleID, so using
the examples above would result in A93A5CM278.com.xamarin.samples.icloud. You must use your own TeamID. You can also use the
$(TeamIdentifierPrefix) and
$(CFBundleIdentifier) placeholders in Xamarin Studio's Project Options dialog. Add your container
identifier to the sample's Entitlements.plist under the following two keys:
com.apple.developer.ubiquity-kvstore-identifier com.apple.developer.ubiquity-container-identifiers
This will need to be done manually in Visual Studio, where no advanced plist editor is available.
Configure Xamarin Studio project properties - open the sample project's Options to ensure the iOS Application Identifier is set to the correct Bundle ID and the iOS Bundle Signing has the correct Provisioning Profile and Custom Entitlements file selected. This can all be done in Visual Studio under the project Properties pane.
Enable iCloud on your device - go to Settings > iCloud and ensure that the device is logged in. Select and turn on the Documents & Data option.
You must use a device to test iCloud - it will not work on the Simulator. In fact, you really need two or more devices all signed in with the same Apple ID to see iCloud in action.
Xamarin Studio Entitlements Editor
The latest version of Xamarin Studio now includes an editing UI for the Entitlements.plist file. It will automatically add the $(TeamPrefixIdentifier) to the plist (without showing it):
You can optionally enter the $(CFBundleIdentifier) placeholder rather than provide a hardcoded value. Placeholders, if used, are substituted for the correct values during the build.
This will have to be done manually in Visual Studio.
Key-Value Storage
Key-value storage is intended for small amounts of data that a user might like persisted across devices - such as the last page they viewed in a book or magazine. Key-value storage should not be used for backing-up data.
There are some limitations to be aware of when using key-value storage:
Maximum key size - Key names cannot be longer than 64 bytes.
Maximum value size - You cannot store more than 64 kilobytes in a single value.
Maximum key-value store size for an app - Applications can only store up to 64 kilobytes of key-value data in total. Attempts to set keys beyond that limit will fail and the previous value will persist.
Data types - Only basic types like strings, numbers and booleans can be stored.
The iCloudKeyValue example demonstrates how it works. The sample code creates a key named for each device: you can set this key on one device and watch the value get propagated to others. It also creates a key called "Shared" which can be edited on any device - if you edit on many devices at once, iCloud will decide which value "wins" (using a timestamp on the change) and gets propagated.
This screenshot shows the sample in use. When change notifications are received from iCloud they are printed in the scrolling text view at the bottom of the screen and updated in the input fields.
Setting and retrieving data
This code shows how to set a string value.
var store = NSUbiquitousKeyValueStore.DefaultStore; store.SetString("testkey", "VALUE IN THE CLOUD"); // key and value store.Synchronize();
Calling Synchronize ensures the value is persisted to local disk storage only. The synchronization to iCloud happens in the background and cannot be "forced" by application code. With good network connectivity the synchronization will often happen within 5 seconds, however if the network is poor (or disconnected) an update may take much longer.
You can retrieve a value with this code:
var store = NSUbiquitousKeyValueStore.DefaultStore; display.Text = store.GetString("testkey");
The value is retrieved from the local data store - this method does not attempt to contact iCloud servers to get the "latest" value. iCloud will update the local data store according to its own schedule.
Deleting Data
To completely remove a key-value pair, use the Remove method like this:
var store = NSUbiquitousKeyValueStore.DefaultStore; store.Remove("testkey"); store.Synchronize();
Observing Changes
An application can also receive notifications when values are changed by
iCloud by adding an observer to the
NSNotificationCenter.DefaultCenter.
The following code from KeyValueViewController.cs
ViewWillAppear method
shows how to listen for those notifications and create a list of which keys have
been changed:
keyValueNotification = NSNotificationCenter.DefaultCenter.AddObserver ( NSUbiquitousKeyValueStore.DidChangeExternallyNotification, notification => { Console.WriteLine ("Cloud notification received"); NSDictionary userInfo = notification.UserInfo; var reasonNumber = (NSNumber)userInfo.ObjectForKey (NSUbiquitousKeyValueStore.ChangeReasonKey); nint reason = reasonNumber.NIntValue; var changedKeys = (NSArray)userInfo.ObjectForKey (NSUbiquitousKeyValueStore.ChangedKeysKey); var changedKeysList = new List<string> (); for (uint i = 0; i < changedKeys.Count; i++) { var key = changedKeys.GetItem<NSString> (i); // resolve key to a string changedKeysList.Add (key); } // now do something with the list... });
Your code can then take some action with the list of changed keys, such as updating a local copy of them or updating the UI with the new values.
Possible change reasons are: ServerChange (0), InitialSyncChange (1), or QuotaViolationChange (2). You can access the reason and perform different processing if required (for example, you might need to remove some keys as a result of a QuotaViolationChange).
Document Storage
iCloud Document Storage is designed to manage data that is important to your app (and to the user). It can be used to manage files and other data that your app needs to run, while at the same time providing iCloud-based backup and sharing functionality across all the user's devices.
This diagram shows how it all fits together. Each device has data saved on
local storage (the UbiquityContainer) and the operating system's iCloud Daemon
takes care of sending and receiving data in the cloud. All file access to the
UbiquityContainer must be done via FilePresenter/FileCoordinator to prevent
concurrent access. The
UIDocument class implements those for you; this
example shows how to use UIDocument.
The iCloudUIDoc example implements a simple
UIDocument subclass that
contains a single text field. The text is rendered in a
UITextView and
edits are propogated by iCloud to other devices with a notification message
shown in red. The sample code does not deal with more advanced iCloud features
like conflict resolution.
This screenshot shows the sample application - after changing the text and pressing UpdateChangeCount the document is synchronized via iCloud to other devices.
There are five parts to the iCloudUIDoc sample:
Accessing the UbiquityContainer - determine if iCloud is enabled, and if so the path to your application's iCloud storage area.
Creating a UIDocument subclass - create a class to intermediate between iCloud storage and your model objects.
Finding and opening iCloud documents - use
NSFileManagerand
NSPredicateto find iCloud documents and open them.
Displaying iCloud documents - expose properties from your
UIDocumentso that you can interact with UI controls.
Saving iCloud documents - ensure that changes made in the UI are persisted to disk and iCloud.
All iCloud operations run (or should run) asynchronously so that they don't block while waiting for something to happen. You will see three different ways of accomplishing this in the sample:
Threads - in
AppDelegate.FinishedLaunching the initial call to
GetUrlForUbiquityContainer is done on another thread to
prevent blocking the main thread.
NotificationCenter - registering for notifications when
asynchronous operations such as
NSMetadataQuery.StartQuery complete.
Completion Handlers - passing in methods to run on
completion of asynchronous operations like
UIDocument.Open.
Accessing the UbiquityContainer
The first step in using iCloud Document Storage is to determine whether iCloud is enabled, and if so the location of the "ubiquity container" (the directory where iCloud-enabled files are stored on the device).
This code is in the
AppDelegate.FinishedLaunching method of the sample.
// GetUrlForUbiquityContainer is blocking, Apple recommends background thread or your UI will freeze ThreadPool.QueueUserWorkItem (_ => { CheckingForiCloud = true; Console.WriteLine ("Checking for iCloud"); var uburl = NSFileManager.DefaultManager.GetUrlForUbiquityContainer (null); // OR instead of null you can specify "TEAMID.com.your-company.ApplicationName" if (uburl == null) { HasiCloud = false; Console.WriteLine ("Can't find iCloud container, check your provisioning profile and entitlements"); InvokeOnMainThread (() => { var alertController = UIAlertController.Create ("No \uE049 available", "Check your Entitlements.plist, BundleId, TeamId and Provisioning Profile!", UIAlertControllerStyle.Alert); alertController.AddAction (UIAlertAction.Create ("OK", UIAlertActionStyle.Destructive, null)); viewController.PresentViewController (alertController, false, null); }); } else { // iCloud enabled, store the NSURL for later use HasiCloud = true; iCloudUrl = uburl; Console.WriteLine ("yyy Yes iCloud! {0}", uburl.AbsoluteUrl); } CheckingForiCloud = false; });
Although the sample does not do so, Apple recommends calling GetUrlForUbiquityContainer whenever an app comes to the foreground.
Creating a UIDocument Subclass
All iCloud files and directories (ie. anything stored in the UbiquityContainer directory) must be managed using NSFileManager methods, implementing the NSFilePresenter protocol and writing via an NSFileCoordinator. The simplest way to do all of that is not to write it yourself, but subclass UIDocument which does it all for you.
There are only two methods that you must implement in a UIDocument subclass to work with iCloud:
LoadFromContents - passes in the NSData of the file's contents for you to unpack into your model class/es.
ContentsForType - request for you to supply the NSData representation of your model class/es to save to disk (and the Cloud).
This sample code from iCloudUIDoc\MonkeyDocument.cs shows how to implement UIDocument.
public class MonkeyDocument : UIDocument { // the 'model', just a chunk of text in this case; must easily convert to NSData NSString dataModel; // model is wrapped in a nice .NET-friendly property public string DocumentString { get { return dataModel.ToString (); } set { dataModel = new NSString (value); } } public MonkeyDocument (NSUrl url) : base (url) { DocumentString = "(default text)"; } // contents supplied by iCloud to display, update local model and display (via notification) public override bool LoadFromContents (NSObject contents, string typeName, out NSError outError) { outError = null; Console.WriteLine ("LoadFromContents({0})", typeName); if (contents != null) dataModel = NSString.FromData ((NSData)contents, NSStringEncoding.UTF8); // LoadFromContents called when an update occurs NSNotificationCenter.DefaultCenter.PostNotificationName ("monkeyDocumentModified", this); return true; } // return contents for iCloud to save (from the local model) public override NSObject ContentsForType (string typeName, out NSError outError) { outError = null; Console.WriteLine ("ContentsForType({0})", typeName); Console.WriteLine ("DocumentText:{0}",dataModel); NSData docData = dataModel.Encode (NSStringEncoding.UTF8); return docData; } }
The data model in this case is very simple - a single text field. Your data model can be as complex as required, such as an Xml document or binary data. The primary role of the UIDocument implementation is to translate between your model classes and an NSData representation that can be saved/loaded on disk.
Finding and Opening iCloud Documents
The sample app only deals with a single file - test.txt - so the code in
AppDelegate.cs creates an
NSPredicate and
NSMetadataQuery to look specifically
for that filename. The
NSMetadataQuery runs asynchronously and sends a
notification when it finishes.
DidFinishGathering gets called by the
notification observer, stops the query and calls LoadDocument, which uses the
UIDocument.Open method with a completion handler to attempt to load the file and
display it in a
MonkeyDocumentViewController.
string monkeyDocFilename = "test.txt"; void FindDocument () { Console.WriteLine ("FindDocument"); query = new NSMetadataQuery { SearchScopes = new NSObject [] { NSMetadataQuery.UbiquitousDocumentsScope } }; var pred = NSPredicate.FromFormat ("%K == %@", new NSObject[] { NSMetadataQuery.ItemFSNameKey, new NSString (MonkeyDocFilename) }); Console.WriteLine ("Predicate:{0}", pred.PredicateFormat); query.Predicate = pred; NSNotificationCenter.DefaultCenter.AddObserver ( this, new Selector ("queryDidFinishGathering:"), NSMetadataQuery.DidFinishGatheringNotification, query ); query.StartQuery (); } [Export ("queryDidFinishGathering:")] void DidFinishGathering (NSNotification notification) { Console.WriteLine ("DidFinishGathering"); var metadataQuery = (NSMetadataQuery)notification.Object; metadataQuery.DisableUpdates (); metadataQuery.StopQuery (); NSNotificationCenter.DefaultCenter.RemoveObserver (this, NSMetadataQuery.DidFinishGatheringNotification, metadataQuery); LoadDocument (metadataQuery); } void LoadDocument (NSMetadataQuery metadataQuery) { Console.WriteLine ("LoadDocument"); if (metadataQuery.ResultCount == 1) { var item = (NSMetadataItem)metadataQuery.ResultAtIndex (0); var url = (NSUrl)item.ValueForAttribute (NSMetadataQuery.ItemURLKey); doc = new MonkeyDocument (url); doc.Open (success => { if (success) { Console.WriteLine ("iCloud document opened"); Console.WriteLine (" -- {0}", doc.DocumentString); viewController.DisplayDocument (doc); } else { Console.WriteLine ("failed to open iCloud document"); } }); } // TODO: if no document, we need to create one }
Displaying iCloud Documents
Displaying a UIDocument shouldn't be any different to any other model class - properties are displayed in UI controls, possibly edited by the user and then written back to the model.
In the example iCloudUIDoc\MonkeyDocumentViewController.cs displays the
MonkeyDocument text in a
UITextView.
ViewDidLoad listens for the notification
sent in the
MonkeyDocument.LoadFromContents method.
LoadFromContents is called
when iCloud has new data for the file, so that notification indicates that the
document has been updated.
NSNotificationCenter.DefaultCenter.AddObserver (this, new Selector ("dataReloaded:"), new NSString ("monkeyDocumentModified"), null );
The sample code notification handler calls a method to update the UI - in this case without any conflict detection or resolution.
[Export ("dataReloaded:")] void DataReloaded (NSNotification notification) { doc = (MonkeyDocument)notification.Object; // we just overwrite whatever was being typed, no conflict resolution for now docText.Text = doc.DocumentString; }
Saving iCloud Documents
To add a UIDocument to iCloud you can call
UIDocument.Save directly (for new
documents only) or move an existing file using
NSFileManager.DefaultManager.SetUbiquitious. The example code creates a new
document directly in the ubiquity container with this code (there are two
completion handlers here, one for the
Save operation and another for the
Open):
var docsFolder = Path.Combine (iCloudUrl.Path, "Documents"); // NOTE: Documents folder is user-accessible in Settings var docPath = Path.Combine (docsFolder, MonkeyDocFilename); var ubiq = new NSUrl (docPath, false); var monkeyDoc = new MonkeyDocument (ubiq); monkeyDoc.Save (monkeyDoc.FileUrl, UIDocumentSaveOperation.ForCreating, saveSuccess => { Console.WriteLine ("Save completion:" + saveSuccess); if (saveSuccess) { monkeyDoc.Open (openSuccess => { Console.WriteLine ("Open completion:" + openSuccess); if (openSuccess) { Console.WriteLine ("new document for iCloud"); Console.WriteLine (" == " + monkeyDoc.DocumentString); viewController.DisplayDocument (monkeyDoc); } else { Console.WriteLine ("couldn't open"); } }); } else { Console.WriteLine ("couldn't save"); }
Subsequent changes to the document are not "saved" directly, instead we
tell the
UIDocument that it has changed with
UpdateChangeCount, and it will
automatically schedule a save to disk operation:
doc.UpdateChangeCount (UIDocumentChangeKind.Done);
Managing iCloud Documents
Users can manage iCloud documents in the Documents directory of the "ubiquity container" outside of your application via Settings; they can view the file list and swipe to delete. Application code should be able to handle the situation where documents are deleted by the user. Do not store internal application data in the Documents directory.
Users will also receive different warnings when they attempt to remove an iCloud-enabled application from their device, to inform them of the status of iCloud documents related to that application.
iCloud Backup
While backing up to iCloud isn't a feature that is directly accessed by developers, the way you design your application can affect the user experience. Apple provides iOS Data Storage Guidelines for developers to follow in their iOS applications.
The most important consideration is whether your app stores large files that are not user-generated (for example, a magazine reader application that stores hundred-plus megabytes of content per issue). Apple prefers that you do not store this sort of data where it will be backed-up to iCloud and unnecessarily fill the user's iCloud quota.
Applications that store large amounts of data like this should either store
it in one of the user directories that is not backed-up (eg. Caches or tmp) or
use
NSFileManager.SetSkipBackupAttribute to apply a flag to those files so that
iCloud ignores them during backup operations.
Summary
This article introduced the new iCloud feature included in iOS 5. It examined the steps required to configure your project to use iCloud and then provided examples of how to implement iCloud features.
The key-value storage example demonstrated how iCloud can be used to store a small amount of data similar to the way NSUserPreferences are stored. The UIDocument example showed how more complex data can be stored and synchronized across multiple devices via iCloud.
Finally it included a brief discussion on how the addition of iCloud Backup should influence your application. | https://docs.mono-android.net/guides/ios/platform_features/introduction_to_icloud/ | 2017-03-23T02:19:15 | CC-MAIN-2017-13 | 1490218186608.9 | [] | docs.mono-android.net |
100% garanti
Document: Preaching, Conversion, Ministering and Struggling against Hussites: The Mendicants' Missionary Activities and Strategies in Moldavia from the Thirteenth to the First Half of the Fifteenth Century. 17 pagesExtrait: This study aims at a new analysis of the edited sources concerning the Mendicants in Moldavia from the perspective of the new approaches of the mendicant orders' historiography. I will focus on the Mendicants' missionary goals and their strategies used to achieve them, in Moldavia from the thirteenth to the first half of the fifteenth century . Both Franciscans and Dominicans considered Moldavia as a country of mission, which means that they were interested in conversions, baptism and ministering to the people.
[...] The inquiry into these sources draws an interesting picture of the activities and strategies of the Mendicants in Moldavia in the first half of the fifteenth century. The Hussites were a problem for the Catholic Church throughout all Central and Eastern Europe. The papacy tried to counteract the Hussites' activities using the help of the Mendicant Orders and especially of the Observant Franciscans from the vicariate of Bosnia. Moldavia was an important point on the map of the Franciscans' activities in the region For the Franciscan activities against the Hussites, G. [...]
[...] The Dominicans applied this strategy in their attempts to convert the Cumans. They were successful, for example, when they convinced a Cuman chieftain, called Bortz, to want to be baptized The news about Bortz' conversion as a result of the Dominicans' preaching activities can be inferred from Pope Gregory IX's letter to the archbishop of Ezstergom. Theiner, Vetera monumenta historica Hungariam, doc p Bortz sent his son Burch together with 12 of his followers to the archbishop of Ezstergom and promised that if the archbishop went into Cumania, he, together with 2,000 men, would also receive baptism Albericus de Trois Fontaines, published in Monumenta Germaniae Historica, ed. [...]
[...] Rosetti, ?Despre unguri ºi episcopiile catolice,? p nota The appointed bishop came into Moldavia and tried to organize his bishopric. His attitude aroused a strong rivalry between him and the missionaries of the two mendicant orders 1.The inconveniences of a strategy: the rivalry between the Mendicants and the bishops of Baia The rivalry between the Mendicants and the local bishops was a general pattern all over Europe In the same years a conflict started between the bishop of Lviv and the Mendicants. [...]
[...] A., doc p Furthermore, in 1384 the Dominicans received from the voievode the privilege to gather the income of the custom duties of the town of Siret Ibid. Quantenus libram seu pensatorium quod est in civitate nostra predicta Cerethensi, praedictis fratribus praedicatoribus, dictae ecclesiae deservientibus, simpliciter dare et concedere dignaremur . The conversion of the local ruler was still considered a valid option by the Mendicants as can be inferred from the statement of archbishop of Sultanieh who wrote in his book, Liber de notitia orbis about the conversion made by a Dominican of a ruler and his mother called Margaret Dominus ipsorum aliquando conversus fuit ad fidem nostram Catholicam et specialiter mater sua domina Margarita per unum fratrem Predicatorem vicarium generalem illarum partium. [...]
[...] Moisescu, Catolicismul în Moldova, pp. 69-70. Regarding the papal interest in the conversion of Latcu, ªerban Papacostea argues that the papal policy towards Moldavia was in contradiction with the Hungarian one because giving an independent bishopric meant the recognition of the independence of the state. Papacostea, Geneza statului, p According to A. A. Vasiliev, the papal policy was related to its attempts to unite the churches after the conversion of the Emperor of Constantinople to Catholicism in 1369. Vasiliev, viaggio,? pp. [...]
Enter the password to open this PDF file:
-
Consultez plus de 91303 études en illimité sans engagement de durée. Nos formules d'abonnement | https://www.docs-en-stock.com/histoire-et-geographie/mendicants-missionary-activities-strategies-in-moldavia-from-xiii-th-first-85532.html | 2017-03-23T02:36:37 | CC-MAIN-2017-13 | 1490218186608.9 | [] | www.docs-en-stock.com |
Translating the theme
Contents
Instructions
Prerequisites
Make sure you have a copy of the theme on your computer.
- You would need to have a site (either in localhost or on your server) with the theme installed so that you can test the translation.
- Configure your WordPress site to run in the language you are translating the theme for. For more information, see Installing WordPress in Your Language.
- Go to the POEdit website and download the latest version of POEdit. This free, open-source software will make the translation process a whole lot easier.
- Install POEdit, and run the program.
Creating the translation file
If a translation is already available for your language, skip this section and jump straight to the Editing existing translation section.
- In POEdit, click File > New catalog from POT file... . Then, browse to the
graphene/languagesfolder and select the
graphene.potfile, and click Open.
- A Settings window should pop up in POEdit. Fill up the Project Info tab with the appropriate info. You can ignore the other two tabs (Paths and Keywords), as these have already been setup for you in that
graphene.potfile. Then click OK.
- Another pop-up should appear asking you where to save the file. Save it in the
graphene/languagesfolder, and give it the specific name for the language you're translating the theme into. WordPress uses specific strings to identify languages, which you can find out in the WordPress in Your Language Codex page. For example, French language translation file should be named
fr_FR.po, whereas the German translation file should be named
de_DE.po.
- You're now all set up and can begin translating! See the screenshot at the end of this page for a quick view of what's where.
Note that you don't have to translate all the strings at one go. You can just save at any point in the translation process, and pick up where you left afterwards.
Editing existing translation
If there's already a translation available for your language, it's much better to continue with editing the existing language file rather than creating a new one. This way, you can just translate the new strings that haven't been translated, and not re-translate all the other strings.
- Open up the existing language file in POEdit. Note that you can only open the
.pofile with POEdit. The
.mofile is generated by POEdit from the
.pofile. It is a machine object file which is not human-readable.
- In POEdit, click Catalog > Update from POT file... . Then, browse to the
graphene/languagesfolder and select the
graphene.potfile, and click Open. This will update your translation file with the latest strings from the theme.
- You're now all set up and can continue the translation! See the screenshot at the end of this page for a quick view of what's where.
Note that you don't have to translate all the strings at one go. You can just save at any point in the translation process, and pick up where you left afterwards. | http://docs.graphene-theme.com/Translating_the_theme | 2017-03-23T02:10:08 | CC-MAIN-2017-13 | 1490218186608.9 | [] | docs.graphene-theme.com |
SeamFramework.orgCommunity Documentation
Interceptors are a powerful way to capture and separate concerns which are orthogonal to the type system. Any interceptor is able to intercept invocations of any Java type. This makes them perfect for solving technical concerns such as transaction management and security.. This makes decorators a perfect tool for modeling some kinds of business concerns. It also means that a decorator doesn't have the generality of an interceptor. Decorators aren't able to solve technical concerns that cut across many disparate types.
Suppose we have an interface that represents accounts:
public interface Account {
public BigDecimal getBalance();
public User getOwner();
public void withdraw(BigDecimal amount);
public void deposit(BigDecimal amount);
}
Several different Web Beans in our system implement the
Account interface. However, we have a common legal
requirement that, for any kind of account, large transactions must be
recorded by the system in a special log. This is a perfect job for a
decorator.
A decorator is a simple Web Bean that implements the type it
decorates and is annotated
@Decorator.
@Decorator
public abstract class LargeTransactionDecorator
implements Account {
@Decor) );
}
}
}
Unlike other simple Web Beans, a decorator may be an abstract class. If there's nothing special the decorator needs to do for a particular method of the decorated interface, you don't need to implement that method.
All decorators have a delegate attribute. The type and binding types of the delegate attribute determine which Web Beans the decorator is bound to. The delegate attribute type must implement or extend all interfaces implemented by the decorator.
This delegate attribute specifies that the decorator is bound to
all Web Beans that implement
Account:
@Decorates Account account;
A delegate attribute may specify a binding annotation. Then the decorator will only be bound to Web Beans with the same binding.
@Decorates @Foreign Account account;
A decorator is bound to any Web Bean which:
has the type of the delegate attribute as an API type, and
has all binding types that are declared by the delegate attribute.
The decorator may invoke the delegate attribute, which has much the same
effect as calling
InvocationContext.proceed() from an
interceptor.
We need to enable our decorator in
web-beans.xml.
<Decorators>
<myapp:LargeTransactionDecorator/>
</Decorators>.
Interceptors for a method are called before decorators that apply to that method. | http://docs.jboss.org/webbeans/reference/1.0.0.PREVIEW1/en-US/html/decorators.html | 2014-12-18T17:50:08 | CC-MAIN-2014-52 | 1418802767301.77 | [] | docs.jboss.org |
: a set of guidelines for IzPack developers and contributors.
- Executing and writing tests : How to use test tools to write efficient tests. | http://docs.codehaus.org/pages/viewpage.action?pageId=143228986 | 2014-12-18T17:35:01 | CC-MAIN-2014-52 | 1418802767301.77 | [] | docs.codehaus.org |
i. Abstract
OGC® Catalogue Services support the ability to publish and search collections of descriptive information (metadata records) for geospatial data, services, and related information. Metadata in catalogues represent resource characteristics that can be queried and presented for evaluation and further processing by both humans and software. Catalogue services are required to support the discovery and binding to registered information resources within an information community.
This part of the Catalogue Services standard describes the common architecture for OGC Catalogue Services. This document abstractly specifies the interfaces between clients and catalogue services, through the presentation of abstract models. This common architecture is Distributed Computing Platform neutral and uses UML notation. Separate (Part) documents, which build upon this document, specify the protocol bindings for these Catalogue services, namely the HTTP (also known as CSW) and OpenSearch protocol bindings.
An Abstract Conformance Test Suite is not included in this document. Such Suites shall be developed by protocol bindings and Application Profiles (see 8.5, ISO/IEC TR 10000-2:1998) that realize the conformance classes listed herein. An application profile consists of a set of metadata elements, policies, and guidelines defined for a particular application[1].
OGC document number 14-014 – HTTP Protocol Binding – Abstract Test Suite is available to address conformance with the provisions of OGC document number 12-176r2 – HTTP Protocol Binding. All annexes to this document are informative.
ii. Keywords
The following are keywords to be used by search engines and document catalogues.
OGC Catalogue Services, metadata, geospatial data, geospatial services, search, discovery, abstract model, general model, HTTP, CSW, OpenSearch, Abstract Conformance Test Suite, ogcdoc, OGC document, asynchronous, catalogue, CQL, client, csw:Record, distributed, Dublin Core, federated, filter, GetCapabilities, GetDomain, GetRecords, GetRecordById, Harvest, http, https, KVP, metadata, record, request, resource, response, server, schema, spatial, temporal, Transaction, UnHarvest, XML, XML-Schema.
iii. Preface
This document is one part of the OGC® Catalogue Services version 3.0 Implementation Standard. Unlike previous versions, Catalogue 3.0 is now divided into multiple parts, with this part specifying the abstract model and another describing the HTTP protocol binding known as Catalogue Service for the Web (CSW).
This version of the Catalogue Standard has been significantly improved, largely based on change requests submitted by both Open Geospatial Consortium (OGC) members and the public. The changes made in this version relative to version 2.0.2 (OGC document 07-006r1) are summarized in Annex B. The following organizations contributed to the development of this version of the standard:
- con terra GmbH
- National Research Council of Italy (CNR)
- Cubewerx Inc.
- Intergraph Corporation
- Joint Research Centre (JRC), European Commission
- U.S. Geological Survey
Earlier versions of this Standard were submitted to the OGC by the following organizations:
- BAE SYSTEMS Mission Solutions (formerly Marconi Integrated Systems, Inc.)
- Blue Angel Technologies, Inc.
- Environmental Systems Research Institute (ESRI)
- Geomatics Canada (Canada Centre for Remote Sensing (CCRS)
- Intergraph Corporation
- MITRE
- Oracle Corporation
- U.S. Federal Geographic Data Committee (FGDC)
- U.S. National Aeronautics and Space Administration (NASA)
- U.S. National Imagery and Mapping Agency (NIMA)
iv. Submitters
All questions regarding this submission should be directed to the editor or the submitters:
1. Scope
This document abstractly specifies the interfaces between clients and catalogue services, through the presentation of abstract models. Protocol bindings that realize these abstract interfaces are specified in separate parts of this standard.
2. Conformance
Conformance to the mandatory catalogue service abstract interfaces is described in section 8. It is the responsibility of protocol-specific bindings and application profiles to provide concrete tests and validation in conformance with these abstract conformance classes. Test data and queries may be included in Application Profiles associated with this abstract model and with specific protocol bindings.
3. Normative References
The following normative documents contain provisions that, through reference in this text, constitute provisions of this document. For dated references, subsequent amendments to, or revisions of, any of these publications do not apply. For undated references, the latest edition of the normative document referred to applies.
- IETF RFC 2045 (November 1996), Multipurpose Internet Mail Extensions (MIME) Part One: Format of Internet Message Bodies, Freed, N. and Borenstein N., eds.,
- IETF RFC 2141 (May 1997), URN Syntax, R. Moats,
- IETF RFC 2396 (August 1998), Uniform Resource Identifiers (URI): Generic Syntax, Berners-Lee, T., Fielding, R., and Masinter, L., eds.,
- IANA, Internet Assigned Numbers Authority, MIME Media Types, available at
- ISO/IEC 8825:1990, Information technology – Open Systems Interconnection – Specification of Basic Encoding Rules for Abstract Syntax Notation One (ASN.1)
- ISO/IEC TR 10000-1:1998. Information Technology – Framework and taxonomy of International Standardised Profiles – Part 1: General principles and documentation framework. Technical Report, JTC 1. Fourth edition, Available [online]:.
- ISO/IEC 10746-2:1996. Information Technology – Open Distributed Processing – Reference Model: Foundations. Common text with ITU-T Recommendation X.902, Available [online]:.
- ISO 8601:2000(E), Data elements and interchange formats - Information interchange - Representation of dates and times
- ISO 19101:2002, Geographic information – Reference model
- ISO 19103 (DTS), Geographic information – Conceptual schema language, (Draft Technical Specification)
- ISO 19106:2003, Geographic Information – Profiles
- ISO 19108:2002, Geographic information – Temporal schema
- ISO 19109:2002 (DIS), Geographic information – Rules for application schema
- ISO 19110:2001 (DIS), Geographic information – Methodology for feature cataloguing
- ISO 19113:2002, Geographic information – Quality principles
- ISO 19114:2001, (DIS) Geographic information – Quality evaluation procedures
- ISO 19118:2002, (DIS) Geographic information – Encoding
- ISO/IEC 14977:1996, Information technology – Syntactic metalanguage – BNF
- ISO 19115:2003, Geographic Information – Metadata
- ISO 19119:2005, Geographic Information – Services
- ISO/TS 19139:2007, Geographic Information – Metadata -Implementation Specification
- OASIS/ebXML Registry Services Specification v2.5
- OGC 99-113, OGC Abstract Specification Topic 13: Catalogue Services
- OGC 02-112, OGC Abstract Specification Topic 12: OpenGIS Service Architecture
- OGC 09-026r1, OGC Filter Encoding 2.0 Encoding Standard,
- OGC 06-121r9, OGC Web Service Common Implementation Specification, Version 2.0.0
- OMG UML, Unified Modeling Language, Version 1.3, The Object Management Group (OMG):
- OGC 12-176r2, OGC® Catalogue Services specification – HTTP protocol binding (v3.0.0)

4. Terms and Definitions

- 4.1 client

software component that can invoke an operation from a server
- 4.2 data clearinghouse
collection of institutions providing digital data, which can be searched through a single interface using a common metadata standard [ISO 19115]
- 4.3 data level
stratum within a set of layered levels in which data is recorded that conforms to definitions of types found at the application model level [ISO 19101]
- 4.4 dataset series
collection of datasets sharing the same product specification [ISO 19113, ISO 19114, ISO 19115]
- 4.5 feature catalogue
catalogue containing definitions and descriptions of the feature types, feature attributes, and feature relationships occurring in one or more sets of geographic data, together with any feature operations that may be applied [ISO 19101, ISO 19110]
- 4.6 geographic dataset
dataset with a spatial aspect [ISO 19115]
- 4.7 geographic information
information concerning phenomena implicitly or explicitly associated with a location relative to the Earth [ISO 19128 draft]
- 4.8 identifier
a character string that may be composed of numbers and characters that is exchanged between the client and the server with respect to a specific identity of a resource
- 4.9 interface
named set of operations that characterize the behaviour of an entity [ISO 19119]
- 4.10 metadata dataset
metadata describing a specific dataset [ISO 19101]
- 4.11 metadata entity
group of metadata elements and other metadata entities describing the same aspect of data
NOTE 1 A metadata entity may contain one or more metadata entities.
NOTE 2 A metadata entity is equivalent to a class in UML terminology [ISO 19115].
- 4.12 metadata schema
conceptual schema describing metadata
NOTE ISO 19115 describes a standard for a metadata schema. [ISO 19101]
- 4.13 metadata section
subset of metadata that defines a collection of related metadata entities and elements [ISO 19115]
- 4.14 operation
specification of a transformation or query that an object may be called to execute [ISO 19119]
- 4.15 parameter
variable whose name and value are included in an operation request or response
- 4.16 profile
set of one or more base standards and - where applicable - the identification of chosen clauses, classes, subsets, options and parameters of those base standards that are necessary for accomplishing a particular function [ISO 19101, ISO 19106]
- 4.17 qualified name
name that is prefixed with its naming context
EXAMPLE The qualified name for the road no attribute in class Road defined in the Roadmap schema is RoadMap.Road.road_no. [ISO 19118]
- 4.18 request
invocation of an operation by a client
- 4.19 response
result of an operation, returned from a server to a client
- 4.20 resource
an object or artefact that is described by a record in the information model of a catalogue
- 4.21 schema
formal description of a model [ISO 19101, ISO 19103, ISO 19109, ISO 19118]
- 4.22 server
service instance
a particular instance of a service [ISO 19119 edited]
- 4.23 service
distinct part of the functionality that is provided by an entity through interfaces [ISO 19119]
capability which a service provider entity makes available to a service user entity at the interface between those entities [ISO 19104 terms repository]
- 4.24 service interface
shared boundary between an automated system or human being and another automated system or human being [ISO 19101]
- 4.25 service metadata
metadata describing the operations and geographic information available at a server [ISO 19128 draft]
- 4.26 state
condition that persists for aperiod
NOTE The value of a particular feature attribute describes a condition of the feature [ISO 19108].
- 4.27 transfer protocol
common set of rules for defining interactions between distributed systems [ISO 19118]
- 4.28 version
version of an Implementation Specification (document) and XML Schemas to which the requested operation conforms
NOTE An OWS Implementation Specification version may specify XML Schemas against which an XML encoded operation request or response shall conform and should be validated.
5. Conventions
5.1 Symbols (and abbreviated terms)
All symbols used in this document are either:
- Common mathematical symbols; or
- UML 2 (Unified Modeling Language) as defined by OMG and accepted as a publicly available standard (PAS) by ISO in its earlier 1.3 version.
In this document the following abbreviations and acronyms are used or introduced:
- BNF: Backus Naur Form
- CSW: Catalogue Services for the Web
- HTTP: Hypertext Transfer Protocol
- ISO: International Organization for Standardization
- MIME: Multipurpose Internet Mail Extensions
- OGC: Open Geospatial Consortium, also referred to as OGC®
- UML: Unified Modeling Language
- XML: Extensible Markup Language
5.2 UML notation
All UML diagrams in this document follow the guidance as documented in OGC OWS Common 2.0 section 5.2.
5.3 XML Schema
The following notations are used in XML Schema fragment presented in this document:
- Brackets ([]) are used to denote constructs that can be optionally specified. In the following example:
<xsd:element name=“MyElement” minOccurs=“0” [maxOccurs=“1”]>
the brackets around maxOccurs=“1” mean that this construct is optional and can be omitted
5.4 URN notation
All requirements listed in this document are relative to the root URL: . Wherever there is a stated requirement and the word “req” is shown, “req” can be replaced with to define the complete requirement URL.
6. Catalogue abstract information model
6.1 Introduction
The abstract information model specifies a BNF grammar for a minimal query language, a set of core queryable[2] attributes (names, definitions, conceptual datatypes), and a common record format that defines the minimal set of elements that should be returned in the brief and summary element sets.
The geospatial community is very broad and works in many different operational environments, as shown in the information discovery continuum in Figure 1 - Information discovery continuum. On one extreme there are tightly coupled systems dedicated to well-defined functions in a tightly controlled environment. At the other extreme are Web based services that know nothing about the client. This document provides a specification that is applicable to the full range of catalogue operating environments.
6.2 Query language support
6.2.1 Introduction

Multiple query languages are recognized and allowed among communities of catalogue providers and users. This flexibility is provided by the query operation that contains the parameters needed to select the query result presentation style and to provide a query expression that includes the actual query with an identification of the query language used. The query operation, query expression, and other related operations are further discussed in Clause 7.2.4.
The interoperability goal is supported by the specification of a minimal abstract query (predicate) language, which shall be supported by all compliant OGC Catalogue Services. This query language supports Boolean queries, text matching operations, temporal data types, and geospatial operators. The minimal query language syntax is based on the SQL WHERE clause in the SQL SELECT statement. The OGC Filter Specification is an implementation of a query language that is transformable to the OGC Catalogue Common Query Language (OGC CommonQL).
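For illustration only, a predicate in this WHERE-clause style might combine a text matching operation, a temporal predicate, and a geospatial operator:

Title LIKE '%hydrography%' AND Modified AFTER 2015-01-01T00:00:00Z AND BBOX(BoundingBox, -95.2, 41.6, -74.3, 56.9)

The queryable names (Title, Modified, BoundingBox) are placeholders rather than names mandated by this abstract model, and the operator spellings are indicative only; the authoritative grammar is the OGC CommonQL BNF introduced in 6.2.2.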
This minimal query language assists the consumer in the discovery of datasets of interest at all sites supporting the OGC Catalogue Services. The ability to specify alternative query languages allows for evolution and higher levels of interoperability among more tightly coupled communities of Catalogue Service Providers and Consumers.
6.2.2 OGC Catalogue Common Query Language (OGC CommonQL)
This sub-clause defines the OGC_Catalogue Common Query Language (OGC CommonQL) (BNF to be found in 9). OGC_CommonQL is the primary query language to be supported by multiple OGC Catalogue Service bindings in order to support search interoperability.
Assumptions made during the development of OGC CommonQL:
- The query will have syntax similar to the SQL “Where Clause.”
- The expressiveness of the query will not require extensions to various current query systems used in geospatial catalogue queries other than the implementation of some geo operators.
- The query language is extensible.
- OGC CommonQL supports both tight and loose queries. A tight query is defined for the case when a catalogue doesn’t support an attribute/column specified in the query, no entity/row can match the query and the null set is returned. In a loose query, if an attribute is undefined, it is assumed to match.
6.2.3 Extending the OGC CommonQL
The OGC CommonQL BNF can be extended by adding new predicates, operations, and datatypes. The following discussion is an example of extending the BNF to include a CLASSIFIED-AS operator using the patterns identified in OASIS/ebXML Registry Services Specification v2.5. This extension could appear in a protocol binding or an Application Profile.
This standard makes no assumptions about how taxonomies are maintained in a catalogue, or how records are classified according to those taxonomies. Instead, this specification defines a routine, CLASSIFIED-AS, in order to support classification queries based on taxonomies.
The CLASSIFIED-AS routine takes three arguments. The first argument is the abstract entry point whose classification is being checked. The second argument is the key name string that represents a path expression in the taxonomy. The last argument is the key value string that represents the corresponding path expression containing key values that are the targets of the query. In both cases, the first element of the path expression for the key name argument and key value arguments shall be the name of the taxonomy being used. The normal wildcard matching characters, ‘_’ for a single character and ‘%’ for zero or more characters, may be used in the key value expression which is the last argument of the CLASSIFIED_AS routine.
The following set of productions defines the CLASSIFIED-AS routine.
/* The following example: */ /* */ /* RECORD CLASSIFIED AS CLASSIFICATIONSCHEME=’GeoClass’ */ /* =’/GeoClass/North America/%/Ontario’ */ /* */ /* Will find all records in all the Ontario’s in North America. */
The following are the required BNF specializations:
<classop argument list> ::= <left paren> <entry_point> <comma> <Classification Scheme> <comma><Classification Node> <right paren> <entry_point> ::= <identifier> <Classification Scheme> ::= <identifier> <classop name> ::= CLASSIFIED_AS <Classification Node> ::= <identifier> | <solidus><path element>[<solidus><path element>]… <path element> ::= <character pattern> <routine invocation> ::= | <geoop name><georoutine argument list> | <relgeoop name><relgeoop argument list> | <routine name><argument list> | <classop><classop argument list>
Consider the following example:
CLASSIFIED_AS(’RECORD’, ‘GeoClass’, ‘GeoClass/NorthAmerica/%/Ontario’)
In this example, we are searching records classified according to the GeoClass taxonomy. Specifically, we are looking for all catalogue records classified as Continent=NorthAmerica, Country=any country and State=Ontario. Notice how the wildcard character ‘%’ is used to search for any Country node.
Here is the same example encoded using XML:
<ogc:Filter xmlns:ogc=“”> <ogc:ClassifiedAs> <ogc:TypeName>csw:Record</ogc:TypeName> <ogc:ClassificationScheme>GetClass</ogc:ClassificationScheme> <ogc:ClassificationNode>/GeoClass/NorthAmerica/%/Ontario </ogc:ClassificationNode> </ogc:ClassifiedAs> </ogc:Filter>
In order for catalogue clients to be able to determine which taxonomies are available, a catalogue implementation should advertise the list of available taxonomies in its capabilities document. If a query is executed against a non-existent taxonomy, then an exception should be raised.
6.2.4 Query language realization
Many OGC service operations have the requirement to pass and process a query as a structure to perform a request. There are several query languages and messaging mechanisms identified within OGC standards. Application Profiles should be explicit about the selected query languages and any features peculiar to a scope of application. The following items should be addressed in the preparation of an Application Profile with respect to query language support.
Support for “abstract” query against well-known queryable entry points (e.g. OGC Core). Some standards promote or require the exposure of well-known field-like objects as common search targets (queryables), allowing interrogation of a service without prior negotiation on information content. The mandatory queryable attributes which shall be recognized by all OGC Catalogue Services are discussed in Subclause 6.3.2.
Selection of a query language. Some standards describe one or more query languages that can be supported. Identify the name and version of required query language(s) anticipated by this Application Profile for use.
Supported data types (e.g. character, integer, coordinate, date, geometry) and operator types (e.g. inequality, proximity, partial string, spatial, temporal). Query languages may be restricted in their implementation or extended with functions not described in the base standard. This narrative should provide lists or reference documents with the enumerated data types and operator types required by this Application Profile. In addition, any description of special techniques (e.g. supporting joins or associations) that are expected by an Application Profile should be described.
6.3 Core catalogue schema
6.3.1 Introduction
Metadata structures, relationships, and definitions – known as conceptual schemas – exist for multiple information communities. For the purposes of interchange of information within an information community, a metadata schema may be defined that provides a common vocabulary which supports search, retrieval, display, and association between the description and the object being described. Although this standard does not require the use of a specific schema, the adoption of a given schema within an information-sharing community ensures the ability to communicate and discover information.
The geomatics standardization activity in ISO Technical Committee 211 include formal schemas for geospatial metadata that are intended to apply to all types of information. These metadata standards, ISO 19115:2003[4] and ISO 19115-1:2014[5] include proposals for core (discovery) metadata elements in common use in the geospatial community. ISO/TS 19139:2007 defines a formal encoding and structure of ISO 19115:2003 metadata for exchange. Where a catalogue service advertises such application schemas, catalogues that handle geographic dataset descriptions should conform to published metadata standards and encodings, e.g. ISO 19115:2003, and support XML encoding per ISO 19139 or profiles thereof. Service metadata elements should be consistent with ISO 19119[6] or 19115:2014[7].
6.3.2 Core queryable properties
The goal of defining core queryable properties is query interoperability among catalogues that implement the same protocol binding and query compatibility among catalogues that implement different protocol bindings, perhaps through the use of “bridges” or protocol adapters. Defining a set of core queryable properties also enables simple cross-profile discovery, where the same queries can be executed against any catalogue service without modification and without detailed knowledge of the catalogue’s information model. This requires a set of general metadata properties that can be used to characterize any resource.
Tables 1, 2 and 3 define a set of abstract queryables that binding protocols shall realize in their core queryable schemas. Binding protocols shall further specify a record identifier (ID) based on the native platform ID types. Binding protocols shall also specify how the values of core queryable properties shall be encoded in service requests. Binding protocols may choose to use a single comma-separated list for compound datatypes or may label each sub-element for clarity and order flexibility. Application profiles may further modify or redefine the realization of the core queryables and how their values are encoded.
All realizations of the core queryable properties in a binding protocol shall include all the properties listed in Tables 1, 2, or 3 even if the underlying information model does not include information that can be mapped into all properties. Core properties that cannot have a value assigned to them because the information is not available in the information model of the catalogue shall be considered as having a value of NULL.
The properties “Title”, “Identifier” and the pseudo-property “AnyText” shall be supported as mandatory queryables in all implementations. Protocol bindings shall describe mechanisms to identify and elaborate on the queryables and operations supported by a given catalogue service.
6.3.3 Core returnable properties
A set of core properties returned from a metadata search is encouraged to permit the minimal implementation of a catalogue service independent of a companion application profile, and to permit the use of metadata returned from different systems and protocol bindings. The core metadata is returned as a request for the Common Element Set. The Common Element Set is a new group of public metadata elements, expressed using the nomenclature and syntax of Dublin Core Metadata, ISO 15836. Table 4 provides some interpretation of Dublin Core elements in the context of metadata for geospatial data and services.
The core elements are recommended for a response but do not need to be populated. The support for a common syntax for the returnable properties as a “common” Summary Element Set is defined in the protocol binding clauses.
<?xml version=“1.0” encoding=“UTF-8”?> =" ../../../csw/2.0.2/CSW-discovery.xsd"> :PropertyIsEqualTo> <ogc:PropertyName>dc:type</ogc:PropertyName> <ogc:Literal>Service</ogc:Literal> </ogc:PropertyIsEqualTo> <ogc:PropertyIsGreaterThanOrEqualTo> <ogc:PropertyName>dct:modified</ogc:PropertyName> <ogc:Literal>2004-03-01</ogc:Literal> </ogc:PropertyIsGreaterThanOrEqualTo> >
The response to such a query, might be:
<?xml version=“1.0” encoding=“UTF-8”?> <csw:Record xmlns: ) based on 30m horizontal and 15m vertical accuracy.</dct:abstract> <dc:identifier>ac522ef2-89a6-11db-91b1-7eea55d89593</dc:identifier> <dc:relation>OfferedBy</dc:relation> <dc:source>? SERVICE=CSW&REQUEST=GetRecordById&RECORD=dd1b2ce7-0722-4642-8cd4-6f885f132777</dc:source> <dc:rights>Copyright © 2011, State of Texas</dc:rights> <dc:type>Service</dc:type> <dc:title>Elevation Mapping Service for Texas</dc:title> <dct:modified>2011>
6.3.4 Information structure and semantics
Some services that implement OGC Standards expect a rigid syntax for the information resources to be returned, whereas others do not. This subclause allows an Application Profile to be specific about what information content, syntax, and semantics are to be communicated over the service. The following items should be addressed in an Application Profile.
- Identify information resource types that can be requested. In the case of a catalogue service, the information resources being described by the metadata may include geographic data, imagery, services, controlled vocabularies, or schemas among a wide variety of possible types. This subclause allows the community to specify or generalise the resource types being described in metadata for their scope of application.
- Identify a public reference for the information being returned by the service (e.g. ISO 19115:2003 “Geographic Information – Metadata “). Include any semantic resources including data content model, dictionary, feature type catalogue, code lists, authorities, taxonomies, etc.
- Identify named groups of properties (element sets) that may be requested of the service (e.g. “brief,” “summary,” or “full”) and the valid format (syntax) for each element set. Identify valid schema(s) with respect to a given format to assist in the validation of response messages.
- Specialise the core queryable properties list by making some optional queryable attributes mandatory, deleting other optional attributes and adding queryable attributes that should be standard across all profile users
- Optional mapping of queryable and retrievable properties against other public metadata models or tags.
- Expected response/results syntax and content Message syntax and schemas (e.g. brief/full, individual elements).
7. General catalogue interface model
7.1 Introduction
The General Catalogue Interface Model (GCIM) provides a set of abstract service interfaces that support the discovery, access, maintenance and organization of catalogues of geospatial information and related resources. The interfaces specified are intended to allow users or application software to find information that exists in multiple distributed computing environments, including the World Wide Web (WWW) environment.
Implementation design guidance is included in specified protocol binding Parts of this standard. Each protocol binding includes a mapping from the general interfaces, operations, and parameters specified in this clause to the constructs available in a chosen protocol. In most, but not all, protocol bindings, there may be restrictions or refinements on implementation of the General Model agreed within an implementation community. This sub-clause provides an overview of the portions of the GCIM that are realised by implementations described in other Catalogue Service Part documents.
Application profiles are intended to further document implementation choices. An Application Profile is predicated on the existence of one protocol binding as a Part of this standard.
Figure 2 - Reference model architecture shows the Reference Architecture assumed for development of the OGC Catalogue Interface. The architecture is a multi-tier arrangement of clients and servers. To provide a context, the architecture shows more than just catalogue interfaces. The bold lines illustrate the scope of OGC Catalogue.
The Application Client shown in Figure 2 - Reference model architecture interfaces with the Catalogue Service using the OGC Catalogue Interface. The Catalogue Service may draw on one of three sources to respond to the Catalogue Service request: a Metadata Repository local to the Catalogue Service, a Resource service, or another Catalogue Service. The interface to the local Metadata Repository is internal to the Catalogue Service. The interface to the Resource service can be a private or OGC Interface. The interface between Catalogue Services is the OGC Catalogue Interface. In this case, a Catalogue Service is acting as both a client and server. Data returned from an OGC Catalogue Service query is processed by the requesting Catalogue Service to return the data appropriate to the original Catalogue request. See Annex A for more about Distributed Searching.
7.2 Interface definitions
7.2.1 Overview
Figure 3 - General OGC catalogue UML static model is a general UML model of OGC catalogue service interfaces, in the form of a class diagram. Operation signatures have been suppressed in this figure for simplicity but are described in detail below. This model shows the Catalogue Service class plus five other classes with which that class are associated. A Catalogue Service is a realization of an OGC Service. Each instance of the Catalogue Service class is associated with one or more of these other classes, depending on the abilities included in that service instance. Each of these other classes defines one or several related operations that can be included in a Catalogue Service class instance. The Catalogue Service class directly includes only the serviceTypeID attribute, with a fixed value for the service type.
In Figure 3 - General OGC catalogue UML static model, an instance of the CatalogService type is a composite object that is a high-level characterization of a catalogue service. Its constituent objects are themselves components that provide functional behaviours to address particular areas of concern. A protocol binding may realise specific configurations of these components to serve different purposes (e.g. a read-only catalogue for discovery, or a transactional catalogue for discovery and publication).
The associated classes shown in this figure are mandatory or optional for implementation as indicated by the association multiplicity in the UML diagram. Therefore, a compliant catalogue service shall implement the OGC_Service, CatalogService, and Discovery classes. An application profile or protocol binding can implement additional classes associated with the Catalogue Service class. A catalogue implementation shall recognise all operations defined within each included class, and shall generate a message indicating when a particular operation is not implemented.
The protocol binding clauses of this standard provide more detail on the implementation of these conceptual interfaces. For example, the names of the classes and operations in this general UML model are changed in some of the protocol bindings. The names of some operation parameters are also changed in some protocol bindings.
Application Profiles may further specialise the implementation of these interfaces and their operations, including adding classes. In general, however, the interfaces and operations described here shall have the same semantics and granularity of interaction regardless of the protocol binding used.
The Catalogue Service class can be associated with the following classes.
a) OGC_Service class, which provides the getCapabilities operation that retrieves catalogue service metadata and the getResourceById operation that will retrieve an object by query on its identifier only. This class is always realised by the Catalogue Service class, and is thus always implemented by a Catalogue Service implementation.
b) Discovery class, which provides three operations for client discovery of resources registered in a catalogue. This class has a required association from the Catalogue Service class, and is thus always implemented by a Catalogue Service implementation. The “query” operation searches the catalogued metadata and produces a result set containing references to all the resources that satisfy the query. This operation returns metadata for some or all of the found result set. The optional describeRecordType operation retrieves the type definition used by metadata of one or more registered resource types. The optional getDomain operation retrieves information about the valid values of one or more named metadata properties.
c) Manager class, which provides two operations for inserting, updating, and deleting the metadata by which resources are registered in a catalogue. This class has an optional association from the Catalogue Service class; this interface is implemented by the Catalogue Service implementation. The transaction operation performs a specified set of “insert”, “update”, and “delete” actions on metadata items stored by a Catalogue Service implementation—this enables a “push” style of publication. The harvestResource operation requests the Catalogue Service to retrieve resource metadata from a specified location, often on a regular basis—this behaviour reflects a ‘pull’ style of publication.
The three classes associated with the Catalogue Service class allow different OGC catalogue services to provide significantly different abilities. A particular protocol binding is used by each Application Profile and a particular set of these catalogue service classes is specified by each Application Profile.
Each of the catalogue classes is described further in the following subclauses. These subclauses discuss the operations and parameters of each operation in this general model. Specific protocol bindings or application profiles can define additional parameters. For example, the HTTP Protocol Binding adds the Service, Request, and Version parameters to all operation requests to be consistent with other OGC Web Services.
7.2.2 Catalogue Service class
The Catalogue Service class provides the foundation for an OGC catalogue service. The Catalogue Service class directly includes only the serviceTypeID attribute, as specified in Table 5. In most cases, this attribute will not be directly visible to catalogue clients.
7.2.3 OGC_Service class
7.2.3.1 Introduction
The OGC_Service class allows clients to retrieve service metadata by providing the getCapabilities operation. This class is always realised by the Catalogue Service class, and is thus always implemented by a Catalogue Service instance. Capabilities are described further in OGC Web Service Common Implementation Specification 2.0.
NOTE This getCapabilities operation corresponds to CatalogueService.explainServer operation in OGC Catalogue version 1.1.1.
7.2.3.2 getCapabilities operation
The getCapabilities operation is specified in Table 6.
The getCapabilities operation is inherited from OWS Common 2.0 and is specialized to describe service capabilities of a catalogue.
The normal GetCapabilities operation response is a service metadata document that includes the “section” attributes listed and defined in Table 7, as selected by the “section” attribute in the operation request.
NOTE 1 The term “Capabilities XML” document was previously used for what is here called “service metadata” document. The term “service metadata” is now used because it is more descriptive and is compliant with OGC Abstract Specification Topic 12 (ISO 19119).
NOTE 2 This general model assumes that operation failure will be signalled to the client in a manner specified by each protocol binding.
7.2.3.3 getResourceById operation
The getResourceById operation is inherited from OWS Common and supports the request of one or more resources – in this case full, structured metadata records – from the catalogue. Records are discovered through the query operation whose response includes the identifier(s) of the record(s) meeting the conditions of the query. These identifiers are passed via the getResourceById to retrieve records from the catalogue in bulk.
7.2.4 Discovery class
7.2.4.1 Introduction
The Discovery class allows clients to discover resources registered in a catalogue, by providing three operations named “query”, describeRecordType, and getDomain. This class has a required association from the Catalogue Service class, and is thus always implemented by all Catalogue Service implementations. All Discovery class operations are stateless.
7.2.4.2 “query” operationThe “query” operation is described in Table 8. Figure 4 provides a UML model of the “query” operation that shows the complete Discovery class with the QueryRequest and QueryResponse classes and the classes they use. The operation request includes the attributes and association role names listed and defined in the following tables. The normal operation response includes the attributes and association role names listed and defined in Table 14.
NOTE This general model assumes that operation failure will be signalled to the client in a manner specified by each protocol binding.
7.2.4.3 describeRecordType operation
The describeRecordType operation is more completely specified in Table 15 — Definition of describeRecordType operation.
Table 15 provides a UML model of the optional describeRecordType operation that shows the complete Discovery class with the DescribeRecordTypeRequest and DescribeRecordTypeResponse classes and the class they use. The operation request includes the attributes and association role name listed and defined in Table 16. The normal operation response includes the attributes and association role name listed and defined in Table 17.
NOTE The describeRecordType operation corresponds to CG_Discovery.explainCollection operation in OGC Catalogue version 1.1.1.
7.2.4.4 getDomain operation
The optional getDomain operation is more completely specified in Table 18, which provides a UML model of the getDomain operation that shows the complete Discovery class with the GetDomainRequest and GetDomainResponse classes and the class they use. The operation request includes the attributes listed and defined in Table 19. The normal operation response includes the attributes and association role name listed and defined in Table 20.
7.2.5 Manager class
7.2.5.1 Introduction
The Manager class allows a client to insert, update and/or delete catalogue content. This class has an optional association from the CatalogueService class; it is not required that a catalogue service implement publishing functionality. Two operations are provided: “transaction” and “harvestResource”. Both are optional operations.
The “transaction” operation allows a client to formulate a transaction, and send it to the catalogue to be processed. The transaction may contain metadata records and elements of the information model that the catalogue understands. To use the transaction operation, the client must know something about the information model that the catalogue implements.
The “harvestResource” operation, on the other hand, directs the catalogue to retrieve an accessible metadata record and processes it for inclusion in the catalogue, perhaps periodically re-fetching the metadata records to refresh the information in the catalogue. The client does not need to be aware of the information model of the catalogue when using the “harvestResource” operation, since the catalogue itself is doing the work required to process the information. The client is simply pointing to where the metadata resource to be harvested is.
7.2.5.2 ”transaction” operation
The “transaction” operation is more completely specified in Table 21. Figure 7 provides a UML model of the “transaction” operation that shows the complete Manager class with the TransactionRequest and TransactionResponse classes and the classes they use.The operation request includes the attributes listed and defined in Table 22. The normal operation response includes the attributes listed and defined in Table 23.
7.2.5.3 harvestResource operation
The harvestResource operation facilitates the retrieval of remote resources from a designated location and provides for optional transactions on the local catalogue. The harvestResource operation is described in Table 24. Figure 8 provides a UML model of the “harvestResource” operation that shows the complete Manager class with the HarvestResourceRequest and HarvestResourceResponse classes.The operation request includes the attributes listed and defined in Table 25. The normal operation response includes the attributes listed and defined in Table 26.
This general model assumes that operation failure will be signalled to the client in a manner specified by each protocol binding.
8. Conformance classes and specialisation
8.1 Introduction
This subclause provides an overview of the core elements of the General Catalogue Model and how these may be used in protocol bindings and application profiles.
The General Catalogue Model consists of an abstract model and a General Interface Model. The abstract query model specifies a BNF grammar for a minimal query syntax and a set of core search attributes (names, definitions, conceptual datatypes). The General Interface Model specifies a set of service interfaces that support the discovery, access, maintenance and organization of catalogues of geospatial information and related resources; these interfaces may be bound to multiple application protocols, including the HTTP protocol that underlies the World Wide Web.
Implementations are constrained by the protocol binding parts of this standard, which depend on this general model. Each protocol binding includes a mapping from the general interfaces, operations, and parameters specified in this clause to the constructs available in a chosen protocol. Application profiles are intended to further document implementation choices.
An Application Profile is based on one of the protocol bindings in the base specification. In the case of the Catalogue Services Standard, a profile should reference the HTTP/1.1 protocol binding unless others are defined/recognized. In most, but not all, protocol bindings, there may be restrictions or refinements on implementation agreed within an implementation community. A graphic model of the relationships is shown in Figure 9. | https://docs.opengeospatial.org/is/12-168r6/12-168r6.html | 2021-06-13T02:58:32 | CC-MAIN-2021-25 | 1623487598213.5 | [] | docs.opengeospatial.org |
How demo including a few examples of post lists that you can achieve with it. I recommend to also visit this demo with your mobile phone to see how it nicely adapts.
The design is optimized for mobile devices. If you plan to include many post grids, it won’t impact the speed of your site when the option “ Load images on scroll” is enabled in your global options.
1 - Open the customizer interface
On front you can click on the Nimble Builder icon in the admin bar.
2 - Drag and drop the post grid module
3 - Customize
You can embed an unlimited number of post grids in any page of your site. The module offers 2 layouts for your posts : list or grid, and many customization options for almost any element of the grid.
Here’s an overview of the main features included :
- set the number of posts
- filter by category, several categories possible
- select a layout : grid or list
- set the number of columns
- customize the alignment and visibility of all blocks : title, post thumbnail, category, excerpt
- customize the length of the excerpt
- customize the space between columns and rows, per device type ( desktop, tablet, smartphone )
- customize the dimensions of the post thumbnail
- select which post metas to display : category, author, post date, comment number
- customize the font size per device type
- customize the font family of each text blocks : post title, excerpt, metas
When you are satisfied, publish your draft and exit the customizer. Your post grid is now live.
Browser support
The grid module supports most browsers, but it does not support IE11 and lower versions ( <1% of browser usage ). | https://docs.presscustomizr.com/article/393-how-to-add-post-grids-to-any-wordpress-page-with-nimble-builder | 2021-06-13T02:34:09 | CC-MAIN-2021-25 | 1623487598213.5 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55e7171d90336027d77078f6/images/5ca70a3d0428633d2cf47eb8/file-oUU15IRGq3.jpg',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55e7171d90336027d77078f6/images/5ca70a840428633d2cf47ebb/file-VkuG7Cm8rI.jpg',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55e7171d90336027d77078f6/images/5ca70a8d0428633d2cf47ebe/file-YSHlxk41L5.jpg',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55e7171d90336027d77078f6/images/5ca70aac0428633d2cf47ec0/file-5zjaxeZfwx.jpg',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55e7171d90336027d77078f6/images/5ca70ab22c7d3a154461cf0c/file-tr2wCT8VYd.jpg',
None], dtype=object) ] | docs.presscustomizr.com |
Echo Roles allow you to execute Guild-level commands. It also allows you to execute Player-level commands for other people, besides yourself.
There are currently only 2 defined Echo Roles:
EchoCommander - this role grants permission at the guild level. An EchoCommander will not be allowed to execute commands for anyone outside of his/her guild.
EchoAdmin - this role grants permission at the Discord server level. An EchoAdmin is allowed to execute commands for anyone that is a member of the server in which the command was executed.
All Guild-level commands:
All Player-level commands:
Echo Roles are nothing more than Discord roles with specific names. Therefore, anyone with the Discord permission to create roles can create a role that matches one of the roles above.
The roles are case-sensitive. This means that "echo" is not the same thing as "Echo."
There is also a handy EchoStation bot command which will create them for you (if EchoStation has the correct permissions in your server to do so):
Echo Roles are nothing more than Discord roles with specific names. EchoStation looks at your Discord roles when you execute certain commands to verify that you are allowed to execute those commands.
EchoStation is using these roles as a very light form of authentication (to prevent trolls from misusing certain commands), and therefore cannot know who is trustworthy or not to have such responsibility. Therefore, someone in your server with permissions to grant roles must give you one of these Echo Roles. As long as you have the role, EchoStation will honor it. | https://docs.echobase.app/echostation/echo-roles | 2021-06-13T01:23:12 | CC-MAIN-2021-25 | 1623487598213.5 | [] | docs.echobase.app |
Data Inspect has three different modes when searching topics:
Sample: this will scan for records across an even distribution of partitions. You can use the result metadata from the search results to see which partitions have been scanned.
Partition: this allows for records to be scanned across a specific partition and optional offset.
Key: this will scan for absolute matches of the key text provided. The search will only scan the partition the key belongs to.
You can specify the starting point for where data inspect will scan for records on a topic. By default kPow will search for recent messages on a topic. From the "Window" dropdown you can specify a custom timestamp or datetime for your starting point.
By default, the
TOPIC_INSPECT access policy is disabled. To view the contents of messages in the data inspect UI, see the configuration section of this document.
See the Serdes section for more information about using Data Inspect serdes.
If you have selected a key or value serdes for the data inspect query, you can also apply an optional filter to your query. See the kJQ Filters section for documentation on the query language.
If you check "Include Message Headers?" in the data inspect form, data inspect will also return the contents of each records header, deserialized as a JSON map. You can also filter headers in the same way as any key/value kJQ filters.
Data inspect queries have a start and end cursor position. The start is defined by the window of the query, and the end position is the time in which the query was first executed. Once a query has been executed, the query metadata has the notion of "progress": how many records you have scanned, and how many records remain for the query. The green progress bar above the toolbar represents the total progress of the query. You can always click "Continue consuming" to keep progressing your cursor.
If you have any Data policies that apply to the query that was executed, the toolbar will show you what policies matches your queries, and the redactions applied.
Clicking the "Show metadata" button in the results toolbar will expand the Result Metadata Table, which is a table of your queries cursors across all partitions.
Partition: the partition the row relates to
Partition start: the earliest offset of this partition
Partition end: the most recent offset of this partition
Query start: the offset that data inspect started scanning from for this partition. Calculated from the query window.
Query end: the offset that data inspect will scan up to. Calculated from the query window.
Scanned Records: the number of records in this partition that have been scanned
Filtered Records: the number of records that have positively matched the key or value filters specified in the query
Remaining Records: the number of records that remain in the query window.
Consumed: the percentage of overall records consumed for this partition.
SAMPLER_CONSUMER_THREADS - kPow creates a connection pool of consumers when querying with data inspect. This environment variables specifies the number of consumer threads globally available in the pol. Default: 6.
SAMPLER_TIMEOUT_MS - a query will finish querying once 100 positively matched records have been found or after a timeout (default 7s). You can always progress the query and continue scanning by clicking "Continue Consuming".
Increase the sampler timeout to run longer queries and the consumer threads to query more partitions in parallel.
The default configuration should be suitable for most installations.
See the Serdes section for details on how to configure custom serdes, integrate schema registry and more for data inspect.
To enable inspection of key/value/header contents of records, set the
TOPIC_INSPECT environment variable to
true. If you are using role-based access control, view our guide here.
To configure data policies (configurable redaction of Data Inspection results) view our Data Policies guide. | https://docs.kpow.io/features/data-inspect | 2021-06-13T01:25:17 | CC-MAIN-2021-25 | 1623487598213.5 | [] | docs.kpow.io |
Azure Key Vault basic concepts
Azure Key Vault is a cloud service for securely storing and accessing secrets. A secret is anything that you want to tightly control access to, such as API keys, passwords, certificates, or cryptographic keys. Key Vault service supports two types of containers: vaults and managed hardware security module(HSM) pools. Vaults support storing software and HSM-backed keys, secrets, and certificates. Managed HSM pools only support HSM-backed keys. See Azure Key Vault REST API overview for complete details.
Here are other important terms:
Tenant: A tenant is the organization that owns and manages a specific instance of Microsoft cloud services. It's most often used to refer to the set of Azure and Microsoft 365 services for an organization.
Vault owner: A vault owner can create a key vault and gain full access and control over it. The vault owner can also set up auditing to log who accesses secrets and keys. Administrators can control the key lifecycle. They can roll to a new version of the key, back it up, and do related tasks.
Vault consumer: A vault consumer can perform actions on the assets inside the key vault when the vault owner grants the consumer access. The available actions depend on the permissions granted.
Managed HSM Administrators: Users who are assigned the Administrator role have complete control over a Managed HSM pool. They can create more role assignments to delegate controlled access to other users.
Managed HSM Crypto Officer/User: Built-in roles that are usually assigned to users or service principals that will perform cryptographic operations using keys in Managed HSM. Crypto User can create new keys, but cannot delete keys.
Managed HSM Crypto Service Encryption User: Built-in role that is usually assigned to a service accounts managed service identity (e.g. Storage account) for encryption of data at rest with customer managed key.
Resource: A resource is a manageable item that's available through Azure. Common examples are virtual machine, storage account, web app, database, and virtual network. There are many more..
Security principal: An Azure security principal is a security identity that user-created apps, services, and automation tools use to access specific Azure resources. Think of it as a "user identity" (username and password or certificate) with a specific role, and tightly controlled permissions. A security principal should only need to do specific things, unlike a general user identity. It improves security if you grant it only the minimum permission level that it needs to perform its management tasks. A security principal used with an application or service is specifically called a service principal.
Azure Active Directory (Azure AD): Azure AD is the Active Directory service for a tenant. Each directory has one or more domains. A directory can have many subscriptions associated with it, but only one tenant.
Azure tenant ID: A tenant ID is a unique way to identify an Azure AD instance within an Azure subscription.
Managed identities: Azure Key Vault provides a way to securely store credentials and other keys and secrets, but your code needs to authenticate to Key Vault to retrieve them. Using a managed identity makes solving this problem simpler by giving Azure services an automatically managed identity in Azure AD. You can use this identity to authenticate to Key Vault or any service that supports Azure AD authentication, without having any credentials in your code. For more information, see the following image and the overview of managed identities for Azure resources.
Authentication
To do any operations with Key Vault, you first need to authenticate to it. There are three ways to authenticate to Key Vault:
- Managed identities for Azure resources: When you deploy an app on a virtual machine in Azure, you can assign an identity to your virtual machine that has access to Key Vault. You can also assign identities to other Azure resources. The benefit of this approach is that the app or service isn't managing the rotation of the first secret. Azure automatically rotates the identity. We recommend this approach as a best practice.
- Service principal and certificate: You can use a service principal and an associated certificate that has access to Key Vault. We don't recommend this approach because the application owner or developer must rotate the certificate.
- Service principal and secret: Although you can use a service principal and a secret to authenticate to Key Vault, we don't recommend it. It's hard to automatically rotate the bootstrap secret that's used to authenticate to Key Vault.
Encryption of data in transit
Azure Key Vault enforces Transport Layer Security (TLS) protocol to protect data when it’s traveling between Azure Key vault and clients. Clients negotiate a TLS connection with Azure Key Vault. TLS provides strong authentication, message privacy, and integrity (enabling detection of message tampering, interception, and forgery), interoperability, algorithm flexibility, and ease of deployment and use.
Perfect Forward Secrecy (PFS) protects connections between customers’ client systems and Microsoft cloud services by unique keys. Connections also use RSA-based 2,048-bit encryption key lengths. This combination makes it difficult for someone to intercept and access data that is in transit.
Key Vault roles
Use the following table to better understand how Key Vault can help to meet the needs of developers and security administrators.
Anybody with an Azure subscription can create and use key vaults. Although Key Vault benefits developers and security administrators, it can be implemented and managed by an organization's administrator who manages other Azure services. For example, this administrator can sign in with an Azure subscription, create a vault for the organization in which to store keys, and then be responsible for operational tasks like these:
- Create or import a key or secret
- Revoke or delete a key or secret
- Authorize users or applications to access the key vault, so they can then manage or use its keys and secrets
- Configure key usage (for example, sign or encrypt)
- Monitor key usage
This administrator then gives developers URIs to call from their applications. This administrator also gives key usage logging information to the security administrator.
Developers can also manage the keys directly, by using APIs. For more information, see the Key Vault developer's guide.
Next steps
- Learn how to secure your managed HSM pools
Azure Key Vault is available in most regions. For more information, see the Key Vault pricing page. | https://docs.microsoft.com/en-us/azure/key-vault/general/basic-concepts | 2021-06-13T03:37:22 | CC-MAIN-2021-25 | 1623487598213.5 | [array(['../media/key-vault-whatis/azurekeyvault_overview.png',
'Overview of how Azure Key Vault works'], dtype=object)] | docs.microsoft.com |
UIElement.
Pointer Pressed Event Property
Definition
Important
Some information relates to pre-released product that may be substantially modified before it’s released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
Gets the identifier for the PointerPressed routed event.
public: static property RoutedEvent ^ PointerPressedEvent { RoutedEvent ^ get(); };
static RoutedEvent PointerPressedEvent();
public static RoutedEvent PointerPressedEvent { get; }
Public Shared ReadOnly Property PointerPressedEvent As RoutedEvent
Property Value
The identifier for the PointerPressed routed event. | https://docs.microsoft.com/en-us/windows/winui/api/microsoft.ui.xaml.uielement.pointerpressedevent?view=winui-3.0 | 2021-06-13T03:40:23 | CC-MAIN-2021-25 | 1623487598213.5 | [] | docs.microsoft.com |
Part 1: Getting started
Once installation is complete, you are set!
- By default the "Physical Atmosphere" tab is now in the sidebar of Blender 3D viewport ("N" key) and "World Properties" tab in Properties panel. Click on the "Physical Atmosphere" tab and you'll see this:
- If you want, you can disable the sidebar panel, or rename it to something shorter (or longer?)
- Now enable it by ticking the uppermost checkbox "Atmosphere". To see the sky, you have to be in "Rendered" Viewport Shading mode (click on the 4th sphere in the list up in the right corner of 3D viewport)
Now before we move on, I'll explain what the addon just did by pressing the checkbox.
- It replaced the default world shader node with "StarlightAtmosphere" node. When you grey out the "Atmosphere" checkbox, it will again set it back to default world.
- It added a Sun lamp. The Sun lamp is used as the main light source to cast shadows from the Sun. When you grey out the "Atmosphere" checkbox, it will remove the Sun.
UI
You are now greeted by this list of variables to play with. Take a deep breath and have a look at those. It has fancy names like Kelvins, radiance, and absorption, but I use those to somehow standartize the variables. Other softwares and scientific tools use similar naming. Also, few of these will be soon replaced with more artist-friendly variables.
We have divided it into 5 sections - "Sun", "Atmosphere", "Stars", "Ground" and "Artistic Controls". You can hide each of them if you feel the view is getting cluttered. Also notice the "Reset" button after each section, it will reset the values to defaults.
Now let's quickly go through each of those sections.
Part 2: Sun
Sun is controlled by these 7 variables:
- Sun Azimuth
- Sun Elevation
- Sun Disk checkbox
- Sun Lamp checkbox
- Sun Angular Diameter
- Sun Temperature K
- Sun Radiance Intensity
Sun Position
Sun position in the sky is controlled by the first two variables - Azimuth and Elevation. Azimuth moves the Sun horizontally, elevation - vertically. The values are angle in degrees. This is one of many ways you can control Sun position. You can move the Sun also by rotating the Sun object itself or use a SunPosition addon that comes with Blender. These two values are added for convenience if your scene is huge and you have lost the Sun object.
Sun Visibility
Now the next two parameters might seem confusing for some.
- Sun Disk checkbox, toggles the visibility of the sun disk in the sky.
- Sun Lamp checkbox, toggles the Sun Lamp intensity.
I'll explain why these parameters can be useful. There are few specific cases where you don't want to see the Sun disk visible in the sky or don't want your scene illuminated by a parametric lamp. For example if you use Cycles, you can avoid using a parametric Sun Lamp and use the addon as HDRI.
- By disabling both, you get illumination by the sky only. No direct light.
- Sun disk enabled and Sun Lamp disabled, you essentially get a HDRI. Switch to Cycles and you will see how the Sun Disk is a light source - you get shadows.
- Sun disk disabled and Sun Lamp enabled, you have shadows and direct light, but the sun disk will not be visible in the sky. (can't think of a useful case for this setting)
- Both enabled - you have both, direct light and sun disk in the sky.
If you compare Cycles renders with Sun Lamp enabled and Sun Lamp disabled, there might not be a visual difference. In Eevee you will see huge difference in lighting and with no parametric light source there will be no shadows. This is because Cycles will sample every point in the sky as a "light source" and you will see shadows, while Eevee only approximates the lighting and uses the sky as a "irradiance map".
Sun Disk
- Sun disk size in the sky is controlled by "Angular Diameter" parameter. It also changes the Sun Lamp Angle value for soft shadows. Larger the value, bigger the Sun disk, brighter it gets.
NOTE: Right now the Sun disk is a 2D circle with parametric apparent diameter, intensity is multiplied with a limb darkening factor. In future releases Sun will be calculated differently - as a real 3D sphere with actual physical diameter placed reeealy far away, but not at infinity, which will actually allow to travel to the Sun
- "Temperature K" changes the color of the Sun disk. Bigger the value, bluer the Sun. In theory small Stars are hotter, thus bluer, and big stars colder - redder. I wanted to include the calculation of that, but it would remove artistic control, so left it as a manual variable.
- "Intensity" changes the Sun radiance intensity in Watt·sr/m2. Default value is 20.0 MegaWatt·sr/m2 (calculated by dividing solar constant with sun disk diameter in steridians)
NOTE: In future I might add an option to use Lux values | https://starlight-manual.readthedocs.io/en/latest/interface/ | 2021-06-13T03:11:53 | CC-MAIN-2021-25 | 1623487598213.5 | [array(['../img/UI/Kelvins.jpeg', 'GUI_sun'], dtype=object)] | starlight-manual.readthedocs.io |
Pass your actual test with our Avaya 72400X training material at first attempt
Last Updated: Jun 11, 2021
No. of Questions: 60 Questions & Answers with Testing Engine
Latest Version: V12.35
Download Limit: Unlimited
We provide the most up to date and accurate 72400X questions and answers which are the best for clearing the actual test. Instantly download of the Avaya Avaya Equinox® Solution with Avaya Aura® Collaboration Applications Support Exam exam practice torrent is available for all of you. 100% pass is our guarantee of 72400 72400X actual test that can prove a great deal about your professional ability, we are here to introduce our ACSS 72400X practice torrent to you. With our heartfelt sincerity, we want to help you get acquainted with our 72400X exam vce. The introduction is mentioned as follows.
Our 72400X latest vce team with information and questions based on real knowledge the exam required for candidates. All these useful materials ascribe to the hardworking of our professional experts. They not only are professional experts dedicated to this 72400X training material painstakingly but pooling ideals from various channels like examiners, former candidates and buyers. To make the 72400X actual questions more perfect, they wrote our 72400X prep training with perfect arrangement and scientific compilation of messages, so you do not need to plunge into other numerous materials to find the perfect one anymore. They will offer you the best help with our 72400X questions & answers.
We offer three versions of 72400X practice pdf for you and help you give scope to your initiative according to your taste and preference. Tens of thousands of candidates have fostered learning abilities by using our 72400X updated torrent. Let us get to know the three versions of we have developed three versions of 72400X training vce for your reference.
The PDF version has a large number of actual questions, and allows you to take notes when met with difficulties to notice the misunderstanding in the process of reviewing. The APP version of ACSS 72400 72400X free pdf maybe too large to afford by themselves, which is superfluous worry in reality. Our 72400X exam training is of high quality and accuracy accompanied with desirable prices which is exactly affordable to everyone. And we offer some discounts at intervals, is not that amazing?
As online products, our 72400X : Avaya Equinox® Solution with Avaya Aura® Collaboration Applications Support Exam useful training can be obtained immediately after you placing your order. It is convenient to get. Although you cannot touch them, but we offer free demos before you really choose our three versions of 72400X practice materials. Transcending over distance limitations, you do not need to wait for delivery or tiresome to buy in physical store but can begin your journey as soon as possible. We promise that once you have experience of our 72400X practice materials once, you will be thankful all lifetime long for the benefits it may bring in the future.so our Avaya 72400X practice guide are not harmful to the detriment of your personal interests but full of benefits for you.
Jonathan
Martin
Owen
Sebastian
Wayne
Atalanta
Exam4Docs is the world's largest certification preparation company with 99.6% Pass Rate History from 69850+ Satisfied Customers in 148 Countries.
Over 69850+ Satisfied Customers | https://www.exam4docs.com/avaya-equinox-solution-with-avaya-aura-collaboration-applications-support-exam-accurate-pdf-12291.html | 2021-06-13T01:39:57 | CC-MAIN-2021-25 | 1623487598213.5 | [] | www.exam4docs.com |
Requests format
Requests are sent by POST method using HTTP/1.1 protocol. The method is also mentioned on each request description page.
Request parameters are placed in the sent structure. Some parameters can be sent in URL (API key, format).
The input data format must be indicated in the Content-Type HTTP header.
Possible header values:
application/json — JSON format
- application/xml - XML format
Symbols must have UTF-8 coding.
POST<method name>
Authorization
API token must be sent in request parameters for authorization. Example:
{ "token": "bfc505684d774e52b188fa1f003cd5ed", "db_id": 1, "resource_id": 1, "matching": "email", "email": "[email protected]", "data": { "_status": 0, "_fname": "Jim", "_lname": "Jones", "email": "[email protected]", "phones": ["+79000000000"] } }
API token can be created in the user panel, in section "Settings" - "Tokens". Master user rights are necessary to create a token.
API token is automatically generated after saving. You can also select a token name and configure access rights (in roles) and groups of objects available for this token.
Response format
The response format can be selected in the request header or parameters.
Response example (Successful operation):
{ "error": 0, "error_text": "Successful operation", "profile_id": "5f4fa1a5ce9448665fef548e" }
The following parameters are given in responses:
- error - error code
- error_text - error description
- profile_id - profile identifier (for successful operation)
Response codes
Request deduplication
If the connection fails at the moment of receiving the data, a second request may be sent. The platform will not accept a repeated request if it modifies the data in order to avoid duplicate events. Read more about repeated requests here. | https://docs.altcraft.com/display/UAD/API+interaction | 2021-06-13T03:31:50 | CC-MAIN-2021-25 | 1623487598213.5 | [] | docs.altcraft.com |
Product and service integrations with AWS CodeCommit
By default, CodeCommit is integrated with a number of AWS services. You can also use CodeCommit with products and services outside of AWS. The following information can help you configure CodeCommit to integrate with the products and services you use.
You can automatically build and deploy commits to a CodeCommit repository by integrating with CodePipeline. To learn more, follow the steps in the AWS for DevOps Getting Started Guide.
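If you script this setup rather than using the console, the CodePipeline API accepts a pipeline definition with a CodeCommit source action. The following is a minimal boto3 sketch of that definition, not the guide's exact walkthrough; the repository, branch, artifact bucket, role ARN, and CodeBuild project names are placeholders you would replace with your own.

```python
import boto3

codepipeline = boto3.client("codepipeline")

# Placeholder names -- substitute your own repository, bucket, role, and CodeBuild project.
pipeline_definition = {
    "name": "my-codecommit-pipeline",
    "roleArn": "arn:aws:iam::111122223333:role/CodePipelineServiceRole",
    "artifactStore": {"type": "S3", "location": "my-pipeline-artifact-bucket"},
    "stages": [
        {
            "name": "Source",
            "actions": [
                {
                    "name": "CodeCommitSource",
                    "actionTypeId": {
                        "category": "Source",
                        "owner": "AWS",
                        "provider": "CodeCommit",
                        "version": "1",
                    },
                    "configuration": {
                        "RepositoryName": "MyDemoRepo",
                        "BranchName": "main",
                    },
                    "outputArtifacts": [{"name": "SourceOutput"}],
                    "runOrder": 1,
                }
            ],
        },
        {
            "name": "Build",
            "actions": [
                {
                    "name": "CodeBuild",
                    "actionTypeId": {
                        "category": "Build",
                        "owner": "AWS",
                        "provider": "CodeBuild",
                        "version": "1",
                    },
                    "configuration": {"ProjectName": "MyDemoBuildProject"},
                    "inputArtifacts": [{"name": "SourceOutput"}],
                    "outputArtifacts": [{"name": "BuildOutput"}],
                    "runOrder": 1,
                }
            ],
        },
    ],
}

response = codepipeline.create_pipeline(pipeline=pipeline_definition)
print(response["pipeline"]["name"])
```

A deploy stage (for example, CodeDeploy or Elastic Beanstalk) can be appended to the same `stages` list in the same way.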
Integration with other AWS services
CodeCommit is integrated with other AWS services, including AWS Cloud9, AWS CloudFormation, AWS CloudTrail, Amazon CloudWatch Events, AWS CodeBuild, AWS CodePipeline, AWS CodeStar, AWS Key Management Service, AWS Lambda, and Amazon Simple Notification Service.
Integration examples from the community
The following sections provide links to blog posts, articles, and community-provided examples.
These links are provided for informational purposes only, and should not be considered either a comprehensive list or an endorsement of the content of the examples. AWS is not responsible for the content or accuracy of external content.
Blog posts
Integrating SonarQube as a Pull Request Approver on AWS CodeCommit
Learn how to create a CodeCommit repository that requires a successful SonarQube quality analysis before pull requests can be merged.
Published December 12, 2019
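The approach in the post above builds on CodeCommit approval rules. As a rough sketch, assuming the quality-gate job approves pull requests under a dedicated IAM role, you could create and attach an approval rule template with boto3 as follows; the role ARN, template name, and repository name are placeholders.

```python
import json
import boto3

codecommit = boto3.client("codecommit")

# The approval pool member below is an assumption: the role your quality-gate
# job (for example, the SonarQube scanner) assumes when it approves pull requests.
rule_content = {
    "Version": "2018-11-08",
    "DestinationReferences": ["refs/heads/main"],
    "Statements": [
        {
            "Type": "Approvers",
            "NumberOfApprovalsNeeded": 1,
            "ApprovalPoolMembers": [
                "arn:aws:sts::111122223333:assumed-role/QualityGateApproverRole/*"
            ],
        }
    ],
}

codecommit.create_approval_rule_template(
    approvalRuleTemplateName="require-quality-gate-approval",
    approvalRuleTemplateDescription="Require one approval from the automated quality gate",
    approvalRuleTemplateContent=json.dumps(rule_content),
)

# Apply the template to a repository so new pull requests pick up the rule.
codecommit.associate_approval_rule_template_with_repository(
    approvalRuleTemplateName="require-quality-gate-approval",
    repositoryName="MyDemoRepo",
)
```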
Migration to AWS CodeCommit, AWS CodePipeline, and AWS CodeBuild From GitLab
Learn how to migrate multiple repositories to AWS CodeCommit from GitLab and set up a CI/CD pipeline using AWS CodePipeline and AWS CodeBuild.
Published November 22, 2019
Implementing GitFlow Using AWS CodePipeline, AWS CodeCommit, AWS CodeBuild, and AWS CodeDeploy
Learn how to implement GitFlow using AWS CodePipeline, AWS CodeCommit, AWS CodeBuild, and AWS CodeDeploy.
Published February 22, 2019
Using Git with AWS CodeCommit Across Multiple AWS Accounts
Learn how to manage your Git configuration across multiple Amazon Web Services accounts.
Published February 12, 2019
Validating AWS CodeCommit Pull Requests with AWS CodeBuild and AWS Lambda
Learn how to validate pull requests with AWS CodeCommit, AWS CodeBuild, and AWS Lambda. By running tests against the proposed changes prior to merging them into the default branch, you can help ensure a high level of quality in pull requests, catch any potential issues, and boost the confidence of the developer in relation to their changes.
Published February 11, 2019
Using Federated Identities with AWS CodeCommit
Learn how to access repositories in AWS CodeCommit using the identities used in your business.
Published October 5, 2018
Refining Access to Branches in AWS CodeCommit
Learn how to restrict commits to repository branches by creating and applying an IAM policy that uses a context key.
Published May 16, 2018
Replicate AWS CodeCommit Repositories Between Regions Using AWS Fargate
Learn how to set up continuous replication of a CodeCommit repository from one AWS region to another using a serverless architecture.
Published April 11, 2018
Distributing Your AWS OpsWorks for Chef Automate Infrastructure
Learn how to use CodePipeline, CodeCommit, CodeBuild, and AWS Lambda to ensure that cookbooks and other configurations are consistently deployed across two or more Chef Servers residing in one or more AWS Regions.
Published March 9, 2018
Peanut Butter and Chocolate: Azure Functions CI/CD Pipeline with AWS CodeCommit
Learn how to create a PowerShell-based Azure Functions CI/CD pipeline where the code is stored in a CodeCommit repository.
Published February 19, 2018
Continuous Deployment to Kubernetes Using AWS CodePipeline, AWS CodeCommit, AWS CodeBuild, Amazon ECR, and AWS Lambda
Learn how to use Kubernetes and AWS together to create a fully managed, continuous deployment pipeline for container based applications.
Published January 11, 2018
Use AWS CodeCommit Pull Requests to Request Code Reviews and Discuss Code
Learn how to use pull requests to review, comment upon, and interactively iterate on code changes in a CodeCommit repository.
Published November 20, 2017
Build Serverless AWS CodeCommit Workflows Using Amazon CloudWatch Events and JGit
Learn how to create CloudWatch Events rules that process changes in a repository using CodeCommit repository events and target actions in other AWS services. Examples include AWS Lambda functions that enforce Git commit message policies on commits, replicate a CodeCommit repository, and backing up a CodeCommit repository to Amazon S3.
Published August 3, 2017
Replicating and Automating Sync-Ups for a Repository with AWS CodeCommit
Learn how to back up or replicate a CodeCommit repository to another AWS region, and how to back up repositories hosted on other services to CodeCommit.
Published March 17, 2017
Migrating to AWS CodeCommit
Learn how to push code to two repositories as part of migrating from using another Git repository to CodeCommit when using SourceTree.
Published September 6, 2016
Set Up Continuous Testing with Appium, AWS CodeCommit, Jenkins, and AWS Device Farm
Learn how to create a continuous testing process for mobile devices using Appium, CodeCommit, Jenkins, and Device Farm.
Published February 2, 2016
Using AWS CodeCommit with Git Repositories in Multiple Amazon Web Services accounts
Learn how to clone your CodeCommit repository and, in one command, configure the credential helper to use a specific IAM role for connections to that repository.
Published November 2015
Integrating AWS OpsWorks and AWS CodeCommit
Learn how AWS OpsWorks can automatically fetch Apps and Chef cookbooks from CodeCommit.
Published August 25, 2015
Using AWS CodeCommit and GitHub Credential Helpers
Learn how to configure your gitconfig file to work with both CodeCommit and GitHub credential helpers.
Published September 2015
Using AWS CodeCommit from Eclipse
Learn how to use the EGit tools in Eclipse to work with CodeCommit.
Published August 2015
AWS CodeCommit with Amazon EC2 Role Credentials
Learn how to use an instance profile for Amazon EC2 when configuring automated agent access to a CodeCommit repository.
Published July 2015
Integrating AWS CodeCommit with Jenkins
Learn how to use CodeCommit and Jenkins to support two simple continuous integration (CI) scenarios.
Published July 2015
Integrating AWS CodeCommit with Review Board
Learn how to integrate CodeCommit into a development workflow using the Review Board code review system.
Published July 2015
Code samples
The following are code samples that might be of interest to CodeCommit users.
Mac OS X Script to Periodically Delete Cached Credentials in the OS X Certificate Store
If you use the credential helper for CodeCommit on Mac OS X, you are likely familiar with the problem of cached credentials. This script demonstrates one solution.
Author: Nico Coetzee
Published February 2016 | https://docs.aws.amazon.com/codecommit/latest/userguide/integrations.html | 2021-06-13T03:27:45 | CC-MAIN-2021-25 | 1623487598213.5 | [] | docs.aws.amazon.com |
About Cookbook Versioning
A cookbook version represents a set of functionality that is different from the cookbook on which it is based. A version may exist for many reasons, such as ensuring the correct use of a third-party component, updating a bug fix, or adding an improvement. A cookbook version is defined using syntax and operators, may be associated with environments, cookbook metadata, and/or run-lists, and may be frozen (to prevent unwanted updates from being made).
A cookbook version is maintained just like a cookbook, with regard to source control, uploading it to the Chef Infra Server, and how Chef Infra Client applies that cookbook when configuring nodes.
Syntax
A cookbook version always takes the form x.y.z, where x, y, and z are decimal numbers that are used to represent major (x), minor (y), and patch (z) versions. A two-part version (x.y) is also allowed. Alphanumeric version numbers (1.2.a3) and version numbers with more than three parts (1.2.3.4) are not allowed.
Constraints
A version constraint is a string that combines the cookbook version syntax with an operator, in the following format:
operator cookbook_version_syntax
Note
A version constraint must specify the version with at least two parts, for example 1.0 or 1.0.1; do not use 1.
The following operators may be used:
- = (equal to)
- > (greater than)
- < (less than)
- >= (greater than or equal to)
- <= (less than or equal to)
- ~> (approximately greater than; the pessimistic constraint)
For example, a version constraint for “equals version 1.0.7” is expressed like this:
= 1.0.7
A version constraint for “greater than version 1.0.2” is expressed like this:
> 1.0.2
An optimistic version constraint is one that looks for versions greater than or equal to the specified version. For example:
>= 2.6.5
will match cookbooks greater than or equal to 2.6.5, such as 2.6.5, 2.6.7 or 3.1.1.
A pessimistic version constraint is one that will find the upper limit version number within the range specified by the minor version number or patch version number. For example, a pessimistic version constraint for minor version numbers:
~> 2.6
will match cookbooks that are greater than or equal to version 2.6, but less than version 3.0.
Or, a pessimistic version constraint for patch version numbers:
~> 2.6.5
will match cookbooks that are greater than or equal to version 2.6.5, but less than version 2.7.0.
Or, a pessimistic version constraint that matches cookbooks less than a version number:
< 2.3.4
or will match cookbooks less than or equal to a specific version number:
<= 2.6.5
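For readers who want to check a constraint by hand, the following sketch mimics the matching rules described above. It is an illustration in Python, not Chef's actual implementation.

def parse(version):
    return tuple(int(part) for part in version.split("."))

def satisfies(version, constraint):
    """Evaluate a Chef-style version constraint such as '>= 2.6.5' or '~> 2.6'."""
    operator, wanted = constraint.split()
    v, w = parse(version), parse(wanted)
    if operator == "=":
        return v == w
    if operator == ">":
        return v > w
    if operator == "<":
        return v < w
    if operator == ">=":
        return v >= w
    if operator == "<=":
        return v <= w
    if operator == "~>":
        # Pessimistic: at least `wanted`, but below the next increment of the
        # second-to-last component (~> 2.6 -> < 3.0, ~> 2.6.5 -> < 2.7.0).
        upper = w[:-2] + (w[-2] + 1,) if len(w) > 1 else (w[0] + 1,)
        return w <= v < upper
    raise ValueError("unknown operator: " + operator)

# Examples taken from the constraints shown above:
assert satisfies("2.6.7", ">= 2.6.5")
assert satisfies("2.6.9", "~> 2.6.5") and not satisfies("2.7.0", "~> 2.6.5")
assert satisfies("2.9.1", "~> 2.6") and not satisfies("3.0.0", "~> 2.6")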
Metadata
Every cookbook requires a small amount of metadata. A file named metadata.rb is located at the top of every cookbook directory structure. The contents of the metadata.rb file provides information that helps Chef Infra Client and Server correctly deploy cookbooks to each node.
Versions and version constraints can be specified in a cookbook's metadata.rb file by using the following functions. Each function accepts a name and an optional version constraint; if a version constraint is not provided, >= 0.0.0 is used as the default.
Environments
An environment can use version constraints to specify a list of allowed cookbook versions by specifying the cookbook’s name, along with the version constraint. For example:
cookbook 'apache2', '~> 1.2.3'
Or:
cookbook 'runit', '= 4.2.0'
If a cookbook is not explicitly given a version constraint the environment will assume the cookbook has no version constraint and will use any version of that cookbook with any node in the environment.
Freeze Versions
A cookbook version can be frozen, which will prevent updates from being made to that version of a cookbook. (A user can always upload a new version of a cookbook.) Using cookbook versions that are frozen within environments is a reliable way to keep a production environment safe from accidental updates while testing changes that are made to a development infrastructure.
For example, to freeze a cookbook version using knife, enter:
knife cookbook upload redis --freeze
To return:
Uploading redis... Upload completed
Once a cookbook version is frozen, an update can only be made by using the --force option. For example:
knife cookbook upload redis --force
Without the --force option specified, an error will be returned similar to:
Version 0.0.0 of cookbook redis is frozen. Use --force to override
Version Source Control
There are two strategies to consider when using version control as part of the cookbook management process:
- Use maximum version control when it is important to keep every bit of data within version control
- Use branch tracking when cookbooks are being managed in separate environments using git branches and the versioning policy information is already stored in a cookbook’s metadata.
Branch Tracking
Using a branch tracking strategy requires that a branch for each environment exists in the source control and that each cookbook’s versioning policy is tracked at the branch level. This approach is relatively simple and lightweight: for development environments that track the latest cookbooks, just bump the version before a cookbook is uploaded for testing. For any cookbooks that require higher levels of version control, knife allows cookbooks to be uploaded to specific environments and for cookbooks to be frozen (which prevents others from being able to make changes to that cookbook).
The typical workflow with a branch tracking strategy is therefore to bump the cookbook version on the environment's branch, upload the cookbook for testing, and freeze it once it is promoted to production.
Maximum Versions
Using a maximum version control strategy is required when everything needs to be tracked in source control. This approach is very similar to a branch tracking strategy while the cookbook is in development and being tested, but is more complicated and time-consuming (and requires file-level editing for environment data) in order to get the cookbook deployed to a production environment.
The typical workflow with a maximum version control strategy starts by bumping the cookbook version and uploading the frozen cookbook to the Chef Infra Server.
Then modify the environment so that it prefers the newly uploaded version:
(vim|emacs|mate|ed) YOUR_REPO/environments/production.rb
Upload the updated environment:
knife environment from file production.rb
And then deploy the new cookbook version.
Authenticated Extractors
What is an Authenticated Extractor?
An Authenticated Extractor is an extractor whose data sits behind a login, meaning you must be logged in as a user on the target website in order to extract the data you need.
Extractor Studio allows any robot to behave as an Authenticated Extractor, but requires some additional configuration in order to do so.
How do Authenticated Extractors work?
Before building an Authenticated Extractor it’s important to understand how they will work at runtime.
Browser Session
For every Extractor that runs on import.io (Authenticated or not) browser sessions are used in order to navigate to the target website and perform the needed actions.
For regular Extractors (not Authenticated) the browser session and state are not important. Think of it as opening an incognito tab for each input you wish to extract. Browser state is not persisted between extractions, and the name of the game is to extract many inputs in parallel without a care for session cookies and the like.
For Authenticated Extractors, the browser state and session are important: we want to make sure the "user" remains logged in, otherwise our data may be invalid or not available. For this reason Authenticated Extractors must first log in (once) to the website before attempting to extract any inputs, and must be aware of the session becoming invalidated so that they can attempt to log in again.
Auth Interactions
Obviously for Authenticated Extractors we must first log in before attempting to extract the data. "Auth Interactions" serve this purpose.
"Auth Interactions" map to
authInteractions on the extractor runtime configuration and consist of an interaction sequence to be performed in order to log the user in.
authInteractions execute once before any extraction inputs are attempted, and will only execute again if the
checkAuthInteractions throw an error.
"Auth Interactions" are defined in the
authentication section of the robot template, more on that in the "Configuration" section below.
Check Authentication
Throughout your data extraction the target website may log you out or the browser session may be invalidated. This of course will cause the data you’re seeking to either not be present or incorrect. "Check Authentications" serve as a means to validate your session prior to attempting to extract data.
You can "check" that your auth session is still valid by configuring "Check Authentication" actions. These map to "checkAuthInteractions" on the extractor runtime configuration and are configured in the
checkAuthentication section of the robot template. More on this in the "Configuration" section below.
If present, "Check Authentication" runs before each input, if this function throws an error it will prompt the browser to re-execute the "Auth Interactions" before performing the data extraction.
Configuration
Robot
Any robot can support Authentication. To allow a robot to support Authentication simply:
1. Add an authentication entry point to your robot.yaml.
- Behaves the same as entryPoint
- Can have a dynamic entry point by resolving parameters. For example: shared/auth/${domain}
2. (Optional) Add a checkAuthentication entry point.
- Serves to validate login
- Runs before each extraction
- Supports dynamic entry points
Example:
Below is an example robot.yaml file that supports authentication:
proxy:
  zone: USA
  type: DATA_CENTER
honorRobots: false
schema: product/details
parameters:
  - store
  - country
  - domain
entryPoint: product/search
pathTemplate: product/${store[0:1]}/${store}/${country}/details
authentication: shared/auth/action
checkAuthentication: shared/checkAuth/action
Below is an example authentication entry point
---
async function implementation (
  inputs, parameters, context, dependencies
) {
  const { _credentials } = inputs;
  const credentials = _credentials || {};
  await dependencies.gotoLogin({});
  await dependencies.preLogin(credentials);
  await dependencies.doLogin(credentials);
  await dependencies.postLogin(credentials);
  console.log('Logged in!');
}

module.exports = {
  parameters: [
    { name: 'domain', description: '', optional: false }
  ],
  inputs: [
    { name: '_credentials', description: '', type: 'string', optional: false }
  ],
  dependencies: {
    gotoLogin: 'action:shared/auth/gotoLogin',
    preLogin: 'action:shared/auth/preLogin',
    postLogin: 'action:shared/auth/postLogin',
    doLogin: 'action:shared/auth/doLogin'
  },
  path: './domains/${domain[0:2]}/${domain}/authenticate',
  implementation
};
---
Below is an example checkAuthentication entryPoint:
---
async function implementation (
  inputs, parameters, context, dependencies
) {
  const { url } = inputs;
  const { loggedInSelector } = parameters;
  await dependencies.goto({ url });
  await context.waitForSelector(loggedInSelector);
  console.log('Logged in!');
}

module.exports = {
  parameters: [
    { name: 'domain', description: '', optional: false },
    { name: 'loggedInSelector' }
  ],
  inputs: [
    { name: 'url', description: '', type: 'string', optional: false }
  ],
  dependencies: {
    goto: 'action:shared/goto'
  },
  path: '../auth/domains/${domain[0:2]}/${domain}/checkAuth',
  implementation
};
---
Extractor (extractor.yaml)
Just because a robot supports Authentication does not mean that every Extractor that implements that robot needs it. For this reason Authentication is opt-in.
To turn an existing extractor into an authenticated one simply:
1. Add authenticated: true to the extractor.yaml
2. Re-run the import-io extractor:new scaffold command with the --auth flag to generate the needed dependencies
3. Fill out the parameters and train the generated files as usual
4. Fill out the credentials.yaml file in the same directory as the extractor.yaml as necessary (more on Credentials files below)
Credentials
Credentials used to log in to a target website can be stored in a credentials.yaml file.
When deploying, the credentials object is safely encrypted and stored by import.io. These credentials are passed into the input on an action as the key _credentials at runtime.
For security reasons it is recommended that credentials.yaml files be gitignored.
Example:
Below is an example credentials.yaml file.
In the example the default credentials are username: [email protected] and password: meep.
Branch-specific credentials can be stored in the branches section. In the example, the dev branch references different credentials than the default ones. If no branch-specific credentials are specified, default will be used.
---
default:
  username: [email protected]
  password: meep
branches:
  dev:
    username: [email protected]
    password: otherPassword123
Scaffolding
When creating a new robot using the import-io robot:new command, --authentication and --checkAuthentication flags can be provided to point to the respective entry points.
When creating a new extractor using the import-io extractor:new command, the --auth flag can be provided, which will scaffold out the needed dependencies and add a credentials.yaml file to the extractor directory.
Testing
For testing Authenticated Extractors locally the import-io extractor:run:[local or remote] commands are recommended.
The extractor:run commands will sequentially execute the following entry points for an Authenticated Extractor:
1. checkAuthentication
2. authentication (if checkAuthentication fails)
3. entryPoint
By default browser sessions are cached for 15 minutes. You can clear your browser state by running import-io cache:clear.
Documentation for these commands can be found in the import-io CLI documentation.
Deploying
Authenticated Extractors can be deployed to SaaS or Workbench, though the restrictions of doing so slightly differ
Workbench
Deploying an Authenticated Extractor as a Source to Workbench using the import-io source:deploy command requires:
- A valid User Token configured in the developer environment
- The User Token must belong to the Organization you are attempting to deploy to
Only the default credentials will be used. Credentials management in Workbench is coming soon.
Running on Workbench
It is important to note that in order to run an Authenticated Extractor on Workbench you must first have the "Legacy Platform Id" saved on the Organization. This ID is used for security purposes to validate that the extractor is authorized to log in as the saved user on the extractor and perform the web automation it requires for data extraction. | https://docs.import.io/workbench/current/extractor-studio/authextractors.html | 2021-06-13T03:09:02 | CC-MAIN-2021-25 | 1623487598213.5 | [] | docs.import.io |
A Compound is a set of data that describes the properties of the substance whose behavior is to be simulated. These properties are defined within the building block Compound. For each project, several compounds may be defined. The compounds defined can be saved as a templates and then be shared among several projects and users.
To create a new compound, do one of the following:
Click on Compound in the Create New Building Blocks Group of the Modeling & Simulation Tab
Right mouse click on Compounds in the Building Block Explorer and select Add Compound...
Use the short cut Ctrl+Alt+C
A dialog will open, where the properties of the compound can be defined. The compound is initialized by giving it a Name in the respective input field. The name is used to identify the substance when its parameters are saved in the project and/or as a template. The properties of the compound can then be set or changed:
The Create Compound building block is subdivided into three tabs: Basic Physico-chemistry, ADME Properties, and Advanced Properties.
With the first checkbox, one can define the compound as being a small or a large molecule, such as a protein. If Is small molecule is de-selected, the permeability for passive diffusion into blood cells and into the intracellular space of the organs as well as the intestinal permeability will be set to zero. If the drug is a small molecule and is used in a Model for proteins and large molecules, the drug will not enter the endosomal space (see Modeling of Proteins).
The basic physico-chemical properties have to be specified in the Basic Physico-chemistry tab. In many cases, drug properties have numerous values determined using various methods or assays (e.g. logMA, logP, and clogP for lipophilicity). You are able to specify several alternative values. Later, in the simulation, you can choose the most appropriate value from the list.
To add an alternative value:
Click Add at the end of a row.
Enter the alternative name.
If desired, enter a short description in the respective input field.
Click OK.
To delete an alternative value, click the delete icon at the end of the row and confirm with Yes.
If several alternative values have been defined, you can select a default one by ticking the check box. When setting up the simulation, a value set as default will be ranked first. The alternative values can still be selected, if desired.
Please note that a value set as default cannot be deleted. In order to delete the value, define another default value.
Lipophilicity
As lipophilicity input, the partition coefficient between lipid membranes and water, i.e. the membrane affinity (logMA) is recommended. Alternatively, other lipophilicity values (e.g. logP, clogP) can be used, but in this case the quality of the simulation results might be affected. The type of lipophilicity measurement can be described in the first column (experiment).
Lipids in organ tissue are predominantly present in the form of phospholipid membranes. The best descriptor for lipophilicity is the partition coefficient between lipid membranes and water, as determined at physiological pH [43]. This is called membrane affinity and the value to be entered is the logMA. It is recommended to use these membrane affinities as input parameters for PK-Sim®. With their use, it is very likely that specific organ and intestinal permeability coefficients are obtained that require no or only marginal adjustment.
If the membrane affinity is not available, other lipophilicity values can be used as surrogates. The membrane/water partition coefficient is predominantly affected by two contributions. A real lipophilicity, which describes the partitioning into the lipid core of a membrane, and the interaction between a molecule and the phospholipid head groups. Particularly for charged substances this can lead to large differences between membrane affinity and other lipophilicity descriptors. A common observation is that membrane affinity is much less pH dependent than e.g. logD [21].
For this reason it is recommended to use a lipophilicity value for the neutral form, e.g. logP, as a replacement for membrane affinity if membrane affinity is not available. A reasonable variation around the logP value should be allowed since this parameter is not 1:1 correlated with membrane affinity.
Fraction Unbound (plasma, reference value)
The free fraction of drug in plasma (fu) is a mixed parameter depending on both the species and the drug. Thus, it might be necessary to define several values for one compound, namely one for each species to be simulated. The respective species can be selected in the Species column from the drop-down menu.
Later, during the create simulation process, the appropriate value can be selected from the alternatives defined here.
In the uppermost row of this field the user is asked to decide whether the drug is predominantly bound to either albumin or alpha1-acid glycoprotein. Depending on the predominant binding partner in plasma, the corresponding ontogeny function underlying PK-Sim® will be used for scaling the plasma protein binding in children. If this information is not available or needed, you can also select unknown and the reference value selected in the simulation will be used irrespective of the age of the individual.
In order to modify the fraction unbound as a function of disease please use the Plasma protein scale factor defined in the Individual building block. With the help of this factor, the fraction of drug bound to either protein can be scaled up or down. The resulting fraction unbound parameter used in the simulation can be found in the list of parameters of the Simulation under the header Distribution.
If the fraction unbound is known for one species, e.g. rat, but unknown for another one, e.g. the dog, it is technically possible to simulate pharmacokinetics in the dog using the fraction unbound defined for the rat. In other words, PK-Sim® does not judge the consistence of the combination of the species and the fraction unbound. However, in this case the value should only be considered as a best guess and a reasonable variation around the fu values should be allowed.
Similarly, for the scaling of pharmacokinetics from one species to another, make sure that not only the building block Individual is replaced but also mixed parameters such as fraction unbound in plasma and clearance pathways and/or expression data are changed appropriately.
Molweight
In the first line the molecular weight (MW) of the substance is specified. For substances containing halogen atoms, the number of these atoms should be chosen from the drop-down menu that can be opened next to the Has Halogens field. This input is used to calculate an effective molecular weight, which is needed to estimate permeability values. It takes into account the small contribution of halogens to the molecular volume in relation to their weight. After the nature and number of halogens have been entered, the effective molecular weight is calculated automatically.
Even though the property determining the diffusion coefficient is the molecular volume rather than the weight, only the latter is commonly available and has therefore been chosen as an easily accessible input parameter. However, in some cases this leads to inaccurate results, particularly since halogen atoms have a much smaller volume than what would be expected from their weight. Therefore, for substances containing such atoms, "effective molecular weights" based on the following correction are used (N = number of atoms per halogen, CF = correction factor): Effective Molecular Weight = MW - sum over all halogens of (N * CF), with CF = 17 for fluorine, CF = 22 for chlorine, CF = 62 for bromine, and CF = 98 for iodine (see [93]).
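A minimal sketch of that correction follows; the function name and data structure are illustrative, not part of PK-Sim.

HALOGEN_CORRECTION = {"F": 17.0, "Cl": 22.0, "Br": 62.0, "I": 98.0}  # correction factor per atom

def effective_molecular_weight(mw, halogen_counts):
    """Subtract the halogen correction factors from the measured molecular weight.

    halogen_counts, e.g. {"Cl": 2}, gives the number of atoms per halogen."""
    correction = sum(HALOGEN_CORRECTION[h] * n for h, n in halogen_counts.items())
    return mw - correction

# Example: a compound of 350 g/mol carrying two chlorine atoms
print(effective_molecular_weight(350.0, {"Cl": 2}))  # 306.0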
Compound type / pka
The type of compound: neutral, base, or acid. In case the compound is a base or an acid choose either Base or Acid from the drop-down menu. You will then be able to specify the respective pka(s). Up to three pka values can be specified.
pka values always refer to the pka value of the acidic form of the compound. The compound type defines whether the pka value refers to the uncharged acid "HA" (= type acid; the compound is charged when it dissociates to H+ and A-) or to the conjugated acid of a base "BH+" (= type base; the compound is uncharged when it dissociates to H+ and B). In other words, the compound type always refers to the uncharged form of the molecule.
The pka values are used for the calculation of pH-dependent changes in solubility in the gastrointestinal tract. Furthermore, when using the distribution model (see Creating new simulations in PK-Sim®) of Rodgers and Rowland or the model of Schmitt the compound type is a basic parameter for calculating the partition coefficients. It is furthermore used by the two charge-dependent methods of Schmitt to calculate the permeability of the barrier between interstitial and cellular space.
Solubility
The solubility of the compound (in the intestine): The solubility can be specified together with the type of measurement or the medium used (first column, Experiment). The corresponding unit can be chosen from the drop-down menu in the second column (Solubility at Ref-pH). For charged compounds, the pH value at which the solubility of the compound was measured should be given in the third column (Ref-pH). In the fourth column, the Solubility gain per charge can be modified, which defines the factor by which the solubility increases with each ionization step. In order to calculate the charge of the molecule, the fraction of each microspecies is calculated according to the Henderson-Hasselbalch equation for a given pH. This is done across the entire pH-range such that the fractions are used to calculate the probability with which a molecule is in a certain ionization state. Based on this information, the pH-dependent solubility of molecules with one or more ionizable groups is calculated. By clicking on Show Graph, the pH-dependent solubility across the whole pH range calculated based on the experimental solubility at the defined pH is shown. For neutral compounds the input fields Ref-pH and Solubility gain per charge and the graph are irrelevant.
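As an illustration of the Henderson-Hasselbalch reasoning described above, the sketch below computes a pH-dependent solubility for a compound with a single acidic pKa. It is a simplified monoprotic example with an assumed gain-per-charge value, not the full multi-pKa algorithm used by PK-Sim.

def solubility_at_ph(s_ref, ref_ph, pka, target_ph, gain_per_charge=1000.0):
    """pH-dependent solubility of a monoprotic acid (simplified illustration).

    The ionized fraction follows the Henderson-Hasselbalch equation, and each
    ionization step increases solubility by the assumed gain_per_charge factor
    (corresponding to the "Solubility gain per charge" input)."""
    def ionized_fraction(ph):
        return 1.0 / (1.0 + 10.0 ** (pka - ph))   # fraction of the charged species

    def relative_solubility(ph):
        f_ion = ionized_fraction(ph)
        return (1.0 - f_ion) + f_ion * gain_per_charge

    return s_ref * relative_solubility(target_ph) / relative_solubility(ref_ph)

# Example: solubility measured as 0.05 mg/mL at pH 2 for an acid with pKa 4.5
for ph in (2.0, 5.0, 6.5, 7.4):
    print(ph, round(solubility_at_ph(0.05, 2.0, 4.5, ph), 3), "mg/mL")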
In the simulation, the intestinal solubility can be displayed for each segment based on the inputs made here and the pH values in the gastro-intestinal tract of the individual used in the simulation.
The solubility of the compound is only needed for the oral administration route. Additionally, it can be taken into account if e.g. a Noyes-Whitney dissolution is assumed for other routes of administration such as intramuscular or subcutaneous drug administration. However, for this purpose, the dissolution function has to be defined in MoBi®.
First estimates can be made using water solubility. However, especially for lipophilic compounds this value might underestimate the solubility in the intestine so that it is better to use a value obtained under bio-relevant conditions (e.g. in Fasted State Simulated Intestinal Fluid, FaSSIF). If different values are available for one compound (e.g. in FaSSIF and in Fed State Simulated Intestinal Fluid, FeSSIF), several alternative solubility values can specified and the appropriate value can then chosen in the Simulation.
Intestinal solubility can also be defined as a linear interpolation of measured (pH, Solubility) data pairs.
After having defined the basic physico-chemical properties of the compound, processes known to be involved in its distribution and elimination can be specified in the ADME tab. The ADME tab is accessible either by clicking Next or by directly clicking on the respective tab in the Create Compound window.
Five kinds of processes can be defined in the ADME tab depending on the type of interaction between the compound and the biological entity influencing the pharmacokinetics of the drug in vivo:
Absorption
Distribution
Metabolism
Transport & Excretion
Inhibition
Induction
For each of these items one or more ADME processes can be defined in order to systematically collect all available information on absorption, degradation, transport and binding processes from e.g. in vitro assays and use this information to obtain specific kinetic rates used in the simulation.
A general workflow for defining a specific process in Protein Binding Partners, Metabolizing Enzymes, Total Hepatic Clearances, Transport Proteins, Renal Clearances, Biliary Clearances is as follows:
Right click on the biological process you want to add to (e.g. Metabolizing Enzymes in the Metabolism branch, Renal Clearances in the Transport & Excretion branch, …).
Click on the Add … command (e.g. Add Metabolizing Enzyme …).
Enter a name for the biological process you want to add.
Enter a name for the data source (e.g. in vitro assay, literature, laboratory results).
Select the process type from the list.
Enter the required input parameters (see tables below for an overview of the input parameters for each process type).
If physiological parameters are based on in vivo measurements, e.g. the intrinsic clearance, the respective species used in the experiment has to be selected.
Click OK.
After definition of the required parameters the specific clearance or kinetic rate constant used in the simulation is automatically calculated taking into account the parameters listed under Calculation parameters.
Specifying a value for Specific clearance, which is normally calculated automatically by PK-Sim®, will overwrite the original formula. This is indicated by a symbol next to the value. The formula can be reset by clicking on this symbol.
After having defined the biological properties of the compound, you will have to link specific processes to enzymatic, transport, and binding settings defined for the selected individual/species in the Simulation. This is described in Select relevant biological processes.
In the following an overview of the process types is given that can be defined for the different biological properties including additional information on the required input parameters.
Calculation of Specific Intestinal Permeabilities
Within the PK-Sim® standard package, transcellular specific permeability of the intestinal wall is deduced from physico-chemical properties.
In addition to the calculated specific intestinal permeability, experimentally determined permeabilities, e.g. from Caco-2-cell permeability assays can be used. However, due to the large inter-laboratory variability in Caco-2 permeations, a proper calibration of the measured in vitro values and the calculated in silico permeabilities for a defined set of compounds is necessary. If experimentally determined values for intestinal permeabilities are available and the customized calibration method has been implemented in PK-Sim®, this option is then available in the drop-down menu in the Calculation methods window.
Specific Intestinal Permeability
Similarly, the specific intestinal permeability, i.e. the surface area-normalized transcellular permeability of the innermost layer of the intestinal wall, is calculated from the drugs´ lipophilicity and effective molecular weight. The paracellular pathway has been shown to have no impact on the accuracy of prediction of the fraction dose absorbed in humans [79] and is therefore not accounted for, i.e. the value for the paracellular specific permeability is not automatically calculated. However, the paracellular pathway can be included in the simulation, if desired. You will find the parameter Intestinal permeability (paracellular) in the simulation within the parameter group Permeability.
For acids and bases, the transcellular intestinal permeability can be dynamically calculated throughout the intestinal tract based on the pH within the intestinal segments. Per default it is assumed that the pH-effect on the intestinal permeability is already reflected by the measured membrane affinity used as input and thus, the specific transcellular permeability is constant over the whole intestine. However, this parameter can be adjusted manually, if desired. You will find the parameter Use pH- and pKa-dependent penalty factor for charged molecule fraction in the simulation within the parameter group Permeability.
In case more than one lipophilicity value has been specified all corresponding permeability values calculated are displayed in the drop down list that opens if you click on Show Values. Later, in the Simulation, you can select which lipophilicity value is to be used for the calculation of the specific intestinal permeability or you can select the manually entered specific intestinal permeability. It is possible to use experimentally determined intestinal permeabilities, e.g. taken from Caco2- cell permeation experiments, as input instead of the calculated permeabilities.
In contrast to the procedure for permeability of organ membranes, the relation between intestinal permeability and the molecular properties of the compound was generated using experimental fraction of dose absorbed values. It was optimized to provide the best prediction of total fraction absorbed (for details see [79]).
In the simulation parameters, the calculated specific intestinal permeability (transcellular) cannot be modified under the compound properties of the simulation. The appropriate simulation parameter can be found under the tree header "permeability". Please note that if the (calculated or manually entered) intestinal permeability (transcellular) is modified in the simulation, the permeability between the intracellular and interstitial space within the mucosa (P (intracellular -> interstitial)) will also automatically be scaled by the same factor. Otherwise, a disproportion in the permeability of the apical and basolateral sides of the enterocytes could be produced, leading to an accumulation of drug in the enterocytes. Likewise, a factor between the calculated intestinal permeability (transcellular) and an optional manual entry will be calculated to scale the permeability of the basolateral side of the enterocytes (P (intracellular -> interstitial)) appropriately.
If experimental values for intestinal permeability are available, e.g. from Caco2-cell permeability assays, a calibration of these in vitro values has to be performed for a defined set of compounds before they can be used as input parameters. This is due to the high inter-laboratory variability in absolute permeability values. In this calibration the fractions of dose absorbed of the set of substances are correlated with the measured permeabilities. For new compounds, the corresponding intestinal permeability used in PK- Sim® is automatically calculated based on the Caco2 permeability value input. If you require an expert calibration of a defined set of experimentally determined permeabilities derived from in vitro assays, please contact your PK-Sim® support ().
Partition coefficient calculation methods
Two parameters determine the rate and extent of passive distribution in the body: steady state organ-plasma partition coefficients as well as permeability surface area (PxSA) products of each organ.
The partition coefficients are calculated from the physico-chemical data of the compound currently active in the simulation.
How are model parameters predicted in PK-Sim®?
PBPK modeling requires many substance-specific parameters, which are usually unknown and rarely accessible directly. These include the organ/plasma partition coefficients, the permeability surface area products and intrinsic clearances. The difficulty in gathering this type of data is one of the major reasons that prevented a more widespread use of PBPK-modeling in the past. PK-Sim® addresses and solves this issue by including several published and proprietary methods for parameter deduction from physico-chemical data, which are easily experimentally accessible and are, in most cases, frequently determined during the course of drug development.
How are organ/plasma partition coefficients deduced from physico-chemical parameters?
Organ/plasma partition coefficients are based on the concept of partition coefficients between drug binding tissue constituents and water. These include lipid/water and protein/water partition coefficients. Several similar concepts for utilizing such partition coefficients and the composition of organ tissue to calculate the organ/plasma partition coefficients have been published recently (see [53],and [86]b for examples, an overview is given in [32]). Even though the idea is very similar in all cases, they deviate in the kind of parameters that they use. In PK-Sim® there are five ways to calculate the partition coefficients for the organs: The PK-Sim® standard model, which is described in more detail below, and the approaches developed by Rodgers & Rowland, Schmitt, Poulin & Theil, and Berezhkovskiy. The mechanistic equations for the different models are found in the respective literature ([53], [59], [62], [60], [61], [68], [54], [55], [52], [5]). In the PK-Sim® standard model the partition coefficients are calculated using the following equation:
K (organ/plasma) = fu * (f_water + f_lipid * K (lipid/water) + f_protein * K (protein/water))
with f_water, f_lipid, f_protein = volume fractions of water, lipid and protein in the organ, K (lipid/water) = lipid/water partition coefficient, K (protein/water) = protein/water partition coefficient, and fu = free fraction in plasma.
Partition coefficients are derived from input data as follows:
- K (lipid/water): the value entered as Lipophilicity is directly used.
- K (protein/water): calculated from Lipophilicity using a correlation determined experimentally by measuring the unspecific binding to the tissue protein fractions of various organs for a large set of diverse compounds.
A numerical sketch of this calculation is shown below.
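The following sketch evaluates the composition-based form given above. All organ composition numbers and partition coefficients in the example are made up for illustration; they are not PK-Sim's built-in values.

def organ_plasma_partition(fu, f_water, f_lipid, f_protein, k_lipid, k_protein):
    """Standard-model organ/plasma partition coefficient:
    K = fu * (f_water + f_lipid * K_lipid + f_protein * K_protein)."""
    return fu * (f_water + f_lipid * k_lipid + f_protein * k_protein)

# Illustrative numbers only: fu = 0.1, K_lipid/water = 500, K_protein/water = 20
print(organ_plasma_partition(fu=0.1, f_water=0.75, f_lipid=0.05,
                             f_protein=0.20, k_lipid=500.0, k_protein=20.0))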
Drug partitioning between plasma and red blood cells is calculated in an analogous manner to the organ/plasma partition coefficients, using the composition of red blood cells.
The only exceptions are the Schmitt model that additionally takes into account the amount of acidic and neutral phospholipids as well as neutral lipids, and the Rodgers & Rowland model, if experimental data for blood-to-plasma concentration ratios (B:P) are available.
In the Schmitt partition model, Krbc is obtained from the same composition-based approach applied to the red blood cell constituents, i.e. their water, protein, neutral lipid, and neutral and acidic phospholipid content.
If a value for B:P is used in the Rodgers & Rowland model, Krbc is calculated as follows:
Krbc = (BPratio - (1 - HCT)) / HCT
where HCT is the hematocrit and BPratio is the blood-to-plasma concentration ratio.
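A small numerical sketch of that relationship; the B:P value and hematocrit below are illustrative only.

def k_rbc_from_blood_plasma_ratio(bp_ratio, hematocrit):
    """Red-blood-cell/plasma partition coefficient derived from a measured B:P ratio."""
    return (bp_ratio - (1.0 - hematocrit)) / hematocrit

# Example: B:P = 1.2 at a hematocrit of 0.45
print(round(k_rbc_from_blood_plasma_ratio(1.2, 0.45), 3))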
Five different methods for the calculation of organ-plasma partition coefficients are available in PK-Sim®. No general rules have emerged to determine which distribution model is best suited based on knowledge about the substance properties. However, some trends are contained within the different model foundations and assumptions as outlined below:
Cellular permeability calculation methods
The rates of permeation across the cell membranes (interstitial-cell barrier) depend on the permeability surface area (PxSA) products of each organ. The permeability values (the part of the PxSA-products that is substance-dependent) are proportional to the permeability of a phospholipid bilayer for the simulated substance. They are calculated from the physico-chemical data of the compound currently active in the simulation.
How are permeability surface-area (PxSA) products predicted in PK-Sim®?
As a first approximation it can be assumed that all mammalian lipid membranes have the same permeability for a given substance. Of course this it not strictly true, because permeability depends on the composition of a membrane; the types of phospholipids and the content of cholesterol influence the rates with which a substance passes through the membrane [24] [9]. However, within the accuracy with which it is possible to estimate permeability from compound properties, it is permissible to make this simplifying assumption. Under these presumptions the PxSA-products are composed out of a compound specific term (permeability) and a species or physiology specific term (surface area).
Because it is difficult to determine PxSA-products or their two components explicitly, the calculation method incorporated into PK-Sim® is based on the following procedure [36]:
First, PxSA-products were previously determined by fitting simulations to experimental concentration-time curves for the different organs. Secondly, such pinned values are scaled by the organ volume to take the change of surface area, e.g. from species to species, into account. Furthermore, it is assumed that permeability is proportional to the partition coefficient and the diffusion coefficient, the latter of which depends on the effective molecular weight of the compound.
There are three different methods available in PK-Sim® to calculate the permeability parameters for the barriers between interstitial space and intracellular space which can be chosen from the drop-down menu:
Specific organ permeability
The specific organ permeability, i.e. the organ permeability normalized to the surface area, represents the part of the permeability times surface area (PxSA)- products that is substance-dependent and they are proportional to the permeability of a phospholipid bilayer for the simulated substance. They are calculated from the physico-chemical data of the compound, namely the lipophilicity and the effective molecular weight. If different lipophilicity values have been specified several permeability values based on these alternative values are displayed in the drop down list that opens if you click on Show Values. If available, further permeability values can be entered manually. You can later chose the lipophilicity value that is to be used in the Simulation from the values specified here.
As a first approximation it can be assumed that all mammalian lipid membranes have the same permeability for a given substance. Of course, this is not exactly true because organ permeability depends on the composition of the membrane. The types of phospholipids and the content of cholesterol influence the rates with which a substance passes through the membrane [24] [9]. However, within the accuracy with which it is possible to estimate the permeability from compound properties, it is permissible to make this simplifying assumption. Under these presumptions, the organ PxSA-products are composed out of a compound specific term (permeability) and a species or physiology specific term (surface area).
Because it is difficult to determine PxSA-products or their two components explicitly, the calculation method incorporated in PK- Sim® is based on the following procedure [36]:
First, PxSA-products were previously determined by fitting simulations to experimental concentration-time curves for the different organs. Second, such pinned values are scaled by the organ volume to take the change of surface area, e.g. from species to species, into account. Furthermore, it is assumed that permeability is proportional to the partition and diffusion coefficient, the latter of which depends on the effective molecular weight of the compound.
Specific Binding
Distribution of a compound is also influenced by specific binding to proteins either in plasma, interstitial or intracellular space. It is possible to define such specific protein binding processes in the Specific Binding/Protein Binding
Partners branch. When setting up a simulation the binding partner defined in the Compound Building Block has to be linked to the protein defined in the Individual Building Block as binding partner.
Protein Binding Partners
Sometimes enzymes that catalyze a metabolic degradation process can also bind the compound at a binding site different to the catalytically active center. It is therefore possible to link an enzyme defined in the individual/species to both a metabolic and a binding process when setting up a simulation.
Depending on the available experimental information you can either define process types in the Metabolizing Enzymes branch or the Total Hepatic Clearance branch. Please note that the calculation sheet offered for metabolizing enzymes refers to the liver in case of intrinsic clearance processes and in all other cases to the organ in which the respective enzyme is expressed. Using this calculation sheet, input values will be transferred to specific clearance values which are then used in the simulation. The sheet is only meant to help the user with the calculations. However, processes defined here may also be applied to other organs given that relevant expression levels are appropriately defined in the individual.
Metabolizing Enzymes
For calculation of in vivo clearance or Vmax values from in vitro values obtained from microsomal assays the content of the CYP enzyme defined as the process type has to be specified. The default value in PK-Sim® is 108 pmol/mg microsomal protein which is the CYP3A4 protein content in liver microsomes [63]. CYP enzyme contents in liver microsomes from this reference are shown when you move the mouse over the parameter Content of CYP proteins in liver microsomes. If you have defined other than these CYP enzymes, please insert the corresponding value in PK-Sim®.
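For illustration, converting an in vitro rate measured per mg of microsomal protein into a rate per pmol of the defined CYP enzyme only requires the enzyme content discussed above. The numbers and the normalization target are assumptions for this sketch, not PK-Sim's full in vitro-in vivo scaling.

def vmax_per_pmol_enzyme(vmax_per_mg_protein, cyp_content_pmol_per_mg=108.0):
    """Normalize an in vitro Vmax (e.g. pmol/min/mg microsomal protein)
    to the amount of the specific CYP enzyme (pmol/min/pmol CYP).

    108 pmol/mg is the CYP3A4 content in human liver microsomes quoted above;
    replace it when another enzyme is defined."""
    return vmax_per_mg_protein / cyp_content_pmol_per_mg

# Example: Vmax of 540 pmol/min/mg microsomal protein measured for a CYP3A4 substrate
print(vmax_per_pmol_enzyme(540.0))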
Total Hepatic Clearance
Total hepatic clearance is a systemic process that does not have to be linked to properties defined in an individual/species when generating a simulation.
Drug transport across endothelial, epithelial or cellular barriers is responsible for distribution and renal or biliary elimination of a compound. Different experimental approaches are available either to determine rate constants or organ clearances. Depending on the experimental data available you can define different process types for your compound in the Transport Proteins branch, the Renal Clearances branch or the Biliary Clearance branch.
Transport Proteins
Renal Clearances
Kidney Plasma Clearance is a systemic process that does not have to be linked to properties defined in an individual/species in a simulation. In the case of the Glomerular Filtration, the individual/ species-dependent GFR represents a default value defined in the Individual building block.
Biliary Clearance
Biliary clearance is systemic process that does not have to be linked to properties defined for an individual/species when establishing a simulation.
A metabolite of a compound can be defined and used either as a "sink" or treated like any other compound. See How to set up a parent/metabolite simulation for details.
After the biological properties have been specified, further parameters can be defined in the Advanced Parameters tab. The Advanced Parameters tab can be opened either by clicking Next or by clicking on the Advanced Parameters tab.
Additional compound-related parameters can be defined here that are needed in case the particle dissolution function (see Formulations) or the model for proteins and large molecules (see Modeling of Proteins) are used in the simulation. In all other cases, the parameters defined in the Advanced Parameters tab will not be used in the simulation and can be left unchanged.
Particle dissolution
The particle dissolution function can be used for the simulation of the dissolution process of spherical particles administered orally and represents a dissolution function of the Noyes-Whitney type that is based on particle size [102].
In the Advanced Parameters tab the compound-related parameters needed for calculation of dissolution kinetics of spherical particles can be defined, namely:
• how the precipitated drug is treated (either as Soluble or Insoluble)
• the aqueous diffusion coefficient D
• the density of the drug in its solid form
• and the maximum size of particles that dissolves immediately
Further parameters such as the mean particle size and the particle size distribution, the number of bins and the diffusion layer thickness are considered to be related to the formulation and thus can be defined in the Formulation Building Block (see Formulations).
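The sketch below shows a generic Noyes-Whitney-type dissolution rate for a single spherical particle, combining the compound parameters listed above with the formulation parameters mentioned here. It illustrates the type of equation only, not PK-Sim's exact implementation, and all numbers are made up.

import math

def dissolution_rate(radius, diffusion_coeff, solubility, bulk_conc, layer_thickness):
    """Noyes-Whitney-type mass flux (amount/time) from one spherical particle.

    dM/dt = D * A / h * (Cs - C), with A the particle surface area,
    h the diffusion layer thickness, Cs the solubility and C the bulk concentration."""
    surface_area = 4.0 * math.pi * radius ** 2
    return diffusion_coeff * surface_area / layer_thickness * (solubility - bulk_conc)

# Illustrative SI-unit example: 10 micrometer particle, D = 5e-10 m^2/s,
# solubility 0.1 kg/m^3, sink conditions (C = 0), 30 micrometer diffusion layer
print(dissolution_rate(10e-6, 5e-10, 0.1, 0.0, 30e-6))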
Model for proteins and large molecules
Four drug-related parameters which are used in the model for proteins and large molecules can be defined in the Advanced Parameters tab, namely:
• the solute radius, i.e. the hydrodynamic radius of the drug. The default value for the solute radius is estimated from the molecular weight defined in the Basic Physico-chemistry tab
• Kd (FcRn) in endosomal space: the dissociation constant for binding to FcRn in the acidic endosomal space. By default, this value is set to a very high value, i.e. no binding is assumed.
• Kd (FcRn) in plasma/interstitial: the dissociation constant for binding to FcRn in plasma and the interstitial space (neutral environment). By default, this value is set to a very high value, i.e. no binding is assumed. For monoclonal antibodies the binding to FcRn in a neutral environment is generally very weak or not detectable. In this case the high default value for Kd (FcRn) in plasma/interstitial space can be kept.
• kass (FcRn): association rate constant for binding to FcRn for the acidic endosomal space as well as for plasma/interstitial space. The default value is a typical value for monoclonal antibodies and can usually be kept.
After all information about the compound properties has been entered, the Create Compound window can be closed by clicking OK . The new compound will appear in the Building Block Explorer view.
To set or change the properties of an existing compound:
Right mouse click on the respective compound in the Building Block Explorer
Select Edit...
or simply double click on the compound in the Building Block Explorer.
A window with the three tabs Basic Physico-chemistry, ADME Properties and Advanced Parameters will open. The properties can be set or changed appropriately. The changes can be saved by closing the window by clicking OK.
To clone a compound in the project:
Right mouse click on the respective compound in the Building Block Explorer
Select Clone...
Enter an alternative name for the compound clone and enter a description, if desired.
Confirm and close the window by clicking OK
For each project, several compounds can be defined. They can be saved as templates and then be shared among several projects and users.
To save an existing compound as template:
Right mouse click on the respective compound in the Building Block Explorer
Select Save as Template...
In case a compound with the same name already exists, a warning appears and you have the following options:
Override: This action will override the existing template.
Save as: You can save the compound under a different name. In this case, you will be asked to Rename the new template.
Cancel: This action will abort the saving process.
As mentioned before, the compounds defined in a project can be saved as templates and then be shared among several projects and users.
To load an existing compound from the template database:
Right mouse click on Compounds in the Building Block Explorer
Select Load From Template...
Select the desired compound from the user templates
In case a compound with the same name already exists in the project, a warning pops up and you will have to Rename the compound that is to be loaded from template.
Click OK
The selected compound will appear in the Building Block Explorer view.
Compounds can also be directly loaded from the template database within a simulation.
To delete a compound from the project:
Right mouse click on the respective compound in the Building Block Explorer
Select Delete...
Confirm by clicking Yes
Please note that a compound that is used in any simulation of the project cannot be deleted. | https://docs.open-systems-pharmacology.org/working-with-pk-sim/pk-sim-documentation/pk-sim-compounds-definition-and-work-flow | 2021-06-13T02:07:12 | CC-MAIN-2021-25 | 1623487598213.5 | [] | docs.open-systems-pharmacology.org |
Version: 2.11
Navigation
Introduction
The navigation section allows you to configure what information is visible on the header and footer navigation bars of your website.
How to
How to create your navigation structure
Creating the navigation structure is done by dragging and dropping. Simply create a new menu item and then drag it to the desired place. You can move items inside one another to create a tree structure and drag items up and down to create a hierarchy.
When you have more than one layer, the property to the far left will appear as the main item and the indented items become sub-elements, as in the following example.
What you'll see in the dashboard:
How it will appear on your storefront:
How to manage navigation items
You can easily view, edit, or remove menu items using the icons on the right side of the sliders. | https://docs.saleor.io/docs/dashboard/configuration/navigation/ | 2021-06-13T02:21:48 | CC-MAIN-2021-25 | 1623487598213.5 | [array(['/assets/images/config-navigation-example-9ff2c1343dc7d3f8dd8601ab02953c33.jpeg',
'Navigation displayed on the site'], dtype=object)
array(['/assets/images/config-navigation-setup-8f794a128177ff7608265192d78ac8aa.jpeg',
'Navigation configuration'], dtype=object)
array(['/assets/images/config-navigation-example-footer-716c6200269ef59b0ebf2d671508836b.jpeg',
"Navigation displayed in the site's footer"], dtype=object) ] | docs.saleor.io |
The Comments tab contains a running list of comments related to the Network, its devices, or other components. The comments are logged as they are entered. Each comment includes a header with the author's name and the date the comment was created. Comments are separated by a series of dashed lines.
To create a comment:
1. On the Comments tab, click the New icon. The New Comment dialog window opens.
2. Enter comments. The Enter key can be used to create paragraph breaks while you are entering your comments.
3. Click OK. The New Comment window closes. Each new comment is added at the top.
4. For each additional comment, repeat steps 1-3.
5. Click Close when you are finished entering comments.
Source: https://docs.vmware.com/en/VMware-Smart-Assurance/10.1.3/ncm-online-help-1013/GUID-B569937D-3FD6-4DFF-AA6C-AA2FEB92571A.html
Pass your actual test with our Huawei H19-315 training material on the first attempt
Last Updated: Jun 09
We provide the most up-to-date and accurate H19-315 questions and answers, which are the best preparation for clearing the actual test. Instant download of the Huawei HCPA-Transmission&Access (Huawei Certified Pre-sales Associate-Transmission&Access) exam practice torrent is available to all of you. A 100% pass on the H19-315 actual test is our guarantee, and passing it can prove a great deal about your professional ability, so we are here to introduce our Huawei-certification H19-315 practice torrent to you. With heartfelt sincerity, we want to help you get acquainted with our H19-315 exam vce. The introduction is as follows.
Our H19-315 latest vce team builds the material on information and questions drawn from the real knowledge the exam requires of candidates. All these useful materials are the result of the hard work of our professional experts. They are not only professional experts who painstakingly dedicate themselves to this H19-315 training material, but they also pool ideas from various channels such as examiners, former candidates and buyers. To make the H19-315 actual questions more complete, they wrote our H19-315 prep training with a clear arrangement and a systematic compilation of content, so you do not need to plunge into numerous other materials to find the perfect one anymore. They will offer you the best help with our H19-315 questions & answers.
We offer three versions of the H19-315 practice pdf and help you give scope to your initiative according to your taste and preference. Tens of thousands of candidates have fostered their learning abilities by using our H19-315 updated torrent. Let us get to know the three versions of the H19-315 training vce we have developed for your reference.
The PDF version has a large number of actual questions and allows you to take notes when you meet difficulties, so you can notice misunderstandings in the process of reviewing. The worry that the APP version of the Huawei-certification H19-315 free pdf may be too large to afford is superfluous in reality. Our H19-315 exam training is of high quality and accuracy, accompanied by desirable prices that are affordable to everyone. And we offer some discounts at intervals; is not that amazing?
As online products, our H19-315 : HCPA-Transmission&Access (Huawei Certified Pre-sales Associate-Transmission&Access) useful training can be obtained immediately after you place your order. It is convenient to get. Although you cannot touch them, we offer free demos before you really choose among our three versions of H19-315 practice materials. Transcending distance limitations, you do not need to wait for delivery or go through the tiresome process of buying in a physical store; you can begin your journey as soon as possible. We promise that once you have experienced our H19-315 practice materials, you will be thankful for the rest of your life for the benefits they may bring in the future, so our Huawei H19-315 practice guide is not harmful to your personal interests but full of benefits for you.
Exam4Docs is the world's largest certification preparation company with 99.6% Pass Rate History from 69850+ Satisfied Customers in 148 Countries.
Source: https://www.exam4docs.com/hcpa-transmission-access-huawei-certified-pre-sales-associate-transmission-access-accurate-pdf-10635.html
AWS Systems Manager Inventory provides visibility into your computing environment by collecting metadata from your managed instances. You can store this metadata in a central Amazon S3 bucket and use built-in tools to query it, and you can configure and view inventory data from multiple AWS Regions and accounts.
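As a rough sketch of what turning on inventory collection looks like outside the console, the following Python snippet uses boto3 to create a State Manager association with the AWS-GatherSoftwareInventory document for all instances. The region, schedule, and the subset of parameters shown are illustrative assumptions; verify the parameter names against the document in your account and adjust to your environment.

```python
import boto3

# Assumed region; use the region your managed instances report into.
ssm = boto3.client("ssm", region_name="us-east-1")

# AWS-GatherSoftwareInventory is the AWS-owned document that Systems Manager
# uses to collect inventory. Targeting all instances mirrors enabling
# inventory for every managed instance in the Region.
response = ssm.create_association(
    Name="AWS-GatherSoftwareInventory",
    Targets=[{"Key": "InstanceIds", "Values": ["*"]}],
    ScheduleExpression="rate(30 minutes)",  # minimum supported collection rate
    Parameters={
        "applications": ["Enabled"],
        "networkConfig": ["Enabled"],
        "instanceDetailedInformation": ["Enabled"],
        "customInventory": ["Enabled"],
    },
)
print(response["AssociationDescription"]["AssociationId"])
```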
If the pre-configured metadata types collected by Systems Manager Inventory don't meet your needs, then you can create custom inventory. Custom inventory is simply a JSON file with information that you provide and add to the managed instance in a specific directory. When Systems Manager Inventory collects data, it captures this custom inventory data. For example, if you run a large datacenter, you can specify the rack location of each of your servers as custom inventory. You can then view the rack space data when you view other inventory data.
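For example, the rack-location scenario above could be captured with a small custom inventory file. The sketch below writes one on a Linux instance; the schema fields (SchemaVersion, TypeName, Content) follow the documented custom inventory format, while the instance ID, attribute names, and directory path are placeholders you would need to confirm for your own agent installation.

```python
import json
from pathlib import Path

# Custom inventory type names must start with "Custom:", and Content holds
# a list of string-valued attribute maps.
custom_inventory = {
    "SchemaVersion": "1.0",
    "TypeName": "Custom:RackInformation",
    "Content": [
        {
            "Location": "US-EAST-02",
            "RackNumber": "12",
            "ShelfPosition": "3",
        }
    ],
}

# Assumed path layout for SSM Agent on Linux; replace the instance ID with
# the actual ID of the managed instance.
instance_id = "i-0123456789abcdef0"
target_dir = Path(f"/var/lib/amazon/ssm/{instance_id}/inventory/custom")
target_dir.mkdir(parents=True, exist_ok=True)

(target_dir / "RackInformation.json").write_text(json.dumps(custom_inventory, indent=2))
print("Custom inventory written; it is picked up on the next collection cycle.")
```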
Systems Manager Inventory collects only metadata from your managed instances. Inventory doesn't access proprietary information or data.
The following table lists the types of metadata that you can collect with Systems Manager Inventory. The table also lists the instances you can collect inventory information from and the collection intervals you can specify. You can view the collected data in the Systems Manager console on the Inventory page, which includes several predefined cards to help you query the data.
Inventory cards automatically filter out Amazon EC2 managed instances with a state of Terminated and Stopped. For on-premises managed instances, Inventory cards automatically filter out instances with a state of Terminated.
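Besides the console cards, the same data can be queried programmatically. A minimal boto3 sketch is shown below; the region, filter key, and filter value are illustrative only.

```python
import boto3

ssm = boto3.client("ssm", region_name="us-east-1")  # assumed region

# Find managed instances whose AWS:Application inventory type reports a
# particular application name.
response = ssm.get_inventory(
    Filters=[
        {
            "Key": "AWS:Application.Name",
            "Values": ["amazon-ssm-agent"],
            "Type": "Equal",
        }
    ]
)

for entity in response.get("Entities", []):
    print(entity["Id"])  # instance ID that matched the filter
```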
If you create a resource data sync to synchronize and store all of your data in a single Amazon S3 bucket, then you can drill down into the data on the Inventory Detailed View page. For more information, see Querying inventory data from multiple Regions and accounts.
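Creating a resource data sync can also be scripted. The boto3 sketch below assumes the central S3 bucket already exists and has a bucket policy that allows Systems Manager to write to it; the sync name, bucket, prefix, and region are placeholders.

```python
import boto3

ssm = boto3.client("ssm", region_name="us-east-1")  # assumed region

# Sends inventory data from this Region to a central S3 bucket so it can be
# queried in one place (for example with the Inventory Detailed View or Athena).
ssm.create_resource_data_sync(
    SyncName="inventory-to-central-bucket",           # placeholder name
    S3Destination={
        "BucketName": "my-central-inventory-bucket",  # placeholder bucket
        "Prefix": "inventory",
        "SyncFormat": "JsonSerDe",                    # the only supported format
        "Region": "us-east-1",
    },
)
print("Resource data sync created.")
```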
EventBridge support
This Systems Manager capability is supported as an event type in Amazon EventBridge rules. For information, see Monitoring Systems Manager events with Amazon EventBridge and Reference: Amazon EventBridge event patterns and types for Systems Manager.
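A sketch of wiring this up with boto3 follows; the rule name and target ARN are placeholders, and the event pattern is kept deliberately broad (all Systems Manager events) because the exact detail-type strings should be taken from the EventBridge reference linked above.

```python
import json
import boto3

events = boto3.client("events", region_name="us-east-1")  # assumed region

# Match every event emitted by Systems Manager; narrow the pattern with
# "detail-type" once you know which event types you care about.
events.put_rule(
    Name="ssm-events-to-target",  # placeholder rule name
    EventPattern=json.dumps({"source": ["aws.ssm"]}),
    State="ENABLED",
)

# Route matched events to an existing SNS topic (placeholder ARN).
events.put_targets(
    Rule="ssm-events-to-target",
    Targets=[
        {"Id": "ssm-events-sns", "Arn": "arn:aws:sns:us-east-1:123456789012:ssm-events"}
    ],
)
```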
Contents
- Learn more about Systems Manager Inventory
- Setting up Systems Manager Inventory
- Configuring inventory collection
- Working with Systems Manager inventory data
- Working with custom inventory
- Viewing inventory history and change tracking
- Systems Manager Inventory walkthroughs
- Troubleshooting problems with Systems Manager Inventory
Figure: Systems Manager Inventory cards in the Systems Manager console.
Source: https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-inventory.html
PivotGridControl.GroupGeneratorStyle Property
Gets or sets a style that contains settings common to all field groups generated using different templates. This is a dependency property.
Namespace: DevExpress.Xpf.PivotGrid
Assembly: DevExpress.Xpf.PivotGrid.v21.1.dll
Declaration
public Style GroupGeneratorStyle { get; set; }
Public Property GroupGeneratorStyle As Style
Property Value
Type: Style. A style that contains settings common to all generated field groups.
Remarks
The PivotGridControl can be bound to a collection of objects containing field group settings, described in a Model or ViewModel. The Pivot Grid generates groups based on field templates. Using a single template, you can create an unlimited number of groups in an unlimited number of Pivot Grid controls.
To specify settings common to all groups generated using different templates, create a style and assign it to the GroupGeneratorStyle property.
To learn more, see Binding to a Collection of Groups.
Source: https://docs.devexpress.com/WPF/DevExpress.Xpf.PivotGrid.PivotGridControl.GroupGeneratorStyle
Permission Groups
#Introduction
Permission groups allow you to create and manage groups of staff members with the same permissions. You can think of them as different roles that staff members fulfill. Users do not have individual permission settings; a user's effective permissions are the sum of all the permissions granted by the groups they are a member of.
#Creating and managing permission groups
To create a group, use the Permission Groups section of the Configuration tab.
When creating or modifying a group you can select which permissions you want to give it.
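Groups can also be created outside the dashboard through Saleor's GraphQL API. A minimal Python sketch follows; the endpoint, token, permission codes, and the mutation and input field names (permissionGroupCreate, addPermissions, addUsers) are assumptions to verify against your Saleor version's schema, and the calling staff user must already hold the permissions being granted.

```python
import requests

API_URL = "https://example.com/graphql/"  # assumed endpoint
TOKEN = "your-staff-access-token"         # token of a user allowed to manage staff

# Field and enum names follow the public Saleor schema but should be checked
# against the schema of the version you run.
MUTATION = """
mutation CreateGroup($input: PermissionGroupCreateInput!) {
  permissionGroupCreate(input: $input) {
    group { id name permissions { code } }
    errors { field message }
  }
}
"""

variables = {
    "input": {
        "name": "Order managers",
        "addPermissions": ["MANAGE_ORDERS"],  # you can only grant permissions you hold
        "addUsers": [],                       # optionally add staff user IDs here
    }
}

resp = requests.post(
    API_URL,
    json={"query": MUTATION, "variables": variables},
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()
print(resp.json())
```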
note
The "manage staff" permission allows users to manage the permission groups. You can only manage groups that grant a subset of your effective permissions. You cannot assign permissions you do not currently hold. This is a security precaution that prevents users from escalating their permissions beyond what was explicitly granted.
Figures: Permission group management; Permission group details.
Source: https://docs.saleor.io/docs/dashboard/configuration/permission-groups/