Chart of Accounts
A chart of accounts is a complete list of the accounts your company can use to record all its business transactions in Codejig ERP and to create financial statements.
Codejig ERP provides a default chart of accounts for individual countries based on country-specific general guidelines. If you prefer, you can customize it to meet your business requirements or even create new accounts as needed.
You cannot delete any predefined categories, subcategories, groups, or active accounts, but you can reorganize the structure of the chart of accounts, change the details of each account, and rename each element of the chart.
For more information on how to manage the chart of accounts, see Managing Chart of Accounts.
To go to the Chart of accounts:
1. On the Codejig ERP Main menu, click the Settings tab.
2. Under the Settings tab, click Chart of accounts checking.
More information
Manage Chart of Accounts
Chart of Accounts Structure
Business Network Operator project planning
When planning a Corda deployment as a Business Network Operator, there are several considerations:
- Deployment environments
- Notary compatibility
- HSM compatibility
- Database compatibility
- Corda Enterprise Network Manager deployment
The Business Network Operator is responsible for all major components of the Corda network. In most enterprise deployments of Corda, this includes nodes, an HA notary cluster, an HA Corda Firewall, an HSM, the certificate hierarchy of the network, the Identity Manager, and the Network Map.
This likely includes a Corda Enterprise Network Manager as well as Corda Enterprise.
Deployment environments
Business Network Operators will need several deployments of Corda Enterprise, at least including:
- A development environment including minimal network infrastructure.
- A testing environment including a basic network, without HA notary, Corda Firewall, or HSMs.
- A UAT environment that includes the full network infrastructure, with a shared HSM and an HA Corda Firewall.
- The production environment, including an HA notary cluster, HA Corda Firewalls on all nodes, HSMs, and network services.
Node sizing and databases
All Corda Nodes have a database. A range of third-party databases is supported by Corda, as shown in the following table:
Dashboard
The Dialog Manager dashboard provides analytics on the performance of the Sofi Virtual Agent.
Chat logs
The dashboard enables administrators to review the performance of the natural language model by reviewing the chat logs generated by users interacting with the Sofi Virtual Agent.
Chat Log
The Chat log shows the following information:
- Utterance - the initial utterance that the user entered.
- Response - the initial response from Sofi.
- Intent Match - the highest-scoring Entry Point.
- User - the name of the user who entered the utterance.
- Score - the matching score provided by the NLU model.
- Match - whether the utterance successfully matched an Entry Point, i.e. had a score above the minimum threshold (80% by default).
A string-based filter can be set (top right of the Chat log list) to filter the list of utterances.
Chat Logs
The chat logs show only the initial utterance that the user entered; they do not list the entire conversation. The full conversation can be requested by submitting a request to Servicely support.
Filters
There are various gauges (filter controls) that control the filter applied to the chat log. By selecting these filter controls, you create a filter that is visualised as the Filter Breadcrumb on the top left of the dashboard.
To reset the filter, click the Reset Filter section of the breadcrumb.
Dashboard Filter Breadcrumb
- Word cloud - a word cloud highlighting the keywords most frequently used in utterances.
- Threshold - the percentage of utterances scoring above or below the configured threshold.
- Intent cloud - a word cloud highlighting the intents most frequently matched.
- User cloud - a word cloud highlighting the most active users.
- Date range - a drop-down list on the top right of the dashboard that provides various options to filter results by date range.
Gauges
In addition to the Chat Log there are a number of other gauges that provide insights into the performance and usage of the Virtual Agent.
- Term breakout - A pie chart showing the relative frequency of words used in utterances
- Chat interactions/period - The number of chat conversations initiated per period
- Intent breakout - A pie chart showing the relative frequency of matched intents / entry points
- User breakout - A pie chart showing the relative usage by user
- Unique users/period - The number of unique users initiating chats per period
IT Security Policy
PC Docs IT SECURITY POLICY – May 2018
System security
Server:
Our server is protected with a complex password and is situated in a secure server cabinet within our office.
User logins:
Staff and contractors have user accounts and log in (to access documents in shared folders on our server) using individual, complex passwords. Each user has an email account that has a separate password. No passwords are shared outside the organisation. Any accounts no longer in use are disabled.
Internet connection:
Our internet connection is secured via a Draytek firewall, and individual firewalls on our server and on our PCs, as well as via our anti-virus software.
Protection against viruses and malware:
We maintain up to date anti-virus software and our server and PCs are maintained by our IT support provider.
Keeping devices and software up to date:
PCs are kept up to date and are automatically updated with Microsoft security patches.
Data security
Documents containing personal data:
Documents containing information about case studies are encrypted with passwords (with passwords sent in separate emails). Documents containing information about staff are in a restricted-access folder on our server (only accessible by the Managing Director).
Deleted files:
When files are deleted from our server, it is not possible to retrieve them…
On-Line Back Up:
All data is stored in an encrypted state within the UK, across our three replicated data centres located in London, Leeds and Manchester. All our DCs are owned by Equinix (formerly Telecity), which holds the following accreditations.
Quality management
All of our data centres have achieved the ISO 9001 quality management standard, verified through the rigours of an independent, external audit.
Environmental Management
All of our data centres have achieved certification to ISO 14001, the environmental management system standard. ISO 14001 is an internationally-recognized accreditation for organisations that demonstrate superior environmental management. The certificate highlights our ongoing commitment to both maximize the energy efficiency of its existing data centre estate and develop innovative new facilities.
Occupational Health and Safety Management
We have achieved certification to OHSAS 18001, the assessment specification for occupational health and safety management systems. This validates companies that show excellence in health and safety performance, and demonstrates the leadership to reduce risk and create an injury-free workplace.
Business Continuity Management
Our UK data centres have also achieved certification for business continuity management (ISO 22301).
Energy Management
All data centres have achieved certification to ISO 50001. The purpose of this International Standard is to enable organisations to establish the systems and processes necessary to improve energy performance, including energy efficiency, use and consumption.
All emails are hosted by our IT services provider, which uses SSL encryption for the communication of emails between the client (Outlook, smartphones, etc.) and the hosted server when sending or receiving emails. Further encryption can be applied to individual documents through Word or Excel, depending on requirements.
Initializing a multiple node cluster (multiple datacenters)
A deployment scenario for a Cassandra cluster with multiple datacenters.
This topic contains information for deploying an Apache Cassandra™ cluster with multiple datacenters.
Prerequisites
- A good understanding of how Cassandra works. At minimum, be sure to read Understanding the architecture (especially the Data replication section) and the rack feature of Cassandra.
- Determine a naming convention for each datacenter and rack. Examples: DC1, DC2 or 100, 200 / RAC1, RAC2 or R101, R102. Choose the name carefully; renaming a datacenter is not possible.
- The cassandra.yaml configuration file, and property files such as cassandra-rackdc.properties, give you more configuration options. See the Configuration section for more information.
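For reference, a minimal sketch of a node's cassandra-rackdc.properties in the first datacenter, assuming the GossipingPropertyFileSnitch is used; the dc and rack values are illustrative and simply follow the naming convention chosen above:
dc=DC1
rack=RAC1
Nodes in the second datacenter would use dc=DC2 (with the corresponding rack names), matching the same convention.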
If Cassandra is running, stop the server and clear the data.
Package installations:
- Stop Cassandra:
sudo service cassandra stop #Stops Cassandra
- Clear the data:
sudo rm -rf /var/lib/cassandra/*
Tarball installations:
- Stop Cassandra:
ps auwx | grep cassandra
sudo kill pid
- Clear the data:
sudo rm -rf install_location/data/*
Rate limits
Rate limiting is a common technique used to improve the security and durability of a web application.
For example, a simple script can make thousands of web requests per second. The requests could be:
- Malicious.
- Apathetic.
- Just a bug.
Configurable limits
You can set these rate limits in the Admin Area of your instance:
- Import/Export rate limits
- Issues rate limits
- Notes rate limits
- Protected paths
- Raw endpoints rate limits
- User and IP rate limits
- Package registry rate limits
- Git LFS rate limits
- Files API rate limits
- Deprecated API rate limits
- GitLab Pages rate limits
You can set these rate limits using the Rails console:
Failed authentication ban for Git and container registry
GitLab returns HTTP status code
403 for 1 hour, if 30 failed authentication requests were received
in a 3-minute period from a single IP address. This applies only to combined:
- Git requests.
- Container registry (
/jwt/auth) requests.
This limit:
- Is reset by requests that authenticate successfully. For example, 29 failed authentication requests followed by 1 successful request, followed by 29 more failed authentication requests would not trigger a ban.
- Does not apply to JWT requests authenticated by
gitlab-ci-token.
- Is disabled by default.
No response headers are provided.
For configuration information, see Omnibus GitLab configuration options.
Non-configurable limits
Git operations using SSH
GitLab rate limits Git operations by user account and project. If a request from a user for a Git operation on a project exceeds the rate limit, GitLab drops further connection requests from that user for the project.
The rate limit applies at the Git command (plumbing) level. Each command has a rate limit of 600 per minute. For example:
git push has a rate limit of 600 per minute.
git pull has its own rate limit of 600 per minute.
Because the same commands are shared by
git-upload-pack,
git pull, and
git clone, they share a rate limit.
Repository archives
Introduced in GitLab 12.9.
A rate limit for downloading repository archives is available. The limit applies to the project and to the user initiating the download either through the UI or the API.
Users sign up
There is a rate limit per IP address on the
/users/sign_up endpoint. This is to mitigate attempts to misuse the endpoint. For example, to mass
discover usernames or email addresses in use.
The rate limit is 20 calls per minute per IP address.
Update username
There is a rate limit on how frequently a username can be changed. This is enforced to mitigate misuse of the feature. For example, to mass discover which usernames are in use.
The rate limit is 10 calls per minute per signed-in user.
Username exists
There is a rate limit for the internal endpoint
/users/:username/exists, used upon sign up to check if a chosen username has already been taken.
This is to mitigate the risk of misuses, such as mass discovery of usernames in use.
The rate limit is 20 calls per minute per IP address.
Troubleshooting
Rack Attack is denylisting the load balancer
Because all traffic routed through a load balancer can appear to come from the same IP address, Rack Attack may end up denylisting the load balancer itself. To resolve this:
- Allowlist the load balancer's IP addresses.
Reconfigure GitLab:
sudo gitlab-ctl reconfigure
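As a rough sketch, the allowlist can be added to /etc/gitlab/gitlab.rb before running the reconfigure command above. The rack_attack_git_basic_auth key and its fields are taken from older Omnibus GitLab templates and may differ in your version, and the IP addresses are placeholders, so check your own gitlab.rb template first:
gitlab_rails['rack_attack_git_basic_auth'] = {
  'enabled' => true,
  'ip_whitelist' => ["127.0.0.1", "192.168.1.1"],  # load balancer / trusted IPs (placeholders)
  'maxretry' => 30,   # failed authentication attempts allowed...
  'findtime' => 180,  # ...within this window, in seconds
  'bantime' => 3600   # ban duration, in seconds
}
The maxretry, findtime, and bantime values here simply mirror the 30-requests, 3-minute, 1-hour limits described earlier on this page.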
Remove blocked IPs from Rack Attack with Redis
To remove a blocked IP:
Find the IPs that have been blocked in the production log:
grep "Rack_Attack" /var/log/gitlab/gitlab-rails/auth.log
Since the denylist is stored in Redis, you must open up
redis-cli:
/opt/gitlab/embedded/bin/redis-cli -s /var/opt/gitlab/redis/redis.socket
You can remove the block using the following syntax, replacing
<ip> with the actual IP that is denylisted:
del cache:gitlab:rack::attack:allow2ban:ban:<ip>
Confirm that the key with the IP no longer shows up:
keys *rack::attack*
By default, the
keys command is disabled.
- Optionally, add the IP to the allowlist to prevent it being denylisted again.
Configure permission classifications
In this article, you'll learn how to configure permission classifications in Azure Active Directory (Azure AD). Permission classifications allow you to identify the impact that different permissions have according to your organization's policies and risk evaluations. For example, you can use permission classifications in consent policies to identify the set of permissions that users are allowed to consent to.
Currently, only the "Low impact" permission classification is supported. Only delegated permissions that don't require admin consent can be classified as "Low impact".
The minimum permissions needed to do basic sign in are
openid,
profile,
User.Read and
offline_access, which are all delegated permissions on the Microsoft Graph. With these permissions an app can read the full profile details of the signed-in user and can maintain this access even when the user is no longer using the app.
Prerequisites
To configure permission classifications, you need:
- An Azure account with an active subscription. Create an account for free.
- One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
Manage permission classifications
Follow these steps to classify permissions using the Azure portal:
- Sign in to the Azure portal as a Global Administrator, Application Administrator, or Cloud Application Administrator
- Select Azure Active Directory > Enterprise applications > Consent and permissions > Permission classifications.
- Choose Add permissions to classify another permission as "Low impact".
- Select the API and then select the delegated permission(s).
In this example, we've classified the minimum set of permissions required for single sign-on:
Next steps
To learn more:
For example, you can configure a forwarder to perform load balancing between two indexers:
For more information on load balancing, read "Set up load balancing".
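As an illustration only, a forwarder's outputs.conf for load balancing across two indexers might look like the following sketch; the group name and indexer addresses are placeholders, not values from this page:
[tcpout]
defaultGroup = my_lb_indexers

[tcpout:my_lb_indexers]
server = indexer1.example.com:9997, indexer2.example.com:9997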
Routing and filtering
In data routing, a forwarder routes events to specific hosts based on criteria such as the event's source, source type, or patterns in the events themselves. In some cases, the intermediate forwarders also index the data.
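As a hedged sketch of pattern-based routing on a heavy forwarder (the stanza names, the regex, and the target group are illustrative and not taken from this page), props.conf and transforms.conf work together to set the _TCP_ROUTING key:
# props.conf
[syslog]
TRANSFORMS-routing = route_errors

# transforms.conf
[route_errors]
REGEX = (ERROR|FATAL)
DEST_KEY = _TCP_ROUTING
FORMAT = errorGroup
The errorGroup target then needs a matching [tcpout:errorGroup] stanza in outputs.conf, alongside the default group shown in the load-balancing sketch above.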
Starting Local Development
To start tinkering with the Talkdesk systems, you must create OAuth clients to get access to the Talkdesk APIs.
This OAuth client is only enough for local development.
If you want to make your app available on the Talkdesk AppConnect™ marketplace (the Talkdesk marketplace for call center apps), you must look at how to integrate a partner app with Talkdesk by using the Events API, which covers getting the OAuth credentials needed to access user data from other accounts and users.
history stack
Manipulate the history stack of one or more selected images.
module controls
- selective copy…
- Copy parts of the history stack from the selected image. A dialog appears from which you may choose the items to copy from the history stack. If more than one image is selected, the history stack is taken from the image that was selected first. Double-click on a history item to copy that item only and immediately close the dialog.
- copy
- Copy the complete history stack from the selected image. If more than one image is selected, the history stack is taken from the image that was selected first.
Information relating to internal display encoding and mask management is considered unsafe to automatically copy to other images and will therefore not be copied when using this button.
The following modules are excluded from the copy operation:
-
-
-
-
-
-
- deprecated modules
You can override all of these exclusions by using “selective paste…” and choosing which modules to paste to the target image(s).
- compress history
- Compress the history stack of the selected image. If any module appears multiple times in the history stack, these occurrences will be compressed into a single step in the history. Beware: this action can not be undone!
- discard history
- Physically delete the history stack of the selected images. Beware: this action can not be undone!
- selective paste…
- Paste parts of a copied history stack onto all selected images. A dialog appears from which you may choose the items to paste from the source history stack.
- paste
- Paste all items of a copied history stack onto all selected images.
- mode
- This setting defines how the paste actions behave when applied to an image that already has a history stack. In simple terms the “overwrite” mode deletes the previous history stack before pasting, whereas “append” concatenates the two history stacks together.
A copied history stack can have multiple entries of the same module (with the same name or different names) and pasting behaves differently for these entries in append and overwrite modes.
In append mode, for each module in the copied history stack, if there is a module in the destination image with the same name it will be replaced. If there is no such module, a new instance will be created. In both cases the pasted instance is placed on top of the history stack. If a particular module appears multiple times in either history stack only the last occurrence of that module will be processed.
In overwrite mode the behavior is the same except that the history of the destination image is deleted before the paste operation commences. The “copy all”/“paste all” actions in this mode will precisely duplicate the copied history stack to the destination images (including any duplicate occurrences).
- Notes
- Automatic module presets are only added to an image when it is first opened in the darkroom or its history stack is discarded. If you use overwrite mode to paste history stack entries to images that haven’t previously been opened in the darkroom then the next time that image is opened in the darkroom, automatic presets will be applied to the image. It may therefore seem as if the “overwrite” mode did not accurately duplicate the existing history stack, but in this case, those automatic modules were added subsequently.
- The append mode allows you to later reconstruct your pre-existing history stack (because previous history items are retained in the stack of the destination image). However, in “overwrite” mode all previous edits are irrevocably lost.
- The mode setting is retained when you quit darktable – if you change it for a one-off copy and paste, make sure to change it back again.
- load sidecar file
- Open a dialog box which allows you to import the history stack from a selected XMP file. This copied history stack can then be pasted onto one or more images.
Images that were exported by darktable typically contain the full history stack if the file format supports embedded metadata (see the export module for details of this feature and its limitations). You can load an exported image as a sidecar file in the same way as you can with an XMP file. This feature allows you to recover all parameter settings if you have accidentally lost or overwritten the XMP file. All you need is the source image, typically a raw, and the exported file.
- write sidecar files
- Write XMP sidecar files for all selected images. The filename is generated by appending “.xmp” to the name of the underlying input file.
By default darktable generates and updates sidecar files automatically whenever you work on an image and change the history stack. You can disable automatic sidecar file generation in preferences > storage. This can be useful when you are running multiple versions of darktable (so that edits in each version do not conflict with one another); however, in general, disabling this feature is not recommended.
@AnnotationCollector(value: [ExternalizeMethods, ExternalizeVerifier]) @interface AutoExternalize
Class annotation used to assist in the creation of
Externalizable classes.
The @AutoExternalize annotation adds writeExternal() and readExternal() methods to the annotated class and adds Externalizable to the interfaces that the class implements. Example usage:
@AutoExternalize
class Person {
    String first, last
    List favItems
    Date since
}
Which will create a class of the following form:
class Person implements Externalizable {
    ...
    public void writeExternal(ObjectOutput out) throws IOException {
        out.writeObject(first)
        out.writeObject(last)
        out.writeObject(favItems)
        out.writeObject(since)
    }

    public void readExternal(ObjectInput oin) {
        first = oin.readObject()
        last = oin.readObject()
        favItems = oin.readObject()
        since = oin.readObject()
    }
    ...
}
The
@AutoExternalize transform is implemented as a combination of the
@ExternalizeMethods and
@ExternalizeVerifier transforms.
Usage
dimension: field_name {
  map_layer_name: name_of_map_layer
}
Basics
The
map_layer_name parameter lets you associate a dimension with a TopoJSON map layer. This lets users create map charts by charting the values in the dimension on the map layer. For example, to be able to chart data by US state, you could associate a dimension called “State” to the built-in map layer
us_states. You could also chart data in a dimension called “Neighborhood” on a custom map of New York City neighborhoods.
If you are using a custom TopoJSON map, you must specify the map layer in the LookML model using the
map_layer parameter.
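For illustration, a custom layer for the New York City example above might be declared and then referenced from the dimension as sketched below. The file name and the property_key value are placeholders, and the exact map_layer sub-parameters should be checked against the map_layer reference page:
map_layer: nyc_neighborhoods {
  file: "nyc_neighborhoods.topojson"
  property_key: "neighborhood"
}

dimension: neighborhood {
  type: string
  sql: ${TABLE}.neighborhood ;;
  map_layer_name: nyc_neighborhoods
}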
Dimensions of
type: zipcode automatically receive a
map_layer_name of
us_zipcode_tabulation_areas.
Built-in map layers
Looker includes the following built-in map layers:
countries— Accepts full country names, ISO 3166-1 alpha-3 three-letter country codes, and ISO 3166-1 alpha-2 two-letter country codes. If your data includes ISO 3166-1 alpha-2 country codes, using
map_layer_name with the
countries map is recommended to ensure that Looker interprets your data as country codes and not as state codes.
uk_postcode_areas— Accepts UK postcode areas (for example,
L for Liverpool,
RH for Redhill, or
EH for Edinburgh).
us_states— Accepts full state names and two-letter state abbreviations.
us_counties_fips— Works on string fields that are five-character FIPS county codes for a US county. This layer works only on the interactive map.
us_zipcode_tabulation_areas— Works on string fields that are five-character US zip codes. Dimensions of
type: zipcode automatically use the
us_zipcode_tabulation_areas map layer.
Zip code regions are based on the 2010 zip code tabulation areas (ZCTAs), so this map layer does not include many zip codes, such as those assigned to P.O. boxes, that do not map directly to regions.
Important
The end-of-life date for this agent version is July 29, 2019. To update to the latest agent version, see Update the agent. For more information, see End-of-life policy.
Notes
Improvements
- Improves transaction breakdown and transaction trace views by filtering certain .NET Framework methods.
- Due to the performance impact, using "NewRelic_BrowserTimingHeader" and "NewRelic_BrowserTimingFooter" to manually instrument browser monitoring (RUM) has been deprecated. The recommended method for manual instrumentation is to use the New Relic .NET Agent API. See the documentation for details.
- Reduced the overhead of the agent through improved targeting of instrumented methods.
- Removes an erroneous error log message related to attempting to instrument RefEmit_InMemoryManifestModule.
Fixes
- Fixes a bug where sometimes thread profiles would not display properly after being collected successfully.
- Fixes an issue where the System.Data.SqlClient.SqlConnection.Open() metric was being removed from traces, making it more difficult for users to gain visibility into how many database connections each transaction in their app was opening.
- Fixes a bug where the agent wouldn't increment the total errors count for errors reported via the API that occurred outside of a transaction.
NuGet
- Fixes an issue where, after deploying the Windows Server monitor to Windows Azure using the NuGet package, the service would be in a stopped state. The Windows Server monitor will now start after deployment. Thanks Steven Kuhn for the contribution!
How to capture warnings
pytest automatically captures warnings raised during test execution and displays a summary of them at the end of the session. Running the example test file (sketched below) produces output like this:

pytest-7.x.y, pluggy-1.x.y
rootdir: /home/sweet/project
collected 1 item

test_show_warnings.py .                                              [100%]

============================= warnings summary =============================
test_show_warnings.py::test_one
  /home/sweet/project/test_show_warnings.py:5: UserWarning: api v1, should use functions from v2
    warnings.warn(UserWarning("api v1, should use functions from v2"))

-- Docs:
======================= 1 passed, 1 warning in 0.12s =======================
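The session above assumes a test module along the following lines, a minimal sketch reconstructed from the output shown rather than the verbatim file from the pytest documentation:
# content of test_show_warnings.py
import warnings

def api_v1():
    warnings.warn(UserWarning("api v1, should use functions from v2"))
    return 1

def test_one():
    assert api_v1() == 1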
Controlling warnings
Similar to Python’s warning filter and
-W option flag, pytest provides
its own
-W flag to control which warnings are ignored, displayed, or turned into
errors. See the warning filter documentation for more
advanced use-cases.
This code sample shows how to treat any
UserWarning category class of warning
as an error:
$ pytest -q test_show_warnings.py -W error::UserWarning
Because the warning is now escalated to an error, the previously passing test fails.
The recwarn fixture automatically ensures to reset the warnings filter at the end of the test, so no global state is leaked.
Recording warnings
You can record raised warnings either using
pytest.warns() or with
the
recwarn fixture.
To record with
pytest.warns() without asserting anything about the warnings,
pass no arguments as the expected warning type and it will default to a generic Warning:
with pytest.warns() as record:
    warnings.warn("user", UserWarning)
Full API:
WarningsRecorder.
Additional use cases of warnings in tests
Here are some use cases involving warnings that often come up in tests, and suggestions on how to deal with them:
To ensure that at least one warning is emitted, use:
with pytest.warns():
    ...
To ensure that no warnings are emitted, use:
with warnings.catch_warnings():
    warnings.simplefilter("error")
    ...
To suppress warnings, use:
with warnings.catch_warnings():
    warnings.simplefilter("ignore")
    ...
Resource Warnings
Additional information about the source of a
ResourceWarning can be obtained when it is captured by pytest if the
tracemalloc module is enabled.
One convenient way to enable
tracemalloc when running tests is to set the
PYTHONTRACEMALLOC environment variable to a large
enough number of frames (say
20, but that number is application dependent).
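For example, in a POSIX shell the variable can be set just for the test run; the value 20 simply mirrors the suggestion above:
PYTHONTRACEMALLOC=20 pytest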
For more information, consult the Python Development Mode section in the Python documentation.
PrintingSystemBase.EditingFields Property
Provides access to the collection of fields whose content can be edited in Print Preview.
Namespace: DevExpress.XtraPrinting
Assembly: DevExpress.Printing.v20.2.Core.dll
Declaration
[Browsable(false)] public EditingFieldCollection EditingFields { get; }
<Browsable(False)> Public ReadOnly Property EditingFields As EditingFieldCollection
Property Value
Remarks
If the EditOptions.Enabled property of a control is set to true, its content becomes customizable in Print Preview. Each time such a control is rendered in Print Preview, an appropriate EditingField instance is added to the EditingFields collection: a TextEditingField for a label or its descendant, a CheckEditingField for a check box, and an ImageEditingField for a picture box.
An editing field provides options corresponding to edit options of a control and indicates the current field value (EditingField.EditValue or CheckEditingField.CheckState) as well as a visual brick used to render this field in Print Preview (EditingField.Brick).
Changing a field’s value in Print Preview updates the corresponding property and each time this value is changed, the PrintingSystemBase.EditingFieldChanged event occurs.
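As a rough C# sketch (not taken from this page) of reading the collection and reacting to edits, assuming a document has already been generated into printingSystem and that the relevant controls had EditOptions.Enabled set to true:
using System;
using DevExpress.XtraPrinting;

static class EditingFieldsDemo {
    public static void Inspect(PrintingSystemBase printingSystem) {
        // Log the current value of every field that can be edited in Print Preview.
        foreach (var field in printingSystem.EditingFields)
            Console.WriteLine($"{field.GetType().Name}: {field.EditValue}");

        // React each time a user changes a field's content in Print Preview.
        printingSystem.EditingFieldChanged += (sender, args) =>
            Console.WriteLine("An editing field was changed in Print Preview.");
    }
}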
For more information, see Content Editing in Print Preview.
UX navigation models
This section provides key information about the UX portals that are available.
- Key portal layouts
- Claims information
- Field level security
- Data masking
- Configuration approval flows
- Hotkeys
Global CSS & JS gives you a new way to add custom CSS and JavaScript to your Jira environment. With Global CSS & JS you don't have to use the Jira announcement banner to insert your code into Jira. Instead, you can use the dedicated CSS and JavaScript editors.
To add your own CSS or JavaScript to Jira, navigate to the add-ons settings screen. On the add-on screen, select the "Configure" option under "Custom CSS & Javascript". This opens a new window where you can add your code to the Jira environment.
To customize the Jira Service Desk customer portal, you'll need to switch to the 'Customer Portal' tab in the Global CSS & JS configuration screen.
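For example, the CSS editor could hold a small rule and the JavaScript editor a short script. The selector and message below are purely illustrative rather than documented Atlassian hooks, so verify them against your own Jira instance before relying on them:
/* CSS editor: example only */
#announcement-banner { display: none; }

// JavaScript editor: example only
console.log("Global CSS & JS loaded");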
Creates a transaction and broadcasts it to the network.
transfer_destination object fields:
address — string; standard or integrated address of a recipient.
amount — unsigned int; amount of coins to be sent;
Integrated address usage
If you use multiple addresses in the destinations field, make sure that at most one integrated address is involved; if the "payment id" parameter is specified, integrated addresses are not allowed.
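A hedged sketch of a JSON-RPC request body for this call. The JSON-RPC envelope, the method name, and the atomic-unit amount shown here are assumptions for illustration, and the recipient address is a placeholder; only the destinations and payment_id fields come from the parameter list above:
{
  "jsonrpc": "2.0",
  "id": 0,
  "method": "transfer",
  "params": {
    "destinations": [
      { "address": "<recipient standard or integrated address>", "amount": 1000000000000 }
    ],
    "payment_id": ""
  }
}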
Outputs:
tx_hash — string; hash identifier of the transaction that was successfully sent.
tx_unsigned_hex — string; hex-encoded unsigned transaction (for watch-only wallets; to be used in cold-signing process).
DeleteStackInstances
Deletes stack instances for the specified accounts, in the specified AWS Regions.
Request Parameters
For information about the parameters that are common to all actions, see Common Parameters.
- Accounts.member.N
[Self-managed permissions] The names of the AWS accounts that you want to delete stack instances for.
You can specify
Accounts or
DeploymentTargets, but not both.
Type: Array of strings
Pattern:
^[0-9]{12}$
Required: No
- CallAs
[Service-managed permissions] Specifies whether you are acting as an account administrator in the organization's management account or as a delegated administrator in a member account.
By default,
SELF is specified. Use
SELF for stack sets with self-managed permissions.
If you are signed in to the management account, specify
SELF.
If you are signed in to a delegated administrator account, specify
DELEGATED_ADMIN.
Your AWS account must be registered as a delegated administrator in the management account. For more information, see Register a delegated administrator in the AWS CloudFormation User Guide.
Type: String
Valid Values:
SELF | DELEGATED_ADMIN
Required: No
- DeploymentTargets
[Service-managed permissions] The AWS Organizations accounts from which to delete stack instances.
You can specify
Accounts or
DeploymentTargets, but not both.
Type: DeploymentTargets object
Required: No
- OperationId
The unique identifier for this stack set operation.
Repeating this stack set operation with a new operation ID retries all stack instances whose status is
OUTDATED.
Type: String
Length Constraints: Minimum length of 1. Maximum length of 128.
Pattern:
[a-zA-Z0-9][-a-zA-Z0-9]*
Required: No
- OperationPreferences
Preferences for how AWS CloudFormation performs this stack set operation.
Type: StackSetOperationPreferences object
Required: No
- Regions.member.N
The AWS Regions where you want to delete stack set instances.
Type: Array of strings
Pattern:
^[a-zA-Z0-9-]{1,128}$
Required: Yes
- RetainStacks
Removes the stack instances from the specified stack set, but doesn't delete the stacks. You can't reassociate a retained stack or add an existing, saved stack to a new stack set.
For more information, see Stack set operation options.
Type: Boolean
Required: Yes
- StackSetName
The name or unique ID of the stack set that you want to delete stack instances for.
Type: String
Required: Yes
Response Elements
The following element is returned by the service.
- OperationId
The unique identifier for this stack set operation.
Errors
For information about the errors that are common to all actions, see Common Errors.
- OperationIdAlreadyExists
The specified operation ID already exists.
HTTP Status Code: 409
- OperationInProgress
Another operation is currently in progress for this stack set. Only one operation can be performed for a stack set at a given time.
HTTP Status Code: 409
- StackSetNotFound
The specified stack set doesn't exist.
HTTP Status Code: 404
- StaleRequest
Another operation has been performed on this stack set since the specified operation was performed.
HTTP Status Code: 409
Examples
DeleteStackInstances
This example illustrates one usage of DeleteStackInstances.
Sample Request
?Action=DeleteStackInstances &Regions.member.1=us-east-1 &Regions.member.2=us-west-1 &Version=2010-05-15 &StackSetName=stack-set-example &RetainStacks=false &OperationPreferences.MaxConcurrentCount=2 &OperationPreferences.FailureToleranceCount=1 &Accounts.member.1=[account] &Accounts.member.2=[account] &OperationId=a0f49354-a1eb-42b7-9e5d-c0897example &X-Amz-Algorithm=AWS4-HMAC-SHA256 &X-Amz-Credential=[Access key ID and scope] &X-Amz-Date=20170810T233349Z &X-Amz-SignedHeaders=content-type;host &X-Amz-Signature=[Signature]
Sample Response
<DeleteStackInstancesResponse xmlns=" <DeleteStackInstancesResult> <OperationId>a0f49354-a1eb-42b7-9e5d-c08977e317a0</OperationId> </DeleteStackInstancesResult> <ResponseMetadata> <RequestId>0f3c3dcc-7945-11e7-a4ac-9503729bf9ee</RequestId> </ResponseMetadata> </DeleteStackInstancesResponse>
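For example, the equivalent call through the AWS SDK for Python (Boto3), using the same placeholder account IDs and Regions as the sample request above:
import boto3

cloudformation = boto3.client("cloudformation", region_name="us-east-1")

response = cloudformation.delete_stack_instances(
    StackSetName="stack-set-example",
    Accounts=["111111111111", "222222222222"],  # placeholder account IDs
    Regions=["us-east-1", "us-west-1"],
    RetainStacks=False,
    OperationPreferences={
        "MaxConcurrentCount": 2,
        "FailureToleranceCount": 1,
    },
    OperationId="a0f49354-a1eb-42b7-9e5d-c0897example",
)
print(response["OperationId"])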
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following:
Enable sensitivity labels for Office files in SharePoint and OneDrive
Microsoft 365 licensing guidance for security & compliance.
Note
Microsoft 365 compliance is now called Microsoft Purview and the solutions within the compliance area have been rebranded. For more information about Microsoft Purview, see the blog announcement.
Enable built-in labeling for supported Office files in SharePoint and OneDrive so that users can apply your sensitivity labels in Office for the web.
Enabling this feature also results in SharePoint and OneDrive being able to process the contents of Office files that have been encrypted by using a sensitivity label. The label can be applied in Office for the web, or in Office desktop apps and uploaded or saved in SharePoint and OneDrive. Until you enable this feature, these services can't process encrypted files, which means that coauthoring, eDiscovery, Microsoft Purview data loss prevention, search, and other collaborative features won't work for these files.
After you enable sensitivity labels for Office files in SharePoint and OneDrive, for new and changed files that have a sensitivity label that applies encryption with a cloud-based key (and doesn't use Double Key Encryption):
For Word, Excel, and PowerPoint files, SharePoint and OneDrive recognize the label and can now process the contents of the encrypted file.
When users download or access these files from SharePoint or OneDrive, the sensitivity label and any encryption settings from the label are enforced and remain with the file, wherever it is stored. Ensure you provide user guidance to use only labels to protect documents. For more information, see Information Rights Management (IRM) options and sensitivity labels.
When users upload labeled and encrypted files to SharePoint or OneDrive, they must have at least view rights to those files. For example, they can open the files outside SharePoint. If they don't have this minimum usage right, the upload is successful but the service doesn't recognize the label and can't process the file contents.
Use Office for the web (Word, Excel, PowerPoint) to open and edit Office files that have sensitivity labels that apply encryption. The permissions that were assigned with the encryption are enforced. You can also use auto-labeling for these documents.
External users can access documents that are labeled with encryption by using guest accounts. For more information, see Support for external users and labeled content.
Office 365 eDiscovery supports full-text search for these files and data loss prevention (DLP) policies support content in these files.
Note
If encryption has been applied with an on-premises key (a key management topology often referred to as "hold your own key" or HYOK), or by using Double Key Encryption, the service behavior for processing the file contents doesn't change. So for these files, coauthoring, eDiscovery, data loss prevention, search, and other collaborative features won't work.
The SharePoint and OneDrive behavior also doesn't change for existing files in these locations that are labeled with encryption using a single Azure-based key. For these files to benefit from the new capabilities after you enable sensitivity labels for Office files in SharePoint and OneDrive, the files must be either downloaded and uploaded again, or edited.
After you enable sensitivity labels for Office files in SharePoint and OneDrive, three new audit events are available for monitoring sensitivity labels that are applied to documents in SharePoint and OneDrive:
- Applied sensitivity label to file
- Changed sensitivity label applied to file
- Removed sensitivity label from file
Watch the following video (no audio) to see the new capabilities in action:
You always have the choice to disable sensitivity labels for Office files in SharePoint and OneDrive (opt-out) at any time.
If you are currently protecting documents in SharePoint by using SharePoint Information Rights Management (IRM), be sure to check the SharePoint Information Rights Management (IRM) and sensitivity labels section on this page.
Requirements
These new capabilities work with sensitivity labels only. If you currently have Azure Information Protection labels, first migrate them to sensitivity labels so that you can enable these features for new files that you upload. For instructions, see How to migrate Azure Information Protection labels to unified sensitivity labels.
Use the OneDrive sync app version 19.002.0121.0008 or later on Windows, and version 19.002.0107.0008 or later on Mac. Both these versions were released January 28, 2019, and are currently released to all rings. For more information, see the OneDrive release notes. After you enable sensitivity labels for Office files in SharePoint and OneDrive, users who run an older version of the sync app are prompted to update it.
Limitations
SharePoint and OneDrive can't process some files that are labeled and encrypted from Office desktop apps when these files contain PowerQuery data, data stored by custom add-ins, or custom XML parts such as Cover Page Properties, content type schemas, custom Document Information Panel, and Custom XSN. This limitation also applies to files that include a bibliography, and to files that have a Document ID added when they are uploaded.
For these files, either apply a label without encryption so that they can later be opened in Office on the web, or instruct users to open the files in their desktop apps. Files that are labeled and encrypted only in Office on the web aren't affected.
SharePoint and OneDrive don't automatically apply sensitivity labels to existing files that you've already encrypted using Azure Information Protection labels. Instead, for the features to work after you enable sensitivity labels for Office files in SharePoint and OneDrive, complete these tasks:
- Make sure you have migrated the Azure Information Protection labels to sensitivity labels and published them from the Microsoft Purview compliance portal.
- Download the labeled files and then upload them to their original location in SharePoint or OneDrive.
SharePoint and OneDrive can't process encrypted files when the label that applied the encryption has any of the following configurations for encryption:
Let users assign permissions when they apply the label and the checkbox In Word, PowerPoint, and Excel, prompt users to specify permissions is selected. This setting is sometimes referred to as "user-defined permissions".
User access to content expires is set to a value other than Never.
Double Key Encryption is selected.
For labels with any of these encryption configurations, the labels aren't displayed to users in Office for the web. Additionally, the new capabilities can't be used with labeled documents that already have these encryption settings. For example, these documents won't be returned in search results, even if they are updated.
For performance reasons, when you upload or save a document to SharePoint and the file's label doesn't apply encryption, the Sensitivity column in the document library can take a while to display the label name. Factor in this delay if you use scripts or automation that depend on the label name in this column.
If a document is labeled while it's checked out in SharePoint, the Sensitivity column in the document library won't display the label name until the document is checked in and next opened in SharePoint.
If a labeled and encrypted document is downloaded from SharePoint or OneDrive by an app or service that uses a service principal name, and then uploaded again with a label that applies different encryption settings, the upload will fail. An example scenario is Microsoft Defender for Cloud Apps changes a sensitivity label on a file from Confidential to Highly Confidential, or from Confidential to General.
The upload doesn't fail if the app or service first runs the Unlock-SPOSensitivityLabelEncryptedFile cmdlet, as explained in the Remove encryption for a labeled document section. Or, before the upload, the original file is deleted, or the file name is changed.
Users might experience delays in being able to open encrypted documents in the following Save As scenario: Using a desktop version of Office, a user chooses Save As for a document that has a sensitivity label that applies encryption. The user selects SharePoint or OneDrive for the location, and then immediately tries to open that document in Office for the web. If the service is still processing the encryption, the user sees a message that the document must be opened in their desktop app. If they try again in a couple of minutes, the document successfully opens in Office for the web.
For encrypted documents, printing is not supported in Office for the web.
For encrypted documents in Office for the web, copying to the clipboard and screen captures are not prevented. For more information, see Can Rights Management prevent screen captures?
By default, Office desktop apps and mobile apps don't support co-authoring for files that are labeled with encryption. These apps continue to open labeled and encrypted files in exclusive editing mode.
Note
Co-authoring is now supported for Windows and macOS. For more information, see Enable co-authoring for files encrypted with sensitivity labels.
If an admin changes settings for a published label that's already applied to files downloaded to users' sync client, users might be unable to save changes they make to the file in their OneDrive Sync folder. This scenario applies to files that are labeled with encryption, and also when the label change is from a label that didn't apply encryption to a label that does apply encryption. Users see a red circle with a white cross icon error, and they are asked to save new changes as a separate copy. Instead, they can close and reopen the file, or use Office for the web.
Users can experience save problems after going offline or into a sleep mode when instead of using Office for the web, they use the desktop and mobile apps for Word, Excel, or PowerPoint. For these users, when they resume their Office app session and try to save changes, they see an upload failure message with an option to save a copy instead of saving the original file.
Documents that have been encrypted in the following ways can't be opened in Office for the web:
- Encryption that uses an on-premises key ("hold your own key" or HYOK)
- Encryption that was applied by using Double Key Encryption
- Encryption that was applied independently from a label, for example, by directly applying a Rights Management protection template.
Labels configured for other languages are not supported and display the original language only.
If you delete a label that's been applied to a document in SharePoint or OneDrive, rather than remove the label from the applicable label policy, the document when downloaded won't be labeled or encrypted. In comparison, if the labeled document is stored outside SharePoint or OneDrive, the document remains encrypted if the label is deleted. Note that although you might delete labels during a testing phase, it's very rare to delete a label in a production environment.
How to enable sensitivity labels for SharePoint and OneDrive (opt-in)
You can enable the new capabilities by using the Microsoft Purview compliance portal, or by using PowerShell. As with all tenant-level configuration changes for SharePoint and OneDrive, it takes about 15 minutes for the change to take effect.
Use the Microsoft Purview compliance portal to enable support for sensitivity labels
This option is the easiest way to enable sensitivity labels for SharePoint and OneDrive, but you must sign in as a global administrator for your tenant.
Sign in to the Microsoft Purview compliance portal as a global administrator, and navigate to Solutions > Information protection
If you don't immediately see this option, first select Show all.
If you see a message to turn on the ability to process content in Office online files, select Turn on now:
The command runs immediately and when the page is next refreshed, you no longer see the message or button.
Note
If you have Microsoft 365 Multi-Geo, you must use PowerShell to enable these capabilities for all your geo-locations. See the next section for details.
Use PowerShell to enable support for sensitivity labels
As an alternative to using the Microsoft Purview compliance portal, you can enable support for sensitivity labels by using the Set-SPOTenant cmdlet from SharePoint Online PowerShell.
If you have Microsoft 365 Multi-Geo, you must use PowerShell to enable this support for all your geo-locations.
Prepare the SharePoint Online Management Shell
Before you run the PowerShell command to enable sensitivity labels for Office files in SharePoint and OneDrive, ensure that you're running SharePoint Online Management Shell version 16.0.19418.12000 or later. If you already have the latest version, you can skip to next procedure to run the PowerShell command.
If you have installed a previous version of the SharePoint Online Management Shell from PowerShell gallery, you can update the module by running the following cmdlet.
Update-Module -Name Microsoft.Online.SharePoint.PowerShell
Alternatively, if you have installed a previous version of the SharePoint Online Management Shell from the Microsoft Download Center, you can also go to Add or remove programs and uninstall the SharePoint Online Management Shell.
In a web browser, go to the Download Center page and Download the latest SharePoint Online Management Shell.
Select your language and then click Download.
Choose between the x64 and x86 .msi file. Download the x64 file if you run the 64-bit version of Windows or the x86 file if you run the 32-bit version. If you don’t know, see Which version of Windows operating system am I running?
After you have downloaded the file, run the file and follow the steps in the Setup Wizard.
Run the PowerShell command to enable support for sensitivity labels
To enable the new capabilities, use the Set-SPOTenant cmdlet with the EnableAIPIntegration parameter:
Using a work or school account that has global administrator or SharePoint admin privileges in Microsoft 365, connect to SharePoint. To learn how, see Getting started with SharePoint Online Management Shell.
Note
If you have Microsoft 365 Multi-Geo, use the -Url parameter with Connect-SPOService, and specify the SharePoint Online Administration Center site URL for one of your geo-locations.
Run the following command and press Y to confirm:
Set-SPOTenant -EnableAIPIntegration $true
For Microsoft 365 Multi-Geo: Repeat steps 1 and 2 for each of your remaining geo-locations.
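To double-check that the change has taken effect (allowing for the roughly 15-minute propagation mentioned earlier), you can read the setting back from the same session. The admin URL below is a placeholder, and it is assumed here that Get-SPOTenant surfaces the EnableAIPIntegration property:
Connect-SPOService -Url https://contoso-admin.sharepoint.com
Get-SPOTenant | Select-Object EnableAIPIntegration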
Publishing and changing sensitivity labels
When you use sensitivity labels with SharePoint and OneDrive, keep in mind that you need to allow for replication time when you publish new sensitivity labels or update existing sensitivity labels. This is especially important for new labels that apply encryption.
For example: You create and publish a new sensitivity label that applies encryption and it very quickly appears in a user's desktop app. The user applies this label to a document and then uploads it to SharePoint or OneDrive. If the label replication hasn't completed for the service, the new capabilities won't be applied to that document on upload. As a result, the document won't be returned in search or for eDiscovery and the document can't be opened in Office for the web.
For more information about the timing of labels, see When to expect new labels and changes to take effect.
As a safeguard, we recommend publishing new labels to just a few test users first, wait for at least one hour, and then verify the label behavior on SharePoint and OneDrive. Wait at least a day before making the label available to more users by either adding more users to the existing label policy, or adding the label to an existing label policy for your standard users. By the time your standard users see the label, it has already synchronized to SharePoint and OneDrive.
SharePoint Information Rights Management (IRM) and sensitivity labels
SharePoint Information Rights Management (IRM) is an older technology to protect files at the list and library level by applying encryption and restrictions when files are downloaded. This older protection technology is designed to prevent unauthorized users from opening the file while it's outside SharePoint.
In comparison, sensitivity labels provide the protection settings of visual markings (headers, footers, watermarks) in addition to encryption. The encryption settings support the full range of usage rights to restrict what users can do with the content, and the same sensitivity labels are supported for many scenarios. Using the same protection method with consistent settings across workloads and apps results in a consistent protection strategy.
However, you can use both protection solutions together and the behavior is as follows:
If you upload a file with a sensitivity label that applies encryption, SharePoint can't process the content of these files so coauthoring, eDiscovery, DLP, and search are not supported for these files.
If you label a file using Office for the web, any encryption settings from the label are enforced. For these files, coauthoring, eDiscovery, DLP, and search are supported.
If you download a file that's labeled by using Office for the web, the label is retained and any encryption settings from the label are enforced rather than the IRM restriction settings.
If you download an Office or PDF file that isn't encrypted with a sensitivity label, IRM settings are applied.
If you have enabled any of the additional IRM library settings, which include preventing users from uploading documents that don't support IRM, these settings are enforced.
With this behavior, you can be assured that all Office and PDF files are protected from unauthorized access if they are downloaded, even if they aren't labeled. However, labeled files that are uploaded won't benefit from the new capabilities.
Use the managed property InformationProtectionLabelId to find all documents in SharePoint or OneDrive that have a specific sensitivity label. Use the following syntax:
InformationProtectionLabelId:<GUID>
For example, to search for all documents that have been labeled as "Confidential", and that label has a GUID of "8faca7b8-8d20-48a3-8ea2-0f96310a848e", in the search box, type:
InformationProtectionLabelId:8faca7b8-8d20-48a3-8ea2-0f96310a848e
Search won't find labeled documents in a compressed file, such as a .zip file.
To get the GUIDs for your sensitivity labels, use the Get-Label cmdlet:
First, connect to Office 365 Security & Compliance Center PowerShell.
For example, in a PowerShell session that you run as administrator, sign in with a global administrator account.
Then run the following command:
Get-Label |ft Name, Guid
For more information about using managed properties, see Manage the search schema in SharePoint.
Remove encryption for a labeled document
There might be rare occasions when a SharePoint administrator needs to remove encryption from a document stored in SharePoint. Any user who has the Rights Management usage right of Export or Full Control assigned to them for that document can remove encryption that was applied by the Azure Rights Management service from Azure Information Protection. For example, users with either of these usage rights can replace a label that applies encryption with a label without encryption. A super user could also download the file and save a local copy without the encryption.
As an alternative, a global admin or SharePoint admin can run the Unlock-SPOSensitivityLabelEncryptedFile cmdlet, which removes both the sensitivity label and the encryption. This cmdlet runs even if the admin doesn't have access permissions to the site or file, or if the Azure Rights Management service is unavailable.
For example:
Unlock-SPOSensitivityLabelEncryptedFile -FileUrl " Documents/Doc1.docx" -JustificationText "Need to decrypt this file"
Requirements:
SharePoint Online Management Shell version 16.0.20616.12000 or later.
The encryption has been applied by a sensitivity label with admin-defined encryption settings (the Assign permissions now label settings). Double Key Encryption is not supported for this cmdlet.
The justification text is added to the audit event of Removed sensitivity label from file, and the decryption action is also recorded in the protection usage logging for Azure Information Protection.
How to disable sensitivity labels for SharePoint and OneDrive (opt-out)
If you disable these new capabilities, files that you uploaded after you enabled sensitivity labels for SharePoint and OneDrive continue to be protected by the label because the label settings continue to be enforced. When you apply sensitivity labels to new files after you disable these new capabilities, full-text search, eDiscovery, and coauthoring will no longer work.
To disable these new capabilities, you must use PowerShell. Using the SharePoint Online Management Shell and the Set-SPOTenant cmdlet, specify the same EnableAIPIntegration parameter as described in the Use PowerShell to enable support for sensitivity labels section. But this time, set the parameter value to false and press Y to confirm:
Set-SPOTenant -EnableAIPIntegration $false
If you have Microsoft 365 Multi-Geo, you must run this command for each of your geo-locations.
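A minimal sketch of that repetition is shown below; the admin center URLs are placeholders for your own geo-locations:

# Repeat for each geo-location's SharePoint admin center
Connect-SPOService -Url https://contosoeur-admin.sharepoint.com
Set-SPOTenant -EnableAIPIntegration $false

Connect-SPOService -Url https://contosoapc-admin.sharepoint.com
Set-SPOTenant -EnableAIPIntegration $false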
Next steps
After you've enabled sensitivity labels for Office files in SharePoint and OneDrive, consider automatically labeling these files by using auto-labeling policies. For more information, see Apply a sensitivity label to content automatically.
Need to share your labeled and encrypted documents with people outside your organization? See Sharing encrypted documents with external users.
Adding a database column by using the Data Type explorer in Dev Studio
Add a database column to your internal database by editing the columns on the Records tab of a data type. The Data Type explorer can add columns for properties that are associated with a data type to that data type's table in the Pega Platform database. Updating the Pega Platform database through data types provides the same result as updating the database directly, and gives Pega Cloud clients another option for managing their database.
In the navigation pane of Dev Studio, click Data types.
Select the data type to which you want to add a column.
Click the Records tab.
In the Source section, in the Actions list, select Edit Columns.
In the Edit Columns window, click the Add a row icon.
Enter the parameters of the column that you want to add to the database.
Click Submit.
Scripting
XML Scripting
Note
XML scripting via gvm-cli should only be considered for simpler use cases. Greenbone Management Protocol (GMP) or Open Scanner Protocol (OSP) scripts are often more powerful and easier to write.
Scripting via gvm-cli is directly based on GMP and OSP. Both protocols make use of XML command requests and corresponding responses.
A typical example for using GMP is the automatic scan of a new system. In the example below, it is assumed that an Intrusion Detection System (IDS) is in use that monitors the systems in the Demilitarized Zone (DMZ) and immediately discovers new systems and unusual, new TCP ports. If such an event is discovered, the IDS should automatically initiate a scan of the new system. This can be done with the help of a script.
The starting point is the IP address of the suspected new system. For this IP address, a target needs to be created on the GSM. If the IP address is saved in the environment variable IPADDRESS by the IDS, the respective target can be created:
> gvm-cli socket --xml "<create_target><name>Suspect Host</name><hosts>"$IPADDRESS"</hosts></create_target>" <create_target_response status="201" status_text="OK, resource created" id="e5adc10c-71d0-49fe-aacf-a442ee31d387"/>
See create_target command for all details.
- Create a task using the default Full and Fast scan configuration with UUID daba56c8-73ec-11df-a475-002264764cea and the previously generated target:
> gvm-cli socket --xml "<create_task><name>Scan Suspect Host</name><target id=\"e5adc10c-71d0-49fe-aacf-a442ee31d387\"/><config id=\"daba56c8-73ec-11df-a475-002264764cea\"/><scanner id=\"08b69003-5fc2-4037-a479-93b440211c73\"/></create_task>" <create_task_response status="201" status_text="OK, resource created" id="7249a07c-03e1-4197-99e4-a3a9ab5b7c3b"/>
See create_task command for all details.
- Start the task using the UUID returned from the last response:
> gvm-cli socket --xml "<start_task task_id=\"7249a07c-03e1-4197-99e4-a3a9ab5b7c3b\"/>" <start_task_response status="202" status_text="OK, request submitted"><report_id>0f9ea6ca-abf5-4139-a772-cb68937cdfbb</report_id></start_task_response>
See start_task command for all details.
→ The task is running. The response returns the UUID of the report which will contain the results of the scan.
- Display the current status of the task:
> gvm-cli socket --xml "<get_tasks task_id=\"7249a07c-03e1-4197-99e4-a3a9ab5b7c3b\"/>" <get_tasks_response status="200" status_text="OK"> ... <status>Running</status><progress>98 ... </progress> ... <get_tasks_response/>
See get_tasks command for all details.
→ As soon as the scan is completed, the full report is available and can be displayed.
- Display the full report:
> gvm-cli socket --xml "<get_reports report_id=\"0f9ea6ca-abf5-4139-a772-cb68937cdfbb\"/>" <get_reports_response status="200" status_text="OK"><report type="scan" id="0f9ea6ca-abf5-4139-a772-cb68937cdfbb" format_id="a994b278-1f62-11e1-96ac-406186ea4fc5" extension="xml" content_type="text/xml"> ... </get_reports_response>
See get_reports command for all details.
Additionally, the report can be downloaded in a specific report format instead of plain XML.
List all report formats:
> gvm-cli socket --xml "<get_report_formats/>" <get_report_formats_response status="200" status_text="OK"><report_format id="5057e5cc-b825-11e4-9d0e-28d24461215b"> ... </get_report_formats_response>
See get_report_formats command for all details.
Download the report in the desired format.
Example: download the report as a PDF file:
> gvm-cli socket --xml "<get_reports report_id=\"0f9ea6ca-abf5-4139-a772-cb68937cdfbb\" format_id=\"c402cc3e-b531-11e1-9163-406186ea4fc5\"/>"
Note
Please be aware that the PDF is returned as base64 encoded content of the <get_report_response><report> element in the XML response.
GVM Scripts
Changed in version 2.0.
Scripting of Greenbone Management Protocol (GMP) and Open Scanner Protocol (OSP) via gvm-script or interactively via gvm-pyshell is based on the python-gvm library. Please take a look at python-gvm for further details about the API.
Note
By convention, scripts using GMP are called GMP scripts and are files with the ending .gmp.py. Accordingly, OSP scripts with the ending .osp.py are using OSP. Technically both protocols could be used in one single script file.
The following sections are using the same example as it was used in XML Scripting where it was assumed that an Intrusion Detection System (IDS) that monitors the systems in the Demilitarized Zone (DMZ) and immediately discovers new systems and unusual, new TCP ports is in use. The IDS will provide the IP address of a new system to the GMP script.
- Define the function that should be called when the script is started by adding the following code to a file named scan-new-system.gmp.py:
if __name__ == '__gmp__': main(gmp, args)
→ The script is only called when being run as a GMP script. The gmp and args variables are provided by gvm-cli or gvm-pyshell. args contains arguments for the script, e.g., the user name and password for the GMP connection. The most important aspect about the example script is that it contains the argv property with the list of additional script specific arguments. The gmp variable contains a connected and authenticated instance of a Greenbone Management Protocol class.
- The main function begins with the following code lines:
def main(gmp: Gmp, args: Namespace) -> None:
    # check if IP address is provided to the script
    # argv[0] contains the script name
    if len(args.argv) <= 1:
        print('Missing IP address argument')
        return 1

    ipaddress = args.argv[1]
→ The main function stores the first argument passed to the script as the ipaddress variable.
3. Add the logic to create a target, create a new scan task for the target, start the task and print the corresponding report ID:
ipaddress = args.argv[1]

target_id = create_target(gmp, ipaddress)

full_and_fast_scan_config_id = 'daba56c8-73ec-11df-a475-002264764cea'
openvas_scanner_id = '08b69003-5fc2-4037-a479-93b440211c73'

task_id = create_task(
    gmp,
    ipaddress,
    target_id,
    full_and_fast_scan_config_id,
    openvas_scanner_id,
)

report_id = start_task(gmp, task_id)

print(
    f"Started scan of host {ipaddress}. Corresponding report ID is {report_id}"
)
For creating the target from an IP address (DNS name is also possible), the following is used. Since target names must be unique, the current date and time in ISO 8601 format (YYYY-MM-DDTHH:MM:SS.mmmmmm) is added:
def create_target(gmp, ipaddress):
    import datetime

    # create a unique name by adding the current datetime
    name = f"Suspect Host {ipaddress} {str(datetime.datetime.now())}"
    response = gmp.create_target(name=name, hosts=[ipaddress])
    return response.get('id')
The function for creating the task is defined as:
def create_task(gmp, ipaddress, target_id, scan_config_id, scanner_id):
    name = f"Scan Suspect Host {ipaddress}"
    response = gmp.create_task(
        name=name,
        config_id=scan_config_id,
        target_id=target_id,
        scanner_id=scanner_id,
    )
    return response.get('id')
Finally, the function to start the task and get the report ID:
def start_task(gmp, task_id):
    response = gmp.start_task(task_id)
    # the response is
    # <start_task_response><report_id>id</report_id></start_task_response>
    return response[0].text
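With these functions in place, the script can be run with gvm-script. A typical invocation looks like the following; the credentials and IP address are placeholders, and the output mirrors the print statement in main:

> gvm-script --gmp-username admin --gmp-password secret socket scan-new-system.gmp.py 192.168.10.1
Started scan of host 192.168.10.1. Corresponding report ID is 0f9ea6ca-abf5-4139-a772-cb68937cdfbb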
For getting a PDF document of the report, a second script pdf-report.gmp.py can be used:
from base64 import b64decode from pathlib import Path def main(gmp: Gmp, args: Namespace) -> None: # check if report id and PDF filename are provided to the script # argv[0] contains the script name if len(args.argv) <= 2: print('Please provide report ID and PDF file name as script arguments') return 1 report_id = args.argv[1] pdf_filename = args.argv[2] pdf_report_format_id = "c402cc3e-b531-11e1-9163-406186ea4fc5" response = gmp.get_report( report_id=report_id, report_format_id=pdf_report_format_id ) report_element = response[0] # get the full content of the report element content = "".join(report_element.itertext()) # convert content to 8-bit ASCII bytes binary_base64_encoded_pdf = content.encode('ascii') # decode base64 binary_pdf = b64decode(binary_base64_encoded_pdf) # write to file and support ~ in filename path pdf_path = Path(pdf_filename).expanduser() pdf_path.write_bytes(binary_pdf) print('Done.') if __name__ == '__gmp__': main(gmp, args) | https://gvm-tools.readthedocs.io/en/latest/scripting.html | 2022-05-16T21:19:17 | CC-MAIN-2022-21 | 1652662512249.16 | [] | gvm-tools.readthedocs.io |
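The second script is invoked the same way, passing the report ID printed by the first script and a file name for the PDF; both arguments below are placeholders:

> gvm-script --gmp-username admin --gmp-password secret socket pdf-report.gmp.py 0f9ea6ca-abf5-4139-a772-cb68937cdfbb ~/scan-report.pdf
Done.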
fMRI Tutorial #2: Overview of The Flanker Task
The dataset you downloaded uses the Flanker task, which is designed to tap into a mental process known as cognitive control. For this course, we're going to define cognitive control as the ability to ignore irrelevant stimuli in order to do the task at hand.
This page is about the package process and the types of packages you can create and deploy in Appian. Creating packages is a part of the deployment process, and should be taken into consideration while you are building applications to ensure that your changes will be deployed successfully.
For an overview about the deployment process, see Deploy Applications.
For more instructions on how to deploy your packages, see Deploy Packages to Target Environments.
A package is a collection of Appian application changes that a developer can deploy to another environment. Preparing a package is an important step in the deployment process and involves understanding what changes you need to deploy and how these changes will affect your target environment.
There are three different types of packages that you can deploy in Appian. In most cases, your packages will contain application objects, but can also include environment-specific information, such as import customization files or database scripts.
Applications contain a set of objects that make up a business solution. Applications should be used to introduce a new set of objects that do not exist in the target environment.
Patches contain new or updated objects, which a developer deploys when introducing an update to an existing application in the target environment. Patches are helpful for deploying bug fixes or enhancements.
Administration Console Settings contain updates to your Administration settings, such as site branding or third-party credentials in the target environment.
Your package may include dependencies that are required to successfully deploy updates to your application. Import customization files and database scripts are common dependencies that can be packaged with applications or patches. Your package may also rely on plug-ins or other dependencies that must be deployed before you start a package deployment. Depending on the type of changes you are making to the target environment, your list of dependencies and how you deploy them will vary.
There are two ways you can create packages in Appian: using compare and deploy to automatically identify changed objects or assembling your changes manually. To make your deployment process more efficient, compare and deploy is the recommended approach. Compare and deploy allows you to compare objects across environments, bundle database scripts and import customization files with your package, and inspect your objects before directly deploying your package to the target environment.
You must add environments to your infrastructure to use compare and deploy.
When deploying changes to a higher environment, such as a test environment, your packaging process should generally follow these steps:
Make sure that all necessary objects are in your application by running missing precedents. Both methods of creating packages start with the objects in an application. It is also a best practice to review the security summary and make sure all your objects have the appropriate security set before beginning the packaging process.
Comparing objects across environments allows you to review changes as you're packaging them and confirm that you are deploying the correct changes to the target environment. Comparing objects during package preparation is only available if you have added environments to your infrastructure.
If you manually create and export your package, you will have an opportunity to compare objects by inspecting your package upon manual import in the target environment.
Object comparison statuses are available when you compare objects across environments or inspect packages before import. These statuses are helpful for understanding how your objects differ between a source and target environment, as well as how they will be handled during deployment. You should use the statuses to confirm that the correct objects are in your package and detect any conflicts before continuing with a deployment.
If you need to force an import for objects with a Not Changed status, add the FORCE_UPDATE import-specific setting to your import customization file. See Managing Import Customization Files for more information on when and how to force an update.
Conflicts may occur when an object is modified in the target environment and modified separately in the source environment. Objects with the status of Conflict Detected will be displayed when comparing objects across environments or when inspecting a package before manual import. This status is helpful when deploying between multiple development environments. When you see this status, you should pause your deployment and proceed carefully since there is a chance of overwriting someone's work.
When you encounter unexpected conflicts you can further investigate as follows:
After creating your package, it is a best practice to inspect the objects before deployment. This ensures that no unexpected changes have been made to the target environment and detects potential conflicts or import errors early.
Compare and deploy automatically inspects your package after you finish preparing it. If you do not have environments added to your infrastructure, you can also inspect your package before import in the target environment.
The inspection results summarize which objects in your package have problems or warnings, how many will be created or updated by import, and how many will be skipped. When run in the target environment on a manually exported package, this summary also includes detailed information about the status of each object in your package and when they were last modified in the target environment.
Inspect only applies to Appian objects and the import customization file. Database scripts and plug-ins should be reviewed separately.
During deployment, the definition of each object in your package is exported from the source environment into an XML file. Appian bundles these XML files into a zip file, which is used to move the objects to the target environment.
Deploying a package is the last step in your packaging process. With compare and deploy, you can let Appian send your package to the target environment, as long as the target is configured to allow direct deployments. Alternately, you can download the package zip file and deploy it yourself by logging in to the target environment and manually importing it.
Information about related objects is preserved across the deployments. In most cases, the related objects themselves are not automatically exported. They must be explicitly added as objects within the package, or already present in the target environment. The application view allows you to check your application for missing precedents and add them to the application. You can track deployment progress in the deployments view.
For instructions on how to deploy packages, see this page.
For more information about object-specific rules that apply to deployments, see this page.
Glossary
Workspace
Your (git) repository with one or more Contember projects.
Project
Every project contains a Contember Schema definition for your simple website, blog, or any other content-based platform or database. Optionally, any project can have its own Contember Admin.
Instance
A running Contember Engine server hosting as many Contember projects as you like (and providing their Content API). Each instance has a single Tenant API, so you can store and manage access from a single point.
Content API
The main GraphQL API for your project. It is automatically generated from your schema definition.
System API
A complementary API for your project, used to manage schema migrations.
Tenant API
Using this API you can manage users, API keys, and project memberships on an instance.
Project Schema
Definition of your model, ACL rules and input validation rules.
Project Schema Migrations
Chronologically sorted immutable JSON files containing all schema changes. These files are the "source of truth" of a schema.
Event
Each operation you make in your data is stored in an event log. This log can be used for history.
Superadmin
A special user role. This user is the most powerful user of the system.
XafApplication.Modules Property
Provides access to the module list used by the application.
Namespace: DevExpress.ExpressApp
Assembly: DevExpress.ExpressApp.v21.2.dll
Declaration
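The declaration itself is missing from this extract; assuming the current XAF API, where the property type is ModuleList, it looks approximately like this:

public ModuleList Modules { get; }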
Property Value
Remarks
Use this property to add an extra module to the application. Alternatively you can use one of the approaches listed in the Ways to Register a Module topic.
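For example, an extra module can be added in code before the application is set up; MyCustomModule below is a placeholder for your own ModuleBase descendant:

xafApplication.Modules.Add(new MyCustomModule());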
Related GitHub Examples
The following code snippets (auto-collected from DevExpress Examples) contain references to the Modules property.
Function Item
The Function Item node is used to add custom snippets of JavaScript code that are executed once for every item the node receives as input.
Keep in mind
Please note that the Function Item node is different from the Function node. Check out this page to learn about the difference between the two.
The Function Item node supports promises. So instead of returning the items directly, it is also possible to return a promise which resolves accordingly.
It also provides the ability to write to your browser console using console.log, useful for debugging and troubleshooting your workflows.
Node Reference
You can also use the methods and variables mentioned in the Expressions page in the Function Item node.
Variable: item
It contains the "json" data of the currently processed item.
The data can be accessed and manipulated like this:
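The code block that belongs here is not included in this extract; a minimal sketch of such a snippet, which adds a property to the current item and returns it, is:

// "item" is the JSON data of the item currently being processed
item.myNewField = 1;
return item;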
Method: getBinaryData()
Returns all the binary data (all keys) of the item that is currently being processed.
Method: setBinaryData(binaryData)
Sets all the binary data (all keys) of the item that is currently being processed.
Note: The static data cannot be read and written when executing via manual executions. The data will always be empty, and the changes will not persist. The static data will only be saved when a workflow is active.
Example
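The example content is missing from this extract; a small illustrative Function Item snippet that tags every item with a timestamp and a random ID could look like this:

// Runs once per incoming item
item.processedAt = new Date().toISOString();
item.uid = Math.random().toString(36).slice(2);
return item;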
External libraries
You can import and use built-in and external npm modules in the Function Item node. To learn how to enable external modules, refer to the Configuration guide.
Oodle Texture provides fast, high quality encoding of textures to the various BCn/DXTn formats. Once Oodle Texture is configured it will operate automatically in the background. You can set Oodle Texture globally, and then define it more specifically for LOD groups and individual textures.
Oodle Texture does not encode ASTC or other mobile formats.
Enabling Oodle Texture
The plugin for Oodle Texture is enabled by default in Unreal Engine.
In addition to the plugin, Oodle Texture requires a setting in the DefaultEngine.ini file.
\Engine\Config\DefaultEngine.ini

[AlternateTextureCompression]
TextureCompressionFormat="TextureFormatOodle"
TextureFormatPrefix="OODLE_"
bEnableInEditor=True
As Oodle Texture is enabled by default in 4.27, these lines should already be present in your BaseEngine.ini file.
We strongly recommend leaving bEnableInEditor=true to maintain consistent behavior between the editor and packaged builds. If set to false, artists reviewing encoding results in the Editor will see different results than the cook system produces.
You can verify Oodle Texture is enabled by examining the log:
LogTextureFormatOodle: Display: Oodle Texture 2.9.0 init RDO On with DefaultRDOLambda=30
When Oodle is used for a given texture, the format will contain the prefix OODLE_:
LogTexture: Display: Building textures: test (OODLE_AutoDXT, 256X256)
Key Concepts for Oodle Texture
There are two concepts you need to understand in order to make use of Oodle Texture: RDO (Rate Distortion Optimization) and Lambda.
Understanding RDO
RDO is a term that refers to trading quality (distortion) for size (rate). For texture encoding, this sounds odd -- DXTn/BCn textures do not vary in size with quality, they are a fixed size based on the format, resolution, and mip count.
Oodle Texture optionally exposes a way to manage the resulting encoded texture data so that when a uasset containing a texture is compressed for distribution via the IOStore / .pak file system, it compresses smaller. Thus, RDO in Oodle Texture only reduces distribution sizes.
Additionally, it is tuned to work with the Kraken compression format. Refer to Oodle Data for more information.
Understanding Lambda
The parameter that determines how much distortion is introduced, and thus how much smaller the resulting file is, is referred to as lambda.
Lambda can be set between 0 and 100, with lower numbers representing lower distortion, and therefore higher quality results. A lambda value of around 30 still produces high quality results. A lambda value of 0 disables RDO entirely, resulting in the theoretical best quality. However, even when seeking best quality, we recommend using a lambda value of 1 as the cost / benefit ratio is still very good, resulting in very little distortion for reasonable distribution size gains.
Generally speaking, we expect lambda to be set globally and not often overridden. Determining a value appropriate for your project will be a collaborative effort based on distribution size needs. It is best to have your global lambda be your highest value (lowest quality) one, and selectively set higher quality / lower lambda on LOD groups or specific textures, as needed.
Textures other than diffuse / albedo maps will likely require a lower lambda (usually 5-20), especially normal maps, as distortion that is not visible to the naked eye can be more noticeable with textures like specular highlights.
Configuring Oodle Texture
Oodle Texture is primarily configured using the DefaultEngine.ini file, but also exposes lambda on texture LOD groups and on a per-texture basis.
Global Configuration
The TextureFormatOodle section in the DefaultEngine.ini file contains the global settings for Oodle Texture.
\Engine\Config\DefaultEngine.ini

[TextureFormatOodle]
DefaultRDOLambda=30
GlobalLambdaMultiplier=1.0
bForceAllBC23ToBC7=False
bForceRDOOff=False
bDebugColor=False
Configuring Texture LOD Groups
The LOD group parameter representing RDO lambda is called Lossy Compression Amount. This parameter is defined for LOD groups in the DefaultDeviceProfiles.ini file.
TextureLODGroups=(Group=TEXTUREGROUP_WorldNormalMap,MinLODSize=1,MaxLODSize=8192,LODBias=0,MinMagFilter=aniso,MipFilter=point,MipGenSettings=TMGS_SimpleAverage,LossyCompressionAmount=TLCA_Low)
Lossy Compression Amount can take the following values:
Configuring Individual Textures
RDO Lambda can also be set for individual textures using the Lossy Compression Amount parameter, and takes the same values shown above.
To set the parameter for a single texture:
Double-click the texture you want to set RDO Lambda for in the Content Browser to open it in the Texture Editor window.
In the Details panel, expand the Compression section and click the arrow icon to show the Advanced options.
Use the dropdown menu next to the Lossy Compression Amount parameter to select the desired value. | https://docs.unrealengine.com/4.27/en-US/TestingAndOptimization/Oodle/Texture/ | 2022-05-16T21:38:09 | CC-MAIN-2022-21 | 1652662512249.16 | [] | docs.unrealengine.com |
Update Promotional Messaging Attributes
Overview
Affirm promotional messaging components—monthly payment messaging and educational modals—show customers how they can use Affirm to finance their purchases. You may have implemented promotional messaging with an incorrect or invalid data-promo-id value. This displays incorrect terms in Affirm modal and messaging. This guide will review Affirm promotional messaging and provide instructions on how to update the data-attributes.
Promotional Messaging
Instructions
Upgrade steps are:
- Make sure all Promotional Messaging is now updated to use data-page-type
- Reach out to Affirm staff to add any existing customizations to the messaging
Your site's existing promotional messaging elements may look similar to the following (shown truncated here), using the old data-promo-id attribute:
<p class="..." data-promo-id="XXXXXXXXXXXXXX" ...>
<p class="affirm-site-modal" data-promo-id="XXXXXXXXXXXXXX" ...>
You will need to remove the data-promo-id attribute from the above HTML elements on your site and replace it with data-page-type attributes..
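For example, a promotional messaging element would change to something like the following; the class name, data-page-type value, and other attributes here are illustrative only, and the accepted values are listed in Affirm's integration documentation:

<p class="affirm-as-low-as" data-page-type="product" data-amount="50000"></p>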
Before setting these changes live, you’ll need to work with your Client Success Manager to ensure all the existing customizations are moved over to use data-page-type attributes.. | https://docs.affirm.com/Integrate_Affirm/Promotional_Messaging/2019_Q1_Upgrade/Update_Promotional_Messsaging_Attributes | 2019-08-17T14:08:25 | CC-MAIN-2019-35 | 1566027313259.30 | [array(['https://docs.affirm.com/@api/deki/files/704/Screen_Shot_2018-12-04_at_2.02.56_PM.png?revision=1&size=bestfit&width=459&height=73',
'Screen Shot 2018-12-04 at 2.02.56 PM.png'], dtype=object)
array(['https://docs.affirm.com/@api/deki/files/705/Screen_Shot_2018-12-04_at_2.03.34_PM.png?revision=1&size=bestfit&width=480&height=75',
'Screen Shot 2018-12-04 at 2.03.34 PM.png'], dtype=object)
array(['https://docs.affirm.com/@api/deki/files/703/Screen_Shot_2018-12-04_at_2.01.57_PM.png?revision=1&size=bestfit&width=598&height=76',
'Screen Shot 2018-12-04 at 2.01.57 PM.png'], dtype=object)
array(['https://docs.affirm.com/@api/deki/files/710/modal-image.png?revision=1&size=bestfit&width=656&height=470',
'modal-image.png'], dtype=object) ] | docs.affirm.com |
SPHealthAnalyzer.RegisterRules Method
Registers all the rules in an assembly with the SharePoint Health Analyzer rules list for the local farm.
Namespace: Microsoft.SharePoint.Administration.Health
Assembly: Microsoft.SharePoint (in Microsoft.SharePoint.dll)
Syntax
'Declaration
Public Shared Function RegisterRules ( _
    assembly As Assembly _
) As IDictionary(Of Type, Exception)
'Usage
Dim assembly As [Assembly]
Dim returnValue As IDictionary(Of Type, Exception)

returnValue = SPHealthAnalyzer.RegisterRules(assembly)
public static IDictionary<Type, Exception> RegisterRules( Assembly assembly )
Parameters
assembly
Type: System.Reflection.Assembly
An assembly that contains rules to add. SharePoint Health Analyzer rules are classes derived from the SPHealthAnalysisRule class.
Return Value
Type: System.Collections.Generic.IDictionary<Type, Exception>
A list of types that could not be registered and the exceptions that were thrown when registration failed.
Examples
The following example shows how to call the RegisterRules method in the FeatureActivated method of a class derived from the SPFeatureReceiver class. The example assumes that the feature receiver is in the same assembly as the rules that are being registered.
public override void FeatureActivated(SPFeatureReceiverProperties properties) { Assembly a = Assembly.GetExecutingAssembly(); IDictionary<Type, Exception> exceptions = SPHealthAnalyzer.RegisterRules(a); if (exceptions != null) { string logEntry = a.FullName; if (exceptions.Count == 0) { logEntry += " All rules were registered."; } else { foreach (KeyValuePair<Type, Exception> pair in exceptions) { logEntry += string.Format(" Registration failed for type {0}. {1}", pair.Key, pair.Value.Message); } } System.Diagnostics.Trace.WriteLine(logEntry); } }
Public Overrides Sub FeatureActivated(ByVal properties As Microsoft.SharePoint.SPFeatureReceiverProperties) Dim a As Assembly = Assembly.GetExecutingAssembly() Dim exceptions As IDictionary(Of Type, Exception) = SPHealthAnalyzer.RegisterRules(a) If Not exceptions Is Nothing Then Dim logEntry As String = a.FullName If exceptions.Count = 0 Then logEntry += " All rules were registered." Else Dim pair As KeyValuePair(Of Type, Exception) For Each pair In exceptions logEntry += String.Format(" Registration failed for type {0}. {1}", _ pair.Key, pair.Value.Message) Next End If System.Diagnostics.Trace.WriteLine(logEntry) End If End Sub
See Also
Reference
Microsoft.SharePoint.Administration.Health Namespace
Other Resources
How to: Create a Feature to Register a Health Rule | https://docs.microsoft.com/en-us/previous-versions/office/developer/sharepoint-2010/ee546727%28v%3Doffice.14%29 | 2019-08-17T14:03:00 | CC-MAIN-2019-35 | 1566027313259.30 | [] | docs.microsoft.com |
Configure and use push notifications in SharePoint apps for Windows Phone
Create a solution in SharePoint Server for sending push notifications and develop a Windows Phone app for receiving the notifications. Using the Microsoft Push Notification Service (MPNS), Windows Phone apps can receive notifications through the Internet of events triggered on Microsoft SharePoint Server. The phone app doesn't have to poll the server for changes to, for example, the items in a list on which the phone app is based. The app can be registered to receive notifications from the server, and an event receiver can initiate a notification and send it to the receiving app for handling. The push notification is relayed to Windows Phone devices by MPNS.
Windows Phone 7 doesn't support running multiple apps simultaneously. Other than the components of the Windows Phone operating system (OS) itself, only one app can be running on the phone at a time. An event relevant to a given phone app might occur (such as, for example, a list item being added to a list) when the app isn't running in the foreground on the phone (that is, when the app is tombstoned or closed). You could develop a background service on the phone with a periodic task that might check for changes to the list on the server, but this approach would consume resources (such as network bandwidth and battery power) on the phone. With MPNS and the components that support notifications built into the Windows Phone 7 OS, the phone itself can receive a notification relevant to the context of a given app—even when that app isn't running—and the user can be given the opportunity to start the relevant app in response to the notification. (For more information about push notifications, see Push Notifications Overview for Windows Phone in the MSDN Library.) In this topic, you create a server-side solution for sending push notifications to a phone app based on a change in the list on which the app is based. You will then create the phone app for receiving these notifications.
Create a server-side solution to send push notifications based on a list item event
The server-side solution can be either a SharePoint app deployed in an isolated SPWeb object, or a SharePoint farm solution packaged as a SharePoint solution package (that is, a .wsp file) that contains a Web-scoped Feature. In the procedures in this section, you will develop a simple SharePoint solution that creates a target list to be used by a Windows Phone app and that activates the push notification mechanism on the server. In the subsequent section, you will develop the Windows Phone app for receiving notifications from the server-side solution.
To create the server-side project
Start Visual Studio 2012 by using the Run as Administrator option.
Choose File, New, Project.
The New Project dialog box appears.
In the New Project dialog box, expand the SharePoint node under Visual C#, and then choose the 15 node.
In the Templates pane, select SharePoint Project and specify a name for the project, such as PushNotificationsList.
Choose the OK button. The SharePoint Customization Wizard appears. This wizard enables you to select the target site for developing and debugging the project and the trust level of the solution.
Specify the URL of a SharePoint Server site. Select a site that you will be able to use later in the development of the SharePoint list app for Windows Phone.
Select Deploy as a farm solution, and then click Finish to create the project.
Next, add a class file to the project and create a couple of classes to encapsulate and manage push notifications.
To create the classes for managing push notifications
In Solution Explorer, choose the node representing the project (named PushNotificationsList if you follow the naming convention used in these procedures).
On the Project menu, choose Add Class. The Add New Item dialog box appears with the C# Class template already selected.
Specify PushNotification.cs as the name of the file and click Add. The class file is added to the solution and opened for editing.
Replace the contents of the file with the following code.
using System; using System.Collections.Generic; using System.IO; using System.Linq; using System.Net; using System.Text; using Microsoft.SharePoint; namespace PushNotificationsList { internal static class WP7Constants { internal static readonly string[] WP_RESPONSE_HEADERS = { "X-MessageID", "X-DeviceConnectionStatus", "X-SubscriptionStatus", "X-NotificationStatus" }; } public enum TileIntervalValuesEnum { ImmediateTile = 1, Delay450SecondsTile = 11, Delay900SecondsTile = 21, } public enum ToastIntervalValuesEnum { ImmediateToast = 2, Delay450SecondsToast = 12, Delay900SecondsToast = 22, } public enum RawIntervalValuesEnum { ImmediateRaw = 3, Delay450SecondsRaw = 13, Delay900SecondsRaw = 23 } public enum NotificationTypeEnum { Tile = 1, Toast = 2, Raw = 3 } class PushNotification { public PushNotificationResponse PushToast(SPPushNotificationSubscriber subscriber, string toastTitle, string toastMessage, string toastParam, ToastIntervalValuesEnum intervalValue) { // Construct toast notification message from parameter values. string" + "<wp:Toast>" + "<wp:Text1>" + toastTitle + "</wp:Text1>" + "<wp:Text2>" + toastMessage + "</wp:Text2>" + "<wp:Param>" + toastParam + "</wp:Param>" + "</wp:Toast> " + "</wp:Notification>"; return SendPushNotification(NotificationTypeEnum.Toast, subscriber, toastNotification, (int)intervalValue); } public PushNotificationResponse PushRaw(SPPushNotificationSubscriber subscriber, string rawMessage, RawIntervalValuesEnum intervalValue) { return SendPushNotification(NotificationTypeEnum.Raw, subscriber, rawMessage, (int)intervalValue); } private PushNotificationResponse SendPushNotification(NotificationTypeEnum notificationType, SPPushNotificationSubscriber subscriber, string message, int intervalValue) { // Create HTTP Web Request object. string subscriptionUri = subscriber.ServiceToken; HttpWebRequest sendNotificationRequest = (HttpWebRequest)WebRequest.Create(subscriptionUri); // MPNS expects a byte array, so convert message accordingly. byte[] notificationMessage = Encoding.Default.GetBytes(message); // Set the notification request properties. sendNotificationRequest.Method = WebRequestMethods.Http.Post; sendNotificationRequest.ContentLength = notificationMessage.Length; sendNotificationRequest.ContentType = "text/xml"; sendNotificationRequest.Headers.Add("X-MessageID", Guid.NewGuid().ToString()); switch (notificationType) { case NotificationTypeEnum.Tile: sendNotificationRequest.Headers.Add("X-WindowsPhone-Target", "token"); break; case NotificationTypeEnum.Toast: sendNotificationRequest.Headers.Add("X-WindowsPhone-Target", "toast"); break; case NotificationTypeEnum.Raw: // A value for the X-WindowsPhone-Target header is not specified for raw notifications. break; } sendNotificationRequest.Headers.Add("X-NotificationClass", intervalValue.ToString()); // Merge byte array payload with headers. using (Stream requestStream = sendNotificationRequest.GetRequestStream()) { requestStream.Write(notificationMessage, 0, notificationMessage.Length); } string statCode = string.Empty; PushNotificationResponse notificationResponse; try { // Send the notification and get the response. HttpWebResponse response = (HttpWebResponse)sendNotificationRequest.GetResponse(); statCode = Enum.GetName(typeof(HttpStatusCode), response.StatusCode); // Create PushNotificationResponse object. 
notificationResponse = new PushNotificationResponse((int)intervalValue, subscriber.ServiceToken); notificationResponse.StatusCode = statCode; foreach (string header in WP7Constants.WP_RESPONSE_HEADERS) { notificationResponse.Properties[header] = response.Headers[header]; } } catch (Exception ex) { statCode = ex.Message; notificationResponse = new PushNotificationResponse((int)intervalValue, subscriber.ServiceToken); notificationResponse.StatusCode = statCode; } return notificationResponse; } } /// <summary> /// Object used for returning notification request results. /// </summary> class PushNotificationResponse { private DateTime timestamp; private int notificationIntervalValue; private string statusCode = string.Empty; private string serviceToken; private Dictionary<string, string> properties; public PushNotificationResponse(int numericalIntervalValue, string srvcToken) { timestamp = DateTime.UtcNow; notificationIntervalValue = numericalIntervalValue; serviceToken = srvcToken; properties = new Dictionary<string, string>(); } public DateTime TimeStamp { get { return timestamp; } } public int NotificationIntervalValue { get { return notificationIntervalValue; } } public string StatusCode { get { return statusCode; } set { statusCode = value; } } public string ServiceToken { get { return serviceToken; } } public Dictionary<string, string> Properties { get { return properties; } } } }
- Save the file.
In this code, the PushToast and PushRaw methods take parameter arguments appropriate for the given type of notification to send, process those arguments, and then call the SendPushNotification method, which does the work of sending the notification using the Microsoft Push Notification Service. (In this sample code, a method for sending tile notifications has not been implemented.) The PushNotificationResponse class is simply a mechanism for encapsulating the result received from the notification request. Here, the class adds some information to the object (cast as an HttpWebResponse object) returned by the GetResponse method of the HttpWebRequest object. The event receiver you create in the following procedure uses this PushNotificationResponse class to update a notifications results list on the server.
Now create an event receiver class that will send push notifications to devices that have been registered to receive them. (You will bind this event receiver to the Jobs list that is created in a later procedure.)
To create the event receiver class for a list
In Solution Explorer, choose the node representing the project.
On the Project menu, click Add Class. The Add New Item dialog box appears with the C# Class template already selected.
Specify ListItemEventReceiver.cs as the name of the file and click Add. The class file is added to the solution and opened for editing.
Replace the contents of the file with the following code.
using System; using System.Security.Permissions; using System.Text; using Microsoft.SharePoint; using Microsoft.SharePoint.Utilities; namespace PushNotificationsList { /// <summary> /// List Item Events /// </summary> public class ListItemEventReceiver : SPItemEventReceiver { internal static string ResultsList = "Push Notification Results"; /// <summary> /// An item was added. /// </summary> public override void ItemAdded(SPItemEventProperties properties) { SPWeb spWeb = properties.Web; SPPushNotificationSubscriberCollection pushSubscribers = spWeb.PushNotificationSubscribers; PushNotification pushNotification = new PushNotification(); SPListItem listItem = properties.ListItem; string jobAssignment = "[Unassigned]"; // This event receiver is intended to be associated with a specific list, // but the list may not have an "AssignedTo" field, so using try/catch here. try { jobAssignment = listItem["AssignedTo"].ToString(); } catch { } PushNotificationResponse pushResponse = null; foreach (SPPushNotificationSubscriber ps in pushSubscribers) { // Send a toast notification to be displayed on subscribed phones on which the app is not running. pushResponse = pushNotification.PushToast(ps, "New job for:", jobAssignment, string.Empty, ToastIntervalValuesEnum.ImmediateToast); UpdateNotificationResultsList(spWeb, ps.User.Name, pushResponse); // Also send a raw notification to be displayed on subscribed phones on which the app is running when the item is added. pushResponse = pushNotification.PushRaw(ps, string.Format("New job for: {0}", jobAssignment), RawIntervalValuesEnum.ImmediateRaw); UpdateNotificationResultsList(spWeb, ps.User.Name, pushResponse); } base.ItemAdded(properties); } private void UpdateNotificationResultsList(SPWeb spWeb, string subscriberName, PushNotificationResponse pushResponse) { SPList resultsList = spWeb.Lists.TryGetList(ResultsList); if (resultsList == null) return; try { SPListItem resultItem = resultsList.Items.Add(); resultItem["Title"] = subscriberName; resultItem["Notification Time"] = pushResponse.TimeStamp; resultItem["Status Code"] = pushResponse.StatusCode; resultItem["Service Token"] = pushResponse.ServiceToken; StringBuilder builder = new StringBuilder(); foreach (string key in pushResponse.Properties.Keys) { builder.AppendFormat("{0}: {1}; ", key, pushResponse.Properties[key]); } resultItem["Headers"] = builder.ToString(); resultItem["Interval Value"] = pushResponse.NotificationIntervalValue; resultItem.Update(); } catch { // Could log to ULS here if adding list item fails. } } } }
- Save the file.
In this code, after an item is added to the list to which the event receiver is bound, push notifications are sent to subscribers that have registered to receive notifications. The value of the AssignedTo field from the added list item is included in the notification message sent to subscribers. For the toast notification, the values of the toastTitle parameter (for the PushToast method defined in the preceding procedure) and the toastMessage parameter are set. These values correspond to the Text1 and Text2 properties in the XML schema that defines toast notifications.
An empty string is simply being passed as the value of the toastParam parameter, which corresponds to the Param property in the XML schema for toast notifications. You could use this parameter to specify, for example, a page of the phone app to open when the user clicks the notification in the phone. In the sample phone app developed later in this topic for receiving these notifications from the server, the Param property is not used. The List form (List.xaml) in the app is simply opened when the user clicks the notification.
Note
The Param property for toast notifications is supported only in Windows Phone OS version 7.1 or greater.
For the raw notification in this sample, a string is passed that contains the value of the AssignedTo field from the added list item.
Note that the toast notification will be displayed on subscribed phones (if the phone app for which the notification is intended is not running), and the message displayed will be truncated if it is longer than approximately 41 characters. Raw notifications in MPNS are limited to 1024 bytes (1 kilobyte). (The exact number of characters that can be sent depends on the kind of encoding used, such as UTF-8). Tile notifications are also subject to size limitations. Large amounts of data can't be sent using any of the notifications types. The best use of these notifications is not as a mechanism for transferring data, but as a way to send short messages to subscribed phones so that certain actions can be taken on the phone. Those actions, such as refreshing a list on the phone with data from the server, may involve larger amounts of data, depending on the design of the Windows Phone app.
The PushNotificationResponse object that is returned from a notification request is passed to the UpdateNotificationResultsList method. This method adds information about the request to a SharePoint list named Push Notification Results (if the list exists). This is simply a demonstration of one way to use the returned object. You can put the returned object to more sophisticated uses in a production solution. You might, for example, examine the returned object for particular status codes when a notification is sent to a given user (such as the user designated for the assignment in the AssignedTo field) and take the appropriate action. In a production application, you probably wouldn't store all of this information in a list on the server. The information is being stored here to help you understand the properties associated with MPNS notifications.
Next, you create a simple SharePoint list, named Jobs, that contains a job category, a description of a job, and the person to whom the job is assigned. Also, you create an auxiliary list, named Push Notification Results, for storing information related to notification requests sent to subscribing phones.
In the following procedure, you create a class, ListCreator, that includes a CreateJobsList method for creating and configuring the Jobs list when the solution is activated on the server. The class also adds the ItemAdded event receiver (created earlier in the ListItemEventReceiver class) to the EventReceivers collection associated with the list. The ListCreator class also includes a method for creating the Push Notification Results SharePoint list.
To create a class for adding and configuring the lists
In Solution Explorer, choose the node representing the project (again, named PushNotificationsList if you follow the naming convention used in these procedures).
On the Project menu, click Add Class. The Add New Item dialog box appears with the C# Class template already selected.
Specify ListCreator.cs as the name of the file and click Add. The class file is added to the solution and opened for editing.
Replace the contents of the file with the following code.
using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Xml; using Microsoft.SharePoint; namespace PushNotificationsList { class ListCreator { internal void CreateJobsList(SPWeb spWeb) { stringThe target website for the list.</param> /// <param name="listTitle">The title of the list.</param> /// <param name="listDescription">A description for the list.</param> /// <param name="columns">A Dictionary object containing field names and types.</param> /// <param name="replaceExistingList">Indicates whether to overwrite an existing list of the same name on the site.</param> /// <returns>A GUID for the created (or existing) list.</returns> internal Guid CreateCustomList(SPWeb spWeb, string listTitle, string listDescription, Dictionary<string, SPFieldType> columns, bool replaceExistingList) { SPList list = spWeb.Lists.TryGetList(listTitle); if (list != null) { if (replaceExistingList == true) { try { list.Delete(); } catch { return Guid.Empty; } } else { return list.ID; } } try { Guid listId = spWeb.Lists.Add(listTitle, listDescription, SPListTemplateType.GenericList); list = spWeb.Lists[listId]; SPView view = list.DefaultView; foreach (string key in columns.Keys) { list.Fields.Add(key, columns[key], false); view.ViewFields.Add(key); } list.Update(); view.Update(); return listId; } catch { return Guid.Empty; } } } }
Be sure to specify the appropriate Public Key Token value for your particular assembly. To add a tool to Visual Studio for getting the Public Key Token value for your assembly, see [How to: Create a Tool to Get the Public Key of an Assembly]() in the MSDN Library. Note that you will have to compile your project at least once to be able to get the Public Key Token value for your output assembly.
- Save the file.
In this code, the CreateJobsList method of the ListCreator class creates the list (or gets the list if it exists on the server) and binds the event receiver created in an earlier procedure to the list by adding it to the EventReceivers class associated with the list. The CreateNotificationResultsList method creates the Push Notification Results list.
Next you add a Feature to your project in order to perform initialization operations on the server when your solution is deployed and activated. You add an event receiver class to the Feature to handle the FeatureActivated and FeatureDeactivating events.
To add a Feature to your project
In Visual Studio 2012, on the View menu, point to Other Windows and then click Packaging Explorer.
In the Packaging Explorer, right-click the node representing your project and click Add Feature. A new Feature (named "Feature1" by default) is added to your project, under a Features node (in Solution Explorer).
Now, in Solution Explorer, under the Features node, right-click the newly added Feature (that is, Feature1), and click Add Event Receiver. An event receiver class file (Feature1.EventReceiver.cs) is added to the Feature and opened for editing.
Within the implementation (demarcated by opening and closing braces) of the Feature1EventReceiver class, add the following code.
internal const string PushNotificationFeatureId = "41E1D4BF-B1A2-47F7-AB80-D5D6CBBA3092";
This string variable stores the identifier for the Push Notification Feature on the server.
Tip
You can obtain a list of unique identifiers for the Features on a SharePoint Server by executing the following Windows PowerShell cmdlet:
Get-SPFeature | Sort -Property DisplayName
The Push Notification Feature appears as "PhonePNSubscriber" in the results returned by this cmdlet.
- The event receiver class file is created with some default method declarations for handling Feature events. The method declarations in the file are initially commented out. Replace the FeatureActivated method in the file with the following code.
public override void FeatureActivated(SPFeatureReceiverProperties properties)
{
    base.FeatureActivated(properties);

    SPWeb spWeb = (SPWeb)properties.Feature.Parent;

    ListCreator listCreator = new ListCreator();
    listCreator.CreateJobsList(spWeb);
    listCreator.CreateNotificationResultsList(spWeb);

    // Then activate the Push Notification Feature on the server.
    // The Push Notification Feature is not activated by default in a SharePoint Server installation.
    spWeb.Features.Add(new Guid(PushNotificationFeatureId), false);
}
- Replace the FeatureDeactivating method in the file with the following code.
public override void FeatureDeactivating(SPFeatureReceiverProperties properties)
{
    base.FeatureDeactivating(properties);

    SPWeb spWeb = (SPWeb)properties.Feature.Parent;

    // Deactivate the Push Notification Feature on the server
    // when the PushNotificationsList Feature is deactivated.
    spWeb.Features.Remove(new Guid(PushNotificationFeatureId), false);
}
- Save the file.
In the implementation of the FeatureActivated event handler here, an instance of the ListCreator class is instantiated and its CreateJobsList and CreateNotificationResultsList methods are called, using the SPWeb where the Feature is deployed and activated as the location in which the lists will be created. In addition, because push notification functionality is not enabled by default in a standard installation of SharePoint Server, the event handler activates the Push Notification Feature on the server. In the FeatureDeactivating event handler, push notification functionality is deactivated when the application has been deactivated. It isn't necessary to handle this event. You may or may not want to deactivate push notifications on the server when the application is deactivated, depending on the circumstances of your installation and whether other applications on the target site make use of push notifications.
Create a Windows Phone SharePoint list app to receive push notifications
In this section, you create a Windows Phone app from the Windows Phone SharePoint List Application template, specifying the SharePoint list created in the preceding section as the target list for the app. You then develop a Notifications class for subscribing to push notifications, implementing handlers for notification events, and storing information related to notifications on the phone. You also add a XAML page to your app with controls to allow users to register or unregister for push notifications.
To follow the procedures in this section, first perform the steps in the procedure described in How to: Create a Windows Phone SharePoint list app to create a Visual Studio project from the Windows Phone SharePoint List Application template, using the Jobs list created in the preceding section as the target SharePoint list for the project. For the purposes of the procedures in this section, it is assumed that the name specified for the project is SPListAppForNotifications.
To create the class for managing subscriptions and received notifications
In Solution Explorer, choose the node representing the project (named SPListAppForNotifications).
On the Project menu, click Add Class. The Add New Item dialog box appears with the C# Class template already selected.
Specify "Notifications.cs" as the name of the file and click Add. The class file is added to the solution and opened for editing.
Replace the contents of the file with the following code.
using System; using System.Linq; using System.Net; using System.Windows; using Microsoft.Phone.Notification; using Microsoft.SharePoint.Client; using System.Diagnostics; using System.Collections.Generic; using Microsoft.Phone.Shell; using System.IO; using System.IO.IsolatedStorage; namespace SPListAppForNotifications { public class Notifications { static HttpNotificationChannel httpChannel; private const string RegStatusKey = "RegistrationStatus"; public static string DeviceAppIdKey = "DeviceAppInstanceId"; public static string ChannelName = "JobsListNotificationChannel"; public static ClientContext Context { get; set; } public static void OpenNotificationChannel(bool isInitialRegistration) { try { // Get channel if it was created in a previous session of the app. httpChannel = HttpNotificationChannel.Find(ChannelName); // If channel is not found, create one. if (httpChannel == null) { httpChannel = new HttpNotificationChannel(ChannelName); // Add event handlers. When the Open method is called, the ChannelUriUpdated event will fire. // A call is made to the SubscribeToService method in the ChannelUriUpdated event handler. AddChannelEventHandlers(); httpChannel.Open(); } else { // The channel exists and is already open. Add handlers for channel events. // The ChannelUriUpdated event won't fire in this case. AddChannelEventHandlers(); // If app instance is registering for first time // (instead of just starting up again), then call SubscribeToService. if (isInitialRegistration) { SubscribeToService(); } } } catch (Exception ex) { ShowMessage(ex.Message, "Error Opening Channel"); CloseChannel(); } } private static void AddChannelEventHandlers() { httpChannel.ChannelUriUpdated += new EventHandler<NotificationChannelUriEventArgs>(httpChannel_ChannelUriUpdated); httpChannel.ErrorOccurred += new EventHandler<NotificationChannelErrorEventArgs>(httpChannel_ExceptionOccurred); httpChannel.ShellToastNotificationReceived += new EventHandler<NotificationEventArgs>(httpChannel_ShellToastNotificationReceived); httpChannel.HttpNotificationReceived += new EventHandler<HttpNotificationEventArgs>(httpChannel_HttpNotificationReceived); } private static void httpChannel_ChannelUriUpdated(object sender, NotificationChannelUriEventArgs e) { UpdateChannelUriOnServer(); SubscribeToService(); } private static void httpChannel_ExceptionOccurred(object sender, NotificationChannelErrorEventArgs e) { // Simply showing the exception error. ShowMessage(e.Message, "Channel Event Error"); } static void httpChannel_ShellToastNotificationReceived(object sender, NotificationEventArgs e) { if (e.Collection != null) { Dictionary<string, string> collection = (Dictionary<string, string>)e.Collection; ShellToast toast = new ShellToast(); toast.Title = collection["wp:Text1"]; toast.Content = collection["wp:Text2"]; // Note that the Show method for a toast notification won't // display the notification in the UI of the phone when the app // that calls the method is running (as the foreground app on the phone). // toast.Show(); //Toast and Raw notification will be displayed if user is running the app. Be default only Toast notification // will be displayed when the app is tombstoned // Showing the toast notification with the ShowMessage method. 
ShowMessage(string.Format("Title: {0}\\r\\nContent: {1}", toast.Title, toast.Content), "Toast Notification"); } } static void httpChannel_HttpNotificationReceived(object sender, HttpNotificationEventArgs e) { Stream messageStream = e.Notification.Body; string message = string.Empty; // Replacing NULL characters in stream. using (var reader = new StreamReader(messageStream)) { message = reader.ReadToEnd().Replace('\\0', ' '); } // Simply displaying the raw notification. ShowMessage(message, "Raw Notification"); } private static void SubscribeToService() { Guid deviceAppInstanceId = GetSettingValue<Guid>(DeviceAppIdKey, false); Context.Load(Context.Web, w => w.Title, w => w.Description); PushNotificationSubscriber pushSubscriber = Context.Web.RegisterPushNotificationSubscriber(deviceAppInstanceId, httpChannel.ChannelUri.AbsoluteUri); Context.Load(pushSubscriber); Context.ExecuteQueryAsync ( (object sender, ClientRequestSucceededEventArgs args) => { SetRegistrationStatus(true); // Indicate that tile and toast notifications can be // received by phone shell when phone app is not running. if (!httpChannel.IsShellTileBound) httpChannel.BindToShellTile(); if (!httpChannel.IsShellToastBound) httpChannel.BindToShellToast(); ShowMessage( string.Format("Subscriber successfully registered: {0}", pushSubscriber.User.LoginName), "Success"); }, (object sender, ClientRequestFailedEventArgs args) => { ShowMessage(args.Exception.Message, "Error Subscribing"); }); } private static void UpdateChannelUriOnServer() { Guid deviceAppInstanceId = GetSettingValue<Guid>(DeviceAppIdKey, false); Context.Load(Context.Web, w => w.Title, w => w.Description); PushNotificationSubscriber subscriber = Context.Web.GetPushNotificationSubscriber(deviceAppInstanceId); Context.Load(subscriber); Context.ExecuteQueryAsync( (object sender1, ClientRequestSucceededEventArgs args1) => { subscriber.ServiceToken = httpChannel.ChannelUri.AbsolutePath; subscriber.Update(); Context.ExecuteQueryAsync( (object sender2, ClientRequestSucceededEventArgs args2) => { ShowMessage("Channel URI updated on server.", "Success"); }, (object sender2, ClientRequestFailedEventArgs args2) => { ShowMessage(args2.Exception.Message, "Error Upating Channel URI"); }); }, (object sender1, ClientRequestFailedEventArgs args1) => { // This condition can be ignored. Getting to this point means the subscriber // doesn't yet exist on the server, so updating the Channel URI is unnecessary. 
//ShowMessage("Subscriber doesn't exist on server.", "DEBUG"); }); } public static void UnSubscribe() { Context.Load(Context.Web, w => w.Title, w => w.Description); Guid deviceAppInstanceId = GetSettingValue<Guid>(DeviceAppIdKey, false); Context.Web.UnregisterPushNotificationSubscriber(deviceAppInstanceId); Context.ExecuteQueryAsync ( (object sender, ClientRequestSucceededEventArgs args) => { CloseChannel(); SetRegistrationStatus(false); //SetInitializationStatus(false); ShowMessage("Subscriber successfully unregistered.", "Success"); }, (object sender, ClientRequestFailedEventArgs args) => { ShowMessage(args.Exception.Message, "Error Unsubscribing"); }); } public static void ClearSubscriptionStore() { Context.Load(Context.Web, w => w.Title, w => w.Description); List subscriptionStore = Context.Web.Lists.GetByTitle("Push Notification Subscription Store"); Context.Load(subscriptionStore); ListItemCollection listItems = subscriptionStore.GetItems(new CamlQuery()); Context.Load(listItems); Context.ExecuteQueryAsync ( (object sender1, ClientRequestSucceededEventArgs args1) => { foreach (ListItem listItem in listItems.ToList()) { listItem.DeleteObject(); } Context.ExecuteQueryAsync( (object sender2, ClientRequestSucceededEventArgs args2) => { // Close channel if open and set registration status for current app instance. CloseChannel(); SetRegistrationStatus(false); ShowMessage("Subscriber store cleared.", "Success"); }, (object sender2, ClientRequestFailedEventArgs args2) => { ShowMessage(args2.Exception.Message, "Error Deleting Subscribers"); }); }, (object sender1, ClientRequestFailedEventArgs args1) => { ShowMessage(args1.Exception.Message, "Error Loading Subscribers List"); }); } private static void CloseChannel() { if (httpChannel == null) return; try { httpChannel.UnbindToShellTile(); httpChannel.UnbindToShellToast(); httpChannel.Close(); } catch (Exception ex) { ShowMessage(ex.Message, "Error Closing Channel"); } } public static void SaveDeviceAppIdToStorage() { if (!IsolatedStorageSettings.ApplicationSettings.Contains(DeviceAppIdKey)) { Guid DeviceAppId = Guid.NewGuid(); SetSettingValue<Guid>(DeviceAppIdKey, DeviceAppId, false); } } public static bool GetRegistrationStatus() { bool status = GetSettingValue<bool>(RegStatusKey, false); return status; } private static void SetRegistrationStatus(bool isRegistered) { SetSettingValue<bool>(RegStatusKey, isRegistered, false); } private static T GetSettingValue<T>(string key, bool fromTransientStorage) { if (fromTransientStorage == false) { if (IsolatedStorageSettings.ApplicationSettings.Contains(key)) return (T)IsolatedStorageSettings.ApplicationSettings[key]; return default(T); } if (PhoneApplicationService.Current.State.ContainsKey(key)) return (T)PhoneApplicationService.Current.State[key]; return default(T); } private static void SetSettingValue<T>(string key, T value, bool toTransientStorage) { if (toTransientStorage == false) { if (IsolatedStorageSettings.ApplicationSettings.Contains(key)) IsolatedStorageSettings.ApplicationSettings[key] = value; else IsolatedStorageSettings.ApplicationSettings.Add(key, value); IsolatedStorageSettings.ApplicationSettings.Save(); } else { if (PhoneApplicationService.Current.State.ContainsKey(key)) PhoneApplicationService.Current.State[key] = value; else PhoneApplicationService.Current.State.Add(key, value); } } // Method for showing messages on UI thread coming from a different originating thread. 
private static void ShowMessage(string message, string caption) { Deployment.Current.Dispatcher.BeginInvoke(() => { MessageBox.Show(message, caption, MessageBoxButton.OK); }); } } }
- Save the file.
In this code, the OpenNotificationChannel creates a notification channel for receiving notifications from MPNS. Event handlers are attached to the channel object for dealing with notification events, and then the channel is opened. In this sample, the HttpNotificationReceived event (for receiving raw notifications) is implemented. Raw notifications can be received only when the phone app is running. The handler for the ShellToastNotificationReceived event (for receiving toast notifications) is also implemented here to demonstrate its use. Tile notifications can be received only when the subscribing phone app is not running, so there's no need to implement an event handler in the app for receiving tile notifications.
The SubscribeToService method executes the RegisterPushNotificationSubscriber method of the SPWeb object asynchronously (passing a value to identify the phone app and a URI value associated with the notification channel) to register with the SharePoint Server to receive push notifications. If the registration is successful, the Windows Phone shell is set to receive (and display) toast and tile notifications on the particular notification channel registered with the SharePoint Server when the phone app itself is not running.
The UnSubscribe method in this code calls the UnregisterPushNotificationSubscriber method of the SPWeb object. The development guidelines for Windows Phone apps recommend that users be allowed to choose whether to subscribe to push notifications or not. In a later procedure, you will add a mechanism for the user to register or unregister for notifications and that registration state is preserved between sessions of the app, making it unnecessary to ask to register every time the app is started. The GetRegistrationStatus method is made available so that the phone app can determine whether the user has registered (in an earlier session) to receive push notifications and the notification channel is subsequently opened. The SaveDeviceAppIdToStorage saves the identifier (represented as a GUID) for the app instance on a given Windows Phone to isolated storage.
The ClearSubscriptionStore method is included here as a demonstration of one way of clearing the subscribers from the subscription store on the SharePoint Server. Subscribers to push notifications are stored in a SharePoint list named "Push Notification Subscription Store". A button for calling this method of the Notifications class is added to the notifications settings page added to the app in a later procedure.
Note that the operations that involve accessing the SharePoint Server to configure settings or prepare for notifications (such as the RegisterPushNotificationSubscriber method) can take time to complete, depending on the conditions of the network and the accessibility of the server. These operations are therefore executed asynchronously (specifically, by using the ExecuteQueryAsync method of a ClientContext object) to allow the app to continue other processes and to keep the UI responsive to the user.
Next, add a page to the app with controls that allow a user to register for or unregister from push notifications from the server.
To add a notification settings page to the app
In Solution Explorer, choose the node representing the project (named SPListAppForNotifications if you follow the naming convention in these procedures).
On the Project menu, click Add New Item. The Add New Item dialog box appears.
In the Templates pane, choose the Windows Phone Portrait Page template. Specify Settings.xaml as the name of the file for the page and click Add. The page is added to the project and opened for editing.
In the XAML view for the page, replace the content between the closing bracket of the XML tag that defines the PhoneApplicationPage element and the closing tag of the element (</phone:PhoneApplicationPage>), with the following markup.
="JOBS LIST""> <StackPanel Margin="0,5,0,5"> <StackPanel Orientation="Vertical" Margin="0,5,0,5"> <TextBlock TextWrapping="Wrap" HorizontalAlignment="Center" Style="{StaticResource PhoneTextTitle2Style}">Notification Registration</TextBlock> <StackPanel Orientation="Vertical" Margin="0,5,0,5"> <TextBlock x: <Button x: <Button x: </StackPanel> </StackPanel> <StackPanel Orientation="Vertical" Margin="0,5,0,5"> <TextBlock TextWrapping="Wrap" HorizontalAlignment="Center" Style="{StaticResource PhoneTextTitle2Style}">Subscriber Management</TextBlock> <Button x: </StackPanel> </StackPanel> </Grid> </Grid> <!--Sample code showing usage of ApplicationBar--> <phone:PhoneApplicationPage.ApplicationBar> <shell:ApplicationBar <shell:ApplicationBarIconButton x: </shell:ApplicationBar> </phone:PhoneApplicationPage.ApplicationBar>
With the Settings.xaml file selected in Solution Explorer, press F7 to open its associated code-behind file, Settings.xaml.cs, for editing.
Replace the contents of the code-behind file with the following code.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Windows;
using Microsoft.Phone.Controls;
using Microsoft.SharePoint.Client;

namespace SPListAppForNotifications
{
    public partial class Settings : PhoneApplicationPage
    {
        private const string RegisteredYesText = "Registered: Yes";
        private const string RegisteredNoText = "Registered: No";

        public Settings()
        {
            InitializeComponent();
        }

        protected override void OnNavigatedTo(System.Windows.Navigation.NavigationEventArgs e)
        {
            this.txtRegistrationStatus.Text =
                (Notifications.GetRegistrationStatus()) ? RegisteredYesText : RegisteredNoText;
        }

        private void OnOKButtonClick(object sender, EventArgs e)
        {
            NavigationService.Navigate(new Uri("/Views/List.xaml", UriKind.Relative));
        }

        private void OnRegisterButtonClick(object sender, RoutedEventArgs e)
        {
            Notifications.OpenNotificationChannel(true);

            // Navigating back to List form. User will be notified when process is complete.
            NavigationService.Navigate(new Uri("/Views/List.xaml", UriKind.Relative));
        }

        private void OnUnregisterButtonClick(object sender, RoutedEventArgs e)
        {
            Notifications.UnSubscribe();

            // Navigating back to List form. User will be notified when process is complete.
            NavigationService.Navigate(new Uri("/Views/List.xaml", UriKind.Relative));
        }

        private void OnDeleteSubscribersButtonClick(object sender, RoutedEventArgs e)
        {
            Notifications.ClearSubscriptionStore();

            // Navigating back to List form. User will be notified when process is complete.
            NavigationService.Navigate(new Uri("/Views/List.xaml", UriKind.Relative));
        }
    }
}
Save the file.
To add to your project the image file (appbar.check.rest.png) for the ApplicationBar button (btnOK) declared in the Settings.xaml file, choose the Images folder node in Solution Explorer.
On the Project menu, click Add Existing Item. A File Browser window opens.
Navigate to the folder in which the standard Windows Phone icon images were installed by the Windows Phone SDK 7.1.
Note
The images with a light foreground and a dark background are in %PROGRAMFILES%(x86)\Microsoft SDKs\Windows Phone\v7.1\Icons\dark in a standard installation of the SDK.
Choose the image file named appbar.check.rest.png and click Add.
Next, add a button to the List form (List.xaml) in the project and implement the Click event handler of the button to navigate to the Settings page created in the preceding steps. Also modify the OnViewModelInitialization event handler to open a notification channel (if the user has chosen to subscribe to push notifications).
To modify the List form
In Solution Explorer, under the Views node, double-click the List.xaml file. The file is opened for editing.
Add markup to declare an additional button in the ApplicationBar element of the file, as in the following example.
... <phone:PhoneApplicationPage.ApplicationBar> <shell:ApplicationBar <shell:ApplicationBarIconButton x: <shell:ApplicationBarIconButton x: <shell:ApplicationBarIconButton x: </shell:ApplicationBar> </phone:PhoneApplicationPage.ApplicationBar> ...
With the List.xaml file selected in Solution Explorer, press F7 to open its associated code-behind file, List.xaml.cs, for editing.
Within the code block (demarcated by opening and closing braces) that implements the ListForm partial class, add the following event handler to the file.
private void OnSettingsButtonClick(object sender, EventArgs e) { NavigationService.Navigate(new Uri("/Settings.xaml", UriKind.Relative)); }
- Locate the OnViewModelInitialization in the List.xaml.cs file and add a call to the OpenNotificationChannel method of the Notifications class created earlier. The modified implementation of the handler should resemble the following code.
private void OnViewModelInitialization(object sender, InitializationCompletedEventArgs e)
{
    this.Dispatcher.BeginInvoke(() =>
    {
        // If initialization has failed, show error message and return.
        if (e.Error != null)
        {
            MessageBox.Show(e.Error.Message, e.Error.GetType().Name, MessageBoxButton.OK);
            return;
        }

        App.MainViewModel.LoadData(((PivotItem)Views.SelectedItem).Name);
        this.DataContext = (sender as ListViewModel);
    });

    // Open notification channel here if user has chosen to subscribe to notifications.
    if (Notifications.GetRegistrationStatus() == true)
        Notifications.OpenNotificationChannel(false);
}
Save the file.
To add to your project the image file (appbar.feature.settings.rest.png) for the ApplicationBar button (btnSettings) declared in the List.xaml file, choose the Images folder node in Solution Explorer.
On the Project menu, click Add Existing Item. A File Browser window opens.
Navigate to the folder in which the standard Windows Phone icon images were installed by the Windows Phone SDK 7.1. (See the note in the previous procedure for the location of the image files in a standard installation of the SDK.)
Choose the image file named appbar.feature.settings.rest.png and click Add.
Finally, add code to the Application_Launching event handler in the App.xaml.cs file to prepare the app for receiving push notifications, using properties and methods of the Notifications class created earlier.
To add code to the App.xaml.cs file
In Solution Explorer, under the node representing the project, choose the App.xaml file.
Press F7 to open its associated code-behind file, App.xaml.cs, for editing.
Locate the Application_Launching event handler in the file. (For new projects created from the Windows Phone SharePoint List Application template, the signature for the method that handles the Application_Launching event is included but no logic is implemented in the method.)
Replace the Application_Launching event handler with the following code.
private void Application_Launching(object sender, LaunchingEventArgs e)
{
    // Get set up for notifications.
    Notifications.Context = App.DataProvider.Context;
    Notifications.SaveDeviceAppIdToStorage();
}
- Save the file.
If you compile the project and deploy the app to the Windows Phone Emulator to run it, you can click the Settings button on the Application Bar to display a page from which you can register for push notifications (Figure 1).
Figure 1. Settings page for notification registration
If you have deployed and activated the PushNotificationsList solution (developed in the section Create a server-side solution to send push notifications based on a list item event earlier in this topic) to your target SharePoint Server, and if registration from the phone for notifications is successful, you can add an item to the Jobs list on the server and you should receive both a toast notification (Figure 2) and, if the app is running on the phone when the item is added to the list, a raw notification (Figure 3).
Figure 2. Toast notification (app running)
The message displayed when your app receives a toast notification while it's running depends on how you've implemented the ShellToastNotificationReceived event handler in your app. In the Notifications class for this sample, the title and content of the message are simply displayed to the user.
Figure 3. Raw notification
If the app is not running when the item is added to the list, the phone should still display a toast notification (Figure 4).
Figure 4. Toast notification (app not running)
When you add an item to the Jobs SharePoint list, the code in the event receiver associated with the list attempts to send notifications using MPNS to subscribed phones, but, depending on network conditions and other factors, a given notification may not be received by a phone. You can look at the Push Notification Results list on the server, especially the values in the Status Code and Headers columns, to determine the status and results related to individual notifications.
... as part of a full timechart query. For details about the stats command, see stats in the Search Reference.
For details about the timechart command, see timechart in the Search Reference.
Queries to generate a sparkline and trend indicator
This Builder exposes a dist directory built from either of two methods:
- Using a package.json file, the Builder installs its dependencies and executes the now-build script within, expecting a dist directory as the build output.
- Using a shell file, the Builder executes each of the instructions within, expecting an output of a dist directory as the build output.
The default expectation of the dist directory as the build output can be configured to be another directory.
When to Use It
This Builder is very useful when you have source files that when built will be served statically, but instead of pre-building yourself locally, or in a CI, Now will build it in the cloud every time you deploy.
If your project does not need a build step, you can use the @now/static Builder instead.
How to Use It
First, create a project directory where your source code will reside. This can be the root directory for your project or a sub-directory inside of a monorepo.
In this example, we will show you how to use @now/static-build to build a Next.js project statically and deploy it, with one command.
The following example will focus on configuring Now to deploy an existing project.
See the Local Development section for more information on using this Builder with now dev.
Static Builder Configuration for Next.js
Next.js uses npm dependencies, so to ensure a fresh build for each deployment, make sure both node_modules and dist are in the .nowignore file:
node_modules dist
Contents of a .nowignore file that prevents the node_modules and dist directories from uploading.
Finally, for Now to know what to execute, extend the scripts object in your package.json with the following now-build property:
{ ... "scripts": { ... "now-build": "next build && next export -o dist" } }
An extended scripts object containing a now-build script with build instructions.
Notice that @now/static-build:
- Uses package.json as a way of getting dependencies and specifying your build.
- Will only execute your now-build script.
- Expects the output to go to dist, or the directory defined as the value of the config option distDir.
Finally, tell Now that you want to execute a build for this deployment by creating a now.json file with the following contents:
{ "version": 2, "builds": [{ "src": "package.json", "use": "@now/static-build" }] }
A now.json file defining the Now platform version and a build step using the @now/static-build Builder.
Deployment
With the Builder configured in your project, you can deploy your project with Now whenever you update your code and it will be built fresh each time.
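As a quick sketch, assuming the Now CLI is installed and you are logged in, a deployment is triggered by running a single command from the project directory:
now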
Once deployed, you will receive a URL of your built project similar to the following:
The example deployment above is open-source and you can view the code for it here:
Configuring the Build Output Directory
If you want to configure a directory other than dist for your build output, you can pass an optional distDir option in the builder's config:
{ "version": 2, "builds": [ { "src": "package.json", "use": "@now/static-build", "config": { "distDir": "www" } } ] }
Monorepo Usage
The @now/static-build Builder can be used for multiple build steps in the same project. To build two Next.js websites, for example one in the root of the project and another in the docs directory, the following configuration would be used in a now.json file:
{ "version": 2, "builds": [ { "src": "package.json", "use": "@now/static-build" }, { "src": "docs/package.json", "use": "@now/static-build" } ] }
Local Development
With the @now/static-build Builder, you are able to define a custom script that runs specified functionality along with the serverless replication that now dev provides.
To do this, you can provide a now-dev script within a package.json file that defines the behavior of the now dev command. Doing so will not affect the functionality that the command provides, including mimicking the serverless environment locally.
For example, if you would like to develop a Gatsby site locally, using a now.json file such as the following:
{ "version": 2, "builds": [{ "src": "package.json", "use": "@now/static-build" }] }
Your package.json file can define a now-dev script that uses the Gatsby CLI to run a development environment locally:
{ ... "scripts": { ... "now-dev": "gatsby develop -p $PORT" } }
As you can see with the above example, we also pass a $PORT variable to the command. now dev will automatically pass the port it's looking to use with the $PORT variable for you to use in your custom scripts, so the command can attach the extended behaviour to the environment.
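If your local development setup is a custom Node.js script rather than a framework CLI, the same $PORT value can be forwarded to it. The sketch below is illustrative only: the dev-server.js file name and the plain http server are assumptions, and the port is passed through explicitly in the now-dev script.
// package.json: "now-dev": "node dev-server.js $PORT"
// dev-server.js
const http = require("http")

// `now dev` substitutes $PORT in the script above, so the port arrives as an argument.
const port = parseInt(process.argv[2], 10) || 3000

http
  .createServer((req, res) => {
    res.end("local development server")
  })
  .listen(port, () => {
    console.log(`Listening on port ${port}`)
  })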
Technical Details
Entry Point
The src value of each build step that uses the @now/static-build Builder can be either a package.json file with a now-build script or a shell file.
package.json Entry Point
The package.json entry point must contain a script within a scripts property called now-build. The value of this script must contain the instructions for how Now should build the project, with the result of the build's final output in the dist directory (unless configured).
For example, building a statically exported Next.js project:
{ ... "scripts": { "now-build": "next build && next export -o dist" } }
An example scripts property in a package.json file used to statically build and export a Next.js project.
This file accompanies a now.json file with a build step that points to the package.json file as an entry point:
{ "version": 2, "builds": [{ "src": "package.json", "use": "@now/static-build" }] }
An example now.json file that uses the above package.json file as an entry point.
Shell File Entry Point
Using a file with a .sh file extension as an entry point for the @now/static-build Builder allows you to define build instructions in a shell file.
For example, creating a shell file, with the name build.sh, with the following contents will install and build a Hugo project:
curl -L -O https://github.com/gohugoio/hugo/releases/download/v0.55.6/hugo_0.55.6_Linux-64bit.tar.gz
tar -xzf hugo_0.55.6_Linux-64bit.tar.gz
./hugo
Example shell file that installs and builds a Hugo project.
The build.sh file accompanies a now.json file with a build step that points to the build.sh file as an entry point:
{ "version": 2, "builds": [ { "src": "build.sh", "use": "@now/static-build", "config": { "distDir": "public" } } ] }
A now.json file that defines the entry point of a build step, using @now/static-build, as the above build.sh shell file.
By default, Hugo builds to the public directory, so the config property contains an extra distDir option that sets the output directory for Now to look in after the build instructions of the build.sh file are finished.
Now will look for the dist directory by default.
Dependencies Installation
The installation algorithm of dependencies works as follows:
For package.json Entry Points
If a package.json is used as an entry point, the following behavior applies for dependencies listed inside of it:
- If a package-lock.json is present in the project, npm install is used.
- Otherwise, yarn is used, by default.
For Shell Entry Points
yum is fully available to install any dependencies from within a shell entry point file.
For example, installing wget from a shell entry point and using it to install Hugo, then unpacking Hugo with tar, which is built-in:
yum install -y wget
wget https://github.com/gohugoio/hugo/releases/download/v0.31.1/hugo_0.31.1_Linux-64bit.tar.gz
tar -xzf hugo_0.31.1_Linux-64bit.tar.gz
Private npm Modules
To install private npm modules, define NPM_TOKEN as a build environment variable in now.json.
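A minimal sketch of that configuration is shown below. The secret name npm-token is an assumption for illustration; you would first create it with the Now CLI (for example, now secrets add npm-token "<your token>") and then reference it with the @ prefix.
{
  "version": 2,
  "build": {
    "env": {
      "NPM_TOKEN": "@npm-token"
    }
  },
  "builds": [{ "src": "package.json", "use": "@now/static-build" }]
}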
Node.js Version
The Node.js version used is the latest in the 8 branch.
Resources
The goal of the ZEIT Docs is to provide you with all the information you need to easily deploy your projects. The following resources are related to this document, and can help you with your goals:
Guide: Deploy Hugo with Now →
Learn how to setup and deploy a Hugo project with Now and add caching headers to it.
Guide: Deploy Vue.js with Now →
Setup and deploy a Vue.js project with Now with caching headers for extra speed.
Guide: Deploy Gatsby with Now →
Setup and deploy a Gatsby project with Now with caching headers for extra speed.
Document: Builds Overview →
Learn all about build steps and what you can achieve with your Now-deployed projects. | https://docs-560461g10.zeit.sh/docs/v2/deployments/official-builders/static-build-now-static-build/ | 2019-08-17T13:30:33 | CC-MAIN-2019-35 | 1566027313259.30 | [] | docs-560461g10.zeit.sh |
Shopping Cart Elite
Overview
This guide describes how to integrate Affirm into your Shopping Cart Elite platform so that you can provide Affirm as a payment option to your customers. After integrating Affirm, your Shopping Cart Elite site will:
- Offer Affirm as payment option on the checkout page
- Process Affirm charges in your order management system
- Display Affirm promotional messaging
The integration steps are:
- Contact Shopping Cart Elite
- Configure Affirm as a payment method
- Add Affirm promotional messaging
- Review your order management functions
- Test your integration
- Set Live
1. Contact Shopping Cart Elite
To use Affirm with Shopping Cart Elite, contact your Shopping Cart Elite account manager to have them enable it for you.
2. Configure Affirm as a payment method
Your Shopping Cart Elite account manager will assist with enabling Affirm as a payment method.
- Go to Settings > Payment > Affirm
- Check the Accept Affirm box
- Enter the API Key (Affirm public key) and Secret Key (Affirm private key) you retrieved from the sandbox merchant dashboard
- Enter the dollar amount values for Minimum Amount that displays Affirm as a payment option to your customers when checking out (optional)
- Check the Test Environment box
3. Add Affirm promotional messaging
Contact Shopping Cart Elite for assistance setting up your Affirm promotional messaging.
4. Review your order management functions
Processing orders (authorize, void, refund, and partial refund) in Shopping Cart Elite updates the order status in the Affirm dashboard. While you can process orders in the dashboard, we strongly recommend using Shopping Cart Elite to keep order status synced with Affirm. For more information on processing orders in Shopping Cart Elite, refer to their documentation.
5. Test your integration
After configuring Affirm, contact your Affirm Client Success Manager to place a test order.
6. Set Live
After testing, your Shopping Cart Elite account manager will need to assist in setting your store to our live environment.
- Retrieve your live API keys at
- Go to Settings > Payment > Affirm
- Enter the API Key (Affirm public key) and Secret Key (Affirm private key) you retrieved from the live merchant dashboard
- Uncheck the Test Environment box | https://docs.affirm.com/Integrate_Affirm/Platform_Integration/Shopping_Cart_Elite | 2019-08-17T14:06:49 | CC-MAIN-2019-35 | 1566027313259.30 | [] | docs.affirm.com |
Rocky Series (3.3.0 - 3.3.x) Release Notes
3.3.2-3
3.3.3.1
3.3.0
New Features
Adds the ability for the Bare Metal service conductor service to explicitly choose to disable ATA Secure Erase from being executed.
If a PReP boot partition is created, and the machine being deployed to is of ppc64le architecture, the grub2 bootloader will be installed directly there. This enables booting partition images locally on ppc64* hardware.
Using this feature requires ironic-lib version 2.14, as support to create the PReP partition was introduced there.
Bug Fixes
Adds an additional check if the smartctl utility is present from the smartmontools package, which performs an ATA disk specific check that should prevent ATA Secure Erase from being performed if a pass-thru device is detected that requires a non-ATA command signaling sequence. Devices such as these can be smart disk interfaces such as RAID controllers and USB disk adapters, which can cause failures when attempting to Secure Erase, which may render the disk unreachable.
Fixes the ATA Secure Erase logic to attempt an immediate unlock in the event of a failed attempt to Secure Erase. This is required to permit fallback to make use of the shred disk utility.
In the event that an ATA Secure Erase operation fails during cleaning, the disk will be write locked. In this case, the disk must be explicitly unlocked.
This should also prevent failures where an ATA Secure Erase operation fails with a pass-through disk controller, which may prevent the disk from being available after a reboot operation. For additional information, please see story 2002546.
Fixes an issue where the secure erase cleaning step would fail if a drive was left in the security-enabled state. This could occur if a previous attempt to perform a secure erase had failed, e.g. if there was a power failure mid-way through the cleaning process. See Story 2001762 and Story 2001763 for details.
Adds support to collect the default IPv6 address for interfaces. Depending on the networking environment, the address could be a link-local address, a ULA, or something else.
Fixes a bug where TinyIPA fails to acquire an IP address when in the RESCUE state in a multi-tenant environment.
JavaScript
The JS HTTP Client was designed to work in the browser, using Node.js, or in Electron.
To use the JS HTTP Client, you'll need either a locally running Daemon or a locally running Desktop Tray instance containing a Textile account and available on localhost with default ports for both the API and Gateway.
Installation
The JS HTTP Client is published as an NPM Package. You are free to use the source code directly or use a package manager to install the official release.
NPM
npm install @textile/js-http-client
Yarn
yarn add @textile/js-http-client
TypeScript
We strongly recommend using TypeScript if you plan to use the JS Client in your project. The JS Client
Initialize the JS Client
The JS HTTP Client does not maintain any state for your app; it simply provides easy-to-use APIs so your app can interact with your user's Textile account. You can import the library into any module simply:
// No initialization needed, only import
import textile from "@textile/js-http-client"
Below are some basic examples to get you started. If you are interested in a more thorough walk-through, check out the Tour of Textile's examples using the JS HTTP Client.
Get the account display name
const name = await textile.profile.name()
Subscribe to file updates
// The js-http-client returns a `ReadableStream` to be accessed by the caller
// See the ReadableStream docs for details
const stream = await textile.subscribe.stream("files")
const reader = stream.getReader()
const read = async (result) => { // ReadableStreamReadResult<FeedItem>
  if (result.done) {
    return
  }
  try {
    console.log(result.value)
  } catch (err) {
    reader.cancel(undefined)
    return
  }
  read(await reader.read())
}
read(await reader.read())
Fetch chronological thread updates
await textile.feed.list("<thread-id>")
Create a thread
const thread = await textile.threads.add("Basic")
Add a file
await textile.files.add({ latitude: 48.858093, longitude: 2.294694 }, "", "<thread-id>")
Files stored as blobs
For threads you create that use the /blob schema, you need to add your files slightly differently if your code is running in Node.js or running in the browser.
const data = new Blob(["mmm, bytes..."]) await textile.files.add( data, "", "<thread-id>" )
const block = await textile.files.add("mmm, bytes...", "", "<thread-id>")
Get a file
In this example, we get a file stored with a /json based schema.
// fileHash is a string hash, commonly found in block.files[index].file.hash
const jsonData = textile.file.content(fileHash)
Files stored as blobs
The same as when writing blob data to threads, you need to be aware of differences when reading blob data if you are in the browser versus if you are in Node.js.
// fileHash is a string hash, commonly found in block.files[index].file.hash
const content = textile.file.content(fileHash)
const reader = new FileReader();
reader.onload = function() {
  console.log(reader.result);
}
reader.readAsText(content);
// fileHash is a string hash, commonly found in block.files[index].file.hash
const content = textile.file.content(fileHash)
Use with redux-saga
Many clients are using libraries such as redux-sagas to manage app state combined with Textile. If you use the above examples, you might hit an issue where you need to declare the context of the function you are calling (detailed in this ticket). Here are a couple examples using Textile to get you started.
Create a thread
export function * createThread(name, key) {
  const thread = yield call([textile.threads, "add"], name, key)
  return thread
}
Get file content
export function * getFile(hash) {
  const file = yield call([textile.file, "content"], hash)
  return file
}
Live playground
You can start playing with some of the core functionality in our developer playground. The playground provides a number of examples that you can run directly against a local Textile node (daemon or desktop installation).
API Documentation
The full API documentation for Textile's JavaScript HTTP Client (js-http-client) can be found at.
Feel free to join the Textile Developer Slack and let us know what you are building. People are always excited to share and learn about new ideas. | https://docs.textile.io/develop/clients/javascript/ | 2019-08-17T13:36:20 | CC-MAIN-2019-35 | 1566027313259.30 | [] | docs.textile.io |
DetachVolume.
Request Parameters
The following parameters are for this specific action. For more information about required and optional parameters that are common to all actions, see Common Query Parameters.
- Device
The device name.
Type: String
Required: No
- DryRun
Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is
DryRunOperation. Otherwise, it is
UnauthorizedOperation.
Type: Boolean
Required: No
- Force.
Type: Boolean
Required: No
- InstanceId
The ID of the instance.
Type: String
Required: No
- VolumeId
The ID of the volume.
This example detaches volume vol-1234567890abcdef0.
Sample Request
https://ec2.amazonaws.com/?Action=DetachVolume
&VolumeId=vol-1234567890abcdef0
&AUTHPARAMS
Sample Response
<DetachVolumeResponse>
   <status>detaching</status>
   <attachTime>YYYY-MM-DDTHH:MM:SS.000Z</attachTime>
</DetachVolumeResponse>
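For quick reference, the equivalent call can also be made with the AWS CLI; the sketch below detaches the same volume, and flags such as --instance-id, --device, --force, or --dry-run can be added as needed.
aws ec2 detach-volume --volume-id vol-1234567890abcdef0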
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following: | https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DetachVolume.html | 2019-08-17T13:26:24 | CC-MAIN-2019-35 | 1566027313259.30 | [] | docs.aws.amazon.com |
All content with label as5+cache+expiration+gridfs+gui_demo+infinispan+interface+listener+write_through+xml.
Related Labels:
podcast, publish, datagrid, coherence, interceptor, server, replication, transactionmanager, dist, query, deadlock, intro, archetype, pojo_cache, jbossas, lock_striping, nexus, guide, schema,
s3, amazon, grid, jcache, test, api, xsd, ehcache, maven, documentation, youtube, userguide, write_behind, 缓存, ec2, hibernate, custom_interceptor, clustering, setup, eviction, fine_grained, concurrency, out_of_memory, jboss_cache, index, events, batch, configuration, hash_function, buddy_replication, loader, pojo, cloud, mvcc, notification, tutorial, presentation, jbosscache3x, read_committed, distribution, cachestore, data_grid, cacheloader, resteasy, hibernate_search, cluster, development, websocket, async, transaction, interactive, xaresource, build, searchable, demo, installation, client, migration, non-blocking, jpa, filesystem, tx, user_guide, article, eventing, client_server, infinispan_user_guide, standalone, repeatable_read, webdav, hotrod, snapshot, docs, batching, consistent_hash, store, whitepaper, jta, faq, spring, 2lcache, jsr-107, jgroups, lucene, locking, rest, hot_rod
LibreOffice » readlicense_oo
View module in: cgit Doxygen
Contains the stock libreoffice licensing blurb, as distributed in the install directory, and also potentially at run-time.
Generating licence files
------------------------
License files are generated from a single source file (license/license.xml). Output file formats are plain text and html.
- The plain text and the html formats are generated with xslt. There are two separate xsl files for plain text and html.
Conditional text
----------------
The contents of the license file depend on the build configuration. Several externals may or may not be shipped with LibreOffice. Therefore, we need to pass information about the build configuration to the xslt processor.
Variables used for conditional text:
- BUILD_TYPE: A space separated list of libraries/externals. If an external is present in that list, then the related license text should be included.
- MPL_SUBSET: If the variable is defined, then GPL and LGPL license text will not be included, because none of the built-in code need it.
- OS: The target platform. E.g. MSVC Runtime is packaged and used only on Windows.
- WITH_THEMES: A space separated list of icon sets that are used in the build.
Conditional text is surrounded by an extra
Best Practices for Applying Service Packs, Hotfixes and Security Patches
By Rick Rosato, Technical Account Manager, Microsoft Corporation
Abstract
This paper recommends a series of best practices for deploying Microsoft service packs, hotfixes and security patches. The information contained in this document has been derived from Microsoft Technical Account Managers, Microsoft Product Support Services (PSS) engineers, and well-known Microsoft subscription products like Microsoft TechNet and Microsoft Developer's Network (MSDN).
On This Page
Introduction
Generic Best Practices
Service Pack Best Practices
Hotfix Best Practices
Security Patches Best Practices
Conclusion
Appendix A - Definitions
Introduction
Service packs, hotfixes and security patches are updates to products that resolve a known issue or provide a workaround.
Moreover, service packs update systems to the most current code base. Being on the current code base is important because that's where Microsoft focuses on fixing problems. For example, any work done on Windows 2000 is targeted at the next service pack and hotfixes are built against the existing available base.
Individual hotfixes and security patches on the other hand should be adopted on a case-by-case, "as-needed" basis. The majority of security updates released are for client side (often browser) issues. They may or may not be relevant to a server installation. Evaluate the update, if it's needed, then apply it. If not, assess the risk of applying or not.
For a full description of service packs, hotfixes and Security Updates, please refer to Appendix A - Definitions.
The basic rules are:
"The risk of implementing the service pack, hotfix and security patch should ALWAYS be LESS than the risk of not implementing it."
And,
"You should never be worse off by implementing a service pack, hotfix and security patch. If you are unsure, then take steps to ensure that there is no doubt when moving them to production systems."
The following guidelines outline the recommended processes to follow before implementing service packs, hotfixes and security patches. You can follow them as a step-by-step guide to having a successful implementation of any Microsoft recommended update.
Generic Best Practices
These apply to all updates regardless of whether they are service packs, hotfixes or security patches. The generic items listed below are mandatory steps that need to be performed across all updates. In addition, there are specific best practices for each type of update, and these are listed under each update. If your current procedure is lacking any of the above, please reconsider carefully before using it for deployment of updates.
Read all related documentation.
Before applying any service pack, hotfix or security patch, all relevant documentation should be read and peer reviewed. The peer review process is critical as it mitigates the risk of a single person missing critical and relevant points when evaluating the update.
Reading all associated documentation is the first step in assessing whether:
The update is relevant, and will resolve an existing issue.
Its adoption won't cause other issues resulting in a compromise of the production system.
There are dependencies relating to the update, (i.e. certain features being enabled or disabled for the update to be effective.)
Potential issues will arise from the sequencing of the update, as specific instructions may state or recommend a sequence of events or updates to occur before the service pack, hotfix or security patch is applied.
Documentation released with the updates is usually in the form of web pages, attached Word documents and README.TXT files. These should be printed off and attached to change control procedures as supporting documentation.
As well as the documentation released with the updates, a search on the Premier Web site () for Premier customers or a search on the public Microsoft Support site () needs to be done for any additional post-release information on the update. TechNet also has the "List of Bugs Fixed in <product name> Service Pack <n>" articles. This is a critical document that must be referenced.
Apply updates on a needs-only basis.
Especially with security patches, the expectation is that it must be an urgent issue and must be deployed quickly. Without trying to detract from the urgency, security patches are very much a relative update; for example, customers using solely Windows NT4 can ignore a patch for a security vulnerability in Windows 2000. However, if the issue is relevant and does plug a security hole, then it should be evaluated urgently.
Only when it addresses or fixes an issue being experienced by the customer should it be considered. Of course, it still needs to be evaluated before being installed.
Testing.
The prior points really assist in giving you a feel (before installing) for the potential impact, however, testing allows for the "test driving" and eventual signing off of the update.
Service packs and hotfixes must be tested on a representative non-production environment prior to being deployed to production. This will help to gauge the impact of such changes.
Plan to uninstall.
Where possible, service packs, hotfixes and security patches must be installed such that they can be uninstalled, if required.
Historically, service packs have allowed for uninstalling, so verify there is enough free hard disk space to create the uninstall folder.
Consistency across Domain Controllers.
Service packs, hotfixes and security patch levels must be consistent on all Domain Controllers (DCs). Inconsistent update levels across DCs can lead to DC-to-DC synchronisation and replication related problems. It is extremely difficult to trap errors caused by DCs being out of sync, so it's critical that consistency is maintained.
Where it is practical, Member Servers should also be updated with the same service packs and hotfixes as the Domain Controllers.
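One way to audit update-level consistency is to query each server's installed hotfix list programmatically. The following C# sketch is illustrative only: it assumes the System.Management assembly is referenced, that remote WMI access is permitted, and that the server names are placeholders for your own Domain Controllers.

using System;
using System.Management;

class HotfixAudit
{
    static void Main()
    {
        // Placeholder names - replace with your own Domain Controllers and Member Servers.
        string[] servers = { "DC1", "DC2" };

        foreach (string server in servers)
        {
            Console.WriteLine("Hotfixes installed on {0}:", server);

            // Connect to the remote WMI namespace and list installed hotfixes.
            ManagementScope scope = new ManagementScope(@"\\" + server + @"\root\cimv2");
            ObjectQuery query = new ObjectQuery(
                "SELECT HotFixID, Description FROM Win32_QuickFixEngineering");

            using (ManagementObjectSearcher searcher = new ManagementObjectSearcher(scope, query))
            {
                foreach (ManagementObject fix in searcher.Get())
                {
                    Console.WriteLine("  {0}  {1}", fix["HotFixID"], fix["Description"]);
                }
            }
        }
    }
}

Comparing the resulting lists across Domain Controllers quickly highlights any server that has fallen out of step.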
Have a working Backup and schedule production downtime.
Server outages should be scheduled and a complete set of backup tapes and emergency repair disks should available, in case a restoration is required.
Make sure that you have a working backup of your system. The only supported method of restoring your server to a previous working installation is from a backup. For more information on backup and recovery procedures, please refer to:
" Backup " in the Microsoft Windows 2000 Server Resource Kit Server Operations Guide.
" Repair, Recovery, and Restore " in the Microsoft Windows 2000 Server Resource Kit Server Operations Guide.
Always have a back-out plan.
Enterprises may need to exercise their back-out plan in the event of the update not having an uninstall process or the uninstall process failing. The back-out plan can be as simple as restoring from tape, or may involve many lengthy manual procedures.
Forewarn helpdesk and key user groups.
You need to notify helpdesk staff and support agencies (such as Microsoft Product Support Service - PSS) of the pending changes so they may be ready for arising issues or outages.
In order to minimize the user impact, it is also a good idea to prepare key user group of proposed updates, this will assist in managing user expectations.
Don't get more than 2 service packs behind.
Schedule periodic service pack upgrades as part of your operations maintenance and try never to be more than two service packs behind.
Target non-critical servers first.
If all tests in the lab environment are successful, start deploying on non-critical servers first, if possible, and then move to the primary servers once the service pack has been in production for 10-14 days.
Service Pack Best Practices
There are great Microsoft TechNet articles that reference Service Pack Best Practices. All you need to know can be found in the documents list below:
Before Installing a Windows NT Service Pack (165418)
Strategies for MS Exchange Service Packs and Version Upgrades.
The Microsoft Windows 2000 Service Pack Installation and Deployment Guide.
Hotfix Best Practices
Service Pack (SP) level consistency.
Don't deploy a hotfix until you have all current service packs installed. A hotfix is related to a service pack and should be deployed with this in mind. If a hotfix is for post-Windows 2000 SP2, for example, then you need to ensure that the server has SP2 installed.
Latest SP instead of multiple hotfixes.
If multiple hotfixes are to be applied and these are included in the latest released service pack, apply the latest service pack instead of applying several hotfixes, unless known issues in the latest service pack may cause the server to break.
Security Patches Best Practices
Apply admin patches to install build areas.
It is crucial that not only are existing clients retrospectively updated with security patches, but that client build areas are also updated for any new clients. Admin patches are required and differ from the client patches; they need to be applied to client build areas, and they are usually located in a different place to the client-side patches. For example, the admin Office security patches are available from the Office Resource Kit Toolbox.
Apply only on exact match.
Apply these fixes only if you encounter exactly the issue the fix solves or if the circumstances relate to your environment.
Subscribe to the notification alias to receive proactive emails on the latest security patches.
Conclusion
It is critical that when service packs, hotfixes, and security patches are required to be installed, that these best practices be followed. They will guide you through the successful deployment of an update, and will assist your enterprise's recovery should the update fail.
Appendix A - Definitions
Service packs
Service packs correct known problems and provide tools, drivers, and updates that extend product functionality, including enhancements developed after the product released. They get you up to our current code base. Being on the current code base is important because that's where we fix the code.
Service packs keep the product current, and extend and update your computer's functionality. Service packs include updates, system administration tools, drivers, and additional components. All are conveniently bundled for easy downloading.
Service packs are product specific, so there are separate ones for each product. Being product specific does not however, mean that they are SKU (Stock-Keeping Unit) specific. For example, Windows NT 4.0 Server and Workstation will use the same service pack.
Service packs are cumulative - each new service pack contains all the fixes in previous service packs, as well as any new fixes. You do not need to install a previous service pack before you install the latest one. For example, Service Pack 6a contains all the fixes in SPs 1, 2, 3, 4, 5 and 6.
Hotfixes or QFE's
QFE (Quick Fix Engineering) is a group within Microsoft that produces "hotfixes" - code patches for products that are provided to individual customers when they experience critical problems for which no feasible workaround is available.
Hotfixes are not intended for general installation, since they do not undergo extensive beta testing when they are created. Microsoft targets hotfix support toward enterprise-level customers and designs it to provide an extra level of security for mission-critical software systems.
Groups of "hotfixes" are periodically incorporated into service packs that undergo more rigorous testing and are then made generally available to other customers.
They are not regression tested. Hotfixes are very specific - you should apply one only if you experience the exact problem they address and are using the current software version with the latest service pack.
General criteria to meet in order for an issue to be evaluated for a potential bug fix are:
Excessive loss of work or revenue to the customer
No reasonable, Customer accepted, workaround exists
Priority given to Premier accounts
Microsoft offers full n-1 QFE support. This means that we will support the current shipping version of a product and its predecessor. "Version" is defined as a new release with some added functionality and does not include "A" level releases, which are considered strictly maintenance releases. For example, if version 3.5 is superseded by version 3.5A, bug fixes will not be done on 3.5 since 3.5A is a maintenance release. On the other hand, if version 4.21 is superseded by version 6.0 (a functionality release), bug fixes will continue to be done against 4.21 until another version ships. If there is not an update to a product, (ex: LAN Manager) then we will continue to do bug fixes until such point when volume has significantly slowed. We will apprise our Customers at least 6 months in advance of us discontinuing QFE support.
Security Patches
Security patches eliminate security vulnerabilities. Attackers wanting to break into systems can exploit these vulnerabilities. Security patches are analogous to hotfixes but are deemed mandatory if the circumstances match, and they need to be deployed quickly.
The majority of security updates released are for client side (often browser) issues. They may or may not be relevant to a server installation. You need to obtain both the admin patch and the client patch as the client patch will retroactively update your client base and the admin patch will update your client build area on the server.
Great articles on security patches are available at:
As mentioned above, the admin patches are located in a different location to the client-side patches. For example, the admin Office security patches are available from the Office Resource Kit Toolbox.
You also need to be aware that Microsoft makes available four primary areas to obtain client software security patches for its products. This means that if users have access to the Internet, they may have updated their PCs using one of the four mechanisms.
Windows Update. It uses ActiveX technology to scan the PC to see what has been installed and presents a list of suggested components that need upgrading based on the most up-to-date and accurate versions.
Recent security bulletins can be found on the Microsoft security bulletin search page. This is the best place to search for purely security-related patches, especially since it was upgraded to allow searching by product or date.
You can also visit product-specific security patch download pages. They are available for Internet Explorer (IE) and Office Updates. The IE download page is a simple, chronological list of security patches that makes it easy to click down the list and get what you need. Unlike WU, however, there is no facility for identifying which patches have already been installed.
Finally, search the Microsoft Download Center.
Queryable.Sum<TSource> Method (IQueryable<TSource>, Expression<Func<TSource, Nullable<Int64>>>)
Namespace: System.Linq
Assembly: System.Core (in System.Core.dll)
Syntax
'Declaration
<ExtensionAttribute> _
Public Shared Function Sum(Of TSource) ( _
    source As IQueryable(Of TSource), _
    selector As Expression(Of Func(Of TSource, Nullable(Of Long))) _
) As Nullable(Of Long)
public static Nullable<long> Sum<TSource>(
    this IQueryable<TSource> source,
    Expression<Func<TSource, Nullable<long>>> selector
)
Return Value
Type: System.Nullable<Int64>
The sum of the projected values.
Examples
The following code example demonstrates how to use Sum<TSource>(IQueryable<TSource>, Expression<Func<TSource, Double>>) to sum the projected values of a sequence.
Structure Package
    Public Company As String
    Public Weight As Double
End Structure

Shared Sub SumEx3(ByVal outputBlock As System.Windows.Controls.TextBlock)
    ' The sample data below is illustrative; it sums to the output shown.
    Dim packages As New List(Of Package)(New Package() { _
        New Package With {.Company = "Coho Vineyard", .Weight = 25.2}, _
        New Package With {.Company = "Lucerne Publishing", .Weight = 18.7}, _
        New Package With {.Company = "Wingtip Toys", .Weight = 6.0}, _
        New Package With {.Company = "Adventure Works", .Weight = 33.8}})

    ' Calculate the sum of all package weights.
    Dim totalWeight As Double = packages.AsQueryable().Sum(Function(pkg) pkg.Weight)

    outputBlock.Text &= "The total weight of the packages is: " & totalWeight & vbCrLf
End Sub

' This code produces the following output:
' The total weight of the packages is: 83.7

struct Package
{
    public string Company;
    public double Weight;
}

static void SumEx3(System.Windows.Controls.TextBlock outputBlock)
{
    // The sample data below is illustrative; it sums to the output shown.
    List<Package> packages = new List<Package>
    {
        new Package { Company = "Coho Vineyard", Weight = 25.2 },
        new Package { Company = "Lucerne Publishing", Weight = 18.7 },
        new Package { Company = "Wingtip Toys", Weight = 6.0 },
        new Package { Company = "Adventure Works", Weight = 33.8 }
    };

    // Calculate the sum of all package weights.
    double totalWeight = packages.AsQueryable().Sum(pkg => pkg.Weight);

    outputBlock.Text += String.Format("The total weight of the packages is: {0}", totalWeight) + "\n";
}

/* This code produces the following output:
   The total weight of the packages is: 83.7
*/
Version Information
Silverlight
Supported in: 5, 4, 3
Silverlight for Windows Phone
Supported in: Windows Phone OS 7.1
Platforms
For a list of the operating systems and browsers that are supported by Silverlight, see Supported Operating Systems and Browsers.
See Also | https://docs.microsoft.com/en-us/previous-versions/windows/silverlight/dotnet-windows-silverlight/bb534866%28v%3Dvs.95%29 | 2019-08-17T14:35:12 | CC-MAIN-2019-35 | 1566027313259.30 | [] | docs.microsoft.com |
RaviIndicator
RaviIndicator (range action verification index) identifies if a values pair is trending.
To set up the indicator you can set its CategoryBinding, ValueBinding and ItemsSource properties. Additionally, you can control the periods (in days) over which the indicator will be applied. To do this set the LongPeriod and ShortPeriod properties.
Example 1: RaviIndicator
<telerik:RadCartesianChart.Indicators>
    <!-- The binding paths and period values shown here are illustrative. -->
    <telerik:RaviIndicator ItemsSource="{Binding Data}"
                           CategoryBinding="Date"
                           ValueBinding="Value"
                           ShortPeriod="7"
                           LongPeriod="65" />
</telerik:RadCartesianChart.Indicators>
Figure 1: RaviIndicator
dtype=object) ] | docs.telerik.com |
Rocky Series (5.1.0 - 5.1.x) Release Notes¶
5.1.4¶
5.1.2¶
5.1.0¶
New Features¶
By adding extra variable
-e ipa_upstream_release=stable-mitakafor instance, the deployment can now use all ramdisk and kernel images available in instead of the default
master.
Furthermore, as some of these files do not have any .sha256 checksum associated to them, the downloading of these file is now just issuing a “warning” and is not reported as an Ansible error in the final summary.
Custom partitioning YAML file can now be specified using partitioning_file variable which contains a path to the YAML file describing the partitions layout. For example:
- name: home
  size: 1G
  mkfs:
    type: xfs
    mount:
      mount_point: /home
      fstab:
        options: "rw,nodev,relatime"
For more informations please refer to the following links: Disk Image Layout Section Standard Partitioning LVM Partitioning
Allow to populate the NTP servers setting of dnsmasq. This is optional, but if
dnsmasq_ntp_servers``setting is set, it adds a ``dhcp-option=42,dnsmasq_ntp_serversto the generated dnsmasq configuration for bifrost.
Stores introspection data in nginx.
In the absence of swift, we can now use the bifrost nginx web server - masquerading as an object store - to store raw and processed introspection data for nodes. This is configured via the boolean variable
inspector_store_data_in_nginxand is enabled by default. | https://docs.openstack.org/releasenotes/bifrost/rocky.html | 2019-08-17T12:43:33 | CC-MAIN-2019-35 | 1566027313259.30 | [] | docs.openstack.org |
Rocky Series Release Notes¶
3.1.1-6¶
Upgrade Notes¶
To enable UDP listener monitoring when no pool is attached, the amphora image needs to be updated and load balancers with UDP listeners need to be failed over to the new image.
3.1.1¶
Bug Fixes¶
Fixed duplicated IPv6 addresses in Active/Standby mode in CentOS amphorae.
Fixed an issue where the listener API would accept null/None values for fields that must have a valid value, such as connection-limit. Now when a PUT call is made to one of these fields with null as the value the API will reset the field value to the field default value.
3.1.0¶
Upgrade Notes¶
To fix the issue with active/standby load balancers or single topology load balancers with members on the VIP subnet, you need to update the amphora image.
Critical Issues¶
Fixed a bug where active/standby load balancers and single topology load balancers with members on the VIP subnet may fail. An updated image is required to fix this bug.
Security Issues¶
As a followup to the fix that resolved CVE-2018-16856, Octavia will now encrypt certificates and keys used for secure communication with amphorae, in its internal workflows. Octavia used to exclude debug-level log prints for specific tasks and flows that were explicitly specified by name, a method that is susceptible to code changes.
Bug Fixes¶
Fixed an issue creating members on networks with IPv6 subnets.
Fixes creating a fully populated load balancer with a non-REDIRECT_POOL type L7 policy and a default_pool field.
Fixed a performance issue where the Housekeeping service could significantly and incrementally utilize CPU as more amphorae and load balancers are created and/or marked as DELETED.
Fix load balancers that could not be failed over when in ERROR provisioning status.
Fixed a bug that caused an excessive number of RabbitMQ connections to be opened.
Fixed an error when plugging the VIP on CentOS-based amphorae.
Fixed an issue where trying to set a QoS policy on a VIP while the QoS extension is disabled would bring the load balancer to ERROR. Should the QoS extension be disabled, the API will now return HTTP 400 to the user.
Fixed an issue where setting a QoS policy on the VIP would bring the load balancer to ERROR when the QoS extension is enabled.
Octavia will no longer automatically revoke access to secrets whenever load balancing resources no longer require access to them. This may be added in the future.
Other Notes¶
Added a new option named server_certs_key_passphrase under the certificates section. The default value gets copied from an environment variable named TLS_PASS_AMPS_DEFAULT. In a case where TLS_PASS_AMPS_DEFAULT is not set, and the operator did not fill any other value directly, ‘insecure-key-do-not-use-this-key’ will be used.
3.0.2¶
Upgrade Notes¶
To resolve the IPv6 VIP issues on active/standby load balancers you need to build a new amphora image.
Security Issues¶
Fixed a debug level logging of Amphora certificates for flows such as ‘octavia-create-amp-for-lb-subflow-octavia-generate-serverpem’ (triggered with loadbalancer failover) and ‘octavia-create-amp-for-lb-subflow-octavia-update-cert-expiration’.
Bug Fixes¶
Fixes issues using IPv6 VIP addresses with load balancers configured for active/standby topology. This fix requires a new amphora image to be built.
Add new parameters to specify the number of threads for updating amphora health and stats.
This will automatically nova delete zombie amphora when they are detected by Octavia. Zombie amphorae are amphorae which report health messages but appear DELETED in Octavia’s database.
3.0.1¶
3.0.0¶
New Features¶
Added UDP protocol support to listeners and pools.
Adds a health monitor type of UDP-CONNECT that does a basic UDP port connect.
Listeners have four new timeout settings:
timeout_client_data: Frontend client inactivity timeout
timeout_member_connect: Backend member connection timeout
timeout_member_data: Backend member inactivity timeout
timeout_tcp_inspect: Time to wait for TCP packets for content inspection
The value for all of these fields is expected to be in milliseconds.
Members have a new boolean option backup. When set to true, the member will not receive traffic until all non-backup members are offline. Once all non-backup members are offline, traffic will begin balancing between the backup members.
Added ability for Octavia to automatically set Barbican ACLs on behalf of the user. Such enables users to create TLS-terminated listeners without having to add the Octavia keystone user id to the ACL list. Octavia will also automatically revoke access to secrets whenever load balancing resources no longer require access to them.
Add sos element to amphora images (Red Hat family only).
Adding support for the listener X-Forwarded-Proto header insertion.
Octavia now supports provider drivers. This allows third party load balancing drivers to be integrated with the Octavia v2 API. Users select the “provider” for a load balancer at creation time.
There is now an API available to list enabled provider drivers.
Cloud deployers can set api_settings.allow_ping_health_monitors = False in octavia.conf to disable the ability to create PING health monitors.
The new option [haproxy_amphora]/connection_logging will disable logging of connection data if set to False which can improve performance of the load balancer and might aid compliance.
You can now update the running configuration of the Octavia control plane processes by sending the parent process a “HUP” signal. Note: The configuration item must support mutation.
Amphora API now returns the field image_id which is the ID of the glance image used to boot the amphora.
Known Issues¶
You cannot mix IPv4 UDP listeners with IPv6 members at this time. This is being tracked with this story
Upgrade Notes¶
UDP protocol support requires an update to the amphora image to support UDP protocol statistics reporting and UDP-CONNECT health monitoring.
Two new options are included with provider driver support. The enabled_provider_drivers option defaults to “amphora, octavia” to support existing Octavia load balancers. The default_provider_driver option defaults to “amphora” for all new load balancers that do not specify a provider at creation time. These defaults should cover most existing deployments.
The provider driver support requires a database migration and follows Octavia standard rolling upgrade procedures; database migration followed by rolling control plane upgrades. Existing load balancers with no provider specified will be assigned “amphora” as part of the database migration.
The fix for the hmac.compare_digest on python3 requires you to upgrade your health managers before updating the amphora image. The health manager is compatible with older amphora images, but older controllers will reject the health heartbeats from images with this fix.
Deprecation Notes¶
The quota objects named health_monitor and load_balancer have been renamed to healthmonitor and loadbalancer, respectively. The old names are deprecated, and will be removed in the T cycle.
The Octavia API handlers are now deprecated and replaced by the new provider driver support. Octavia API handlers will remain in the code to support the Octavia v1 API (used for neutron-lbaas).
Provider of “octavia” has been deprecated in favor of “amphora” to clarify the provider driver supporting the load balancer.
Finally completely the remove user_group option, as it was deprecated in Pike.
Security Issues¶
Disabling connection logging might make it more difficult to audit systems for unauthorized access, from which IPs it originated, and which assets were compromised.
Adds a configuration option, “reserved_ips” that allows the operator to block addresses from being used in load balancer members. The default setting blocks the nova metadata service address.
Bug Fixes¶
Fixes the v2 API returning “DELETED” records until the amphora_expiry_age timeout expired. The API will now immediately return a 404 HTTP status code when deleted objects are requested. The API version has been raised to v2.1 to reflect this change.
Fixes an issue where if more than one amphora fails at the same time, failover might not fully complete, leaving the load balancer in ERROR.
Fixes an issue where VIP return traffic was always routed, if a gateway was defined, through the gateway address even if it was local traffic.
Fixes a bug where unspecified or unlimited listener connection limit settings would lead to a 2000 connection limit when using the amphora/octavia driver. This was the compiled in connection limit in some HAproxy packages.
Fixes an issue with hmac.compare_digest on python3 that could cause health manager “calculated hmac not equal to msg hmac” errors.
Creating a member on a pool with no healthmonitor would sometimes briefly update their operating status from NO_MONITOR to OFFLINE and back to NO_MONITOR during the provisioning sequence. This flapping will no longer occur.
Members that are disabled via admin_state_up=False are now rendered in the HAProxy configuration on the amphora as disabled. Previously they were not rendered at all. This means that disabled members will now appear in health messages, and will properly change status to OFFLINE.
Fixes a neutron-lbaas LBaaS v2 API compatibility issue when requesting a load balancer status tree via ‘/statuses’.
Other Notes¶
Health monitors of type UDP-CONNECT may not work correctly if ICMP unreachable is not enabled on the member server or is blocked by a security rule. A member server may be marked as operating status ONLINE when it is actually down.
A provider driver developer guide has been added to the documentation to aid driver providers.
An operator documentation page has been added to list known Octavia provider drivers and provide links to those drivers. Non-reference drivers, drivers other than the “amphora” driver, will be outside of the octavia code repository but are dynamically loadable via a well defined interface described in the provider driver developers guide.
Installed drivers need to be enabled for use in the Octavia configuration file once you are ready to expose the driver to users.
As part of GDPR compliance, connection logs might be considered personal data and might need to follow specific data retention policies. Disabling connection logging might aid in making Octavia compliant by preventing the output of such data. As always, consult with an expert on compliance prior to making changes. | https://docs.openstack.org/releasenotes/octavia/rocky.html | 2019-08-17T12:39:04 | CC-MAIN-2019-35 | 1566027313259.30 | [] | docs.openstack.org |
Manage HR roles
When impersonating a user with a scoped HR role, an admin or any HR scoped user cannot access features granted by that role. HR cases and profile information cannot be accessed. Only users with the scoped HR Administrator [sn_hr_core.admin] role can see case details when impersonating other scoped HR users. Also, admin cannot change the password of any user with a scoped HR role. For more information on impersonating a user, see Impersonate a user.
HR Performance Analytics
To configure the Performance Analytics (PA) dashboard, assign the Performance Analytics Administrator [pa_admin] role to the HR Administrator [sn_hr_core.admin] role. Note: Only the System Administrator [admin] can assign PA roles to employees.
Log out and log back in to ensure that the changes take effect. Ensure that you have at least two users with the HR Administrator role. If you assign only one person with the role and that person is deactivated, you no longer have a user that can perform the HR admin duties. Note: Ensure that you have completed setup before removing the HR Administrator role. Note: Scheduled jobs that require the Admin role do not run, but all HR scheduled jobs should run after the Admin role is removed.
License Agreement
Unless otherwise noted, all ASK Software products are covered by the ASK Software End User License Agreement. The license agreement should be read in conjunction with the ASK Software Privacy Policy.
Additional Information
Atlassian handles all ASK Software licenses and billing. You can find licenses, quotes, invoices and billing details on your My Atlassian account.
- Atlassian Marketplace FAQ for full details on add-on licensing, support, and upgrade information.
- How can I request a quote?
- What happens when the included maintenance ends?
- Licenses for staging/development/test systems are available at no charge via your my.atlassian.com account
Licenses are installed after the app is installed using the Atlassian Universal Plugin Manager (UPM).
If your evaluation license is going to expire before the purchasing process is complete go to the Marketplace entry for the add-on and press Free 30 Day Trial to get another evaluation license. You can then install the new evaluation license on your instance via Manage Add-ons in Jira.
Community/Open Source Software (OSS) licenses for apps are available free by emailing [email protected] and requesting a Community/OSS license.
If you have any other licensing problems please contact Atlassian via email at [email protected]. | https://docs.asksoftware.eu/license-agreement/ | 2019-08-17T12:39:04 | CC-MAIN-2019-35 | 1566027313259.30 | [] | docs.asksoftware.eu |
SessionSecurityTokenCacheKey Class
Definition
Represents the key for an entry in a SessionSecurityTokenCache.
public ref class SessionSecurityTokenCacheKey
public class SessionSecurityTokenCacheKey
type SessionSecurityTokenCacheKey = class
Public Class SessionSecurityTokenCacheKey
Inheritance: Object → SessionSecurityTokenCacheKey
Remarks
When caching a SessionSecurityToken there are two indexes required. One is the context ID, represented by the SessionSecurityToken.ContextId property, that is unique across all session tokens. The other is the key generation, represented by the SessionSecurityToken.KeyGeneration property, which is unique within a session token. When a session token is issued it has only a context ID. When the session token is renewed, the key generation is added. After renewal, the renewed session token is uniquely identifiable via the context ID and key generation.
Objects of type SessionSecurityTokenCacheKey are used as the indexes to the session token cache. An index will always have a valid ContextId property specified, but the KeyGeneration property may be
null, depending on whether the token has been renewed. There is also an optional EndpointId which gives the endpoint to which the token is scoped. | https://docs.microsoft.com/en-us/dotnet/api/system.identitymodel.tokens.sessionsecuritytokencachekey?view=netframework-4.8 | 2019-08-17T13:14:22 | CC-MAIN-2019-35 | 1566027313259.30 | [] | docs.microsoft.com |
MemberInfo.MetadataToken Property
Definition
Gets a value that identifies a metadata element.
public: virtual property int MetadataToken { int get(); };
public virtual int MetadataToken { get; }
member this.MetadataToken : int
Public Overridable ReadOnly Property MetadataToken As Integer
Property Value
A value which, in combination with Module, uniquely identifies a metadata element.
Exceptions
The current MemberInfo represents an array method, such as
Address, on an array type whose element type is a dynamic type that has not been completed. To get a metadata token in this case, pass the MemberInfo object to the GetMethodToken(MethodInfo) method; or use the GetArrayMethodToken(Type, String, CallingConventions, Type, Type[]) method to get the token directly, instead of using the GetArrayMethod(Type, String, CallingConventions, Type, Type[]) method to get a MethodInfo first.
Remarks
The tokens obtained using this property can be passed to the unmanaged reflection API. For more information, please see Unmanaged Reflection API.
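For example, the following sketch retrieves the token and the defining module for a method so that the pair could be handed to the unmanaged metadata interfaces; the member chosen here is arbitrary.

using System;
using System.Reflection;

class TokenExample
{
    static void Main()
    {
        // Any member works; here we inspect String.IndexOf(Char).
        MethodInfo method = typeof(string).GetMethod("IndexOf", new Type[] { typeof(char) });

        // A token is only meaningful in combination with the module that defines it.
        Console.WriteLine("Token:  0x{0:X8}", method.MetadataToken);
        Console.WriteLine("Module: {0}", method.Module.Name);
    }
}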
Note
Using the unmanaged reflection API. | https://docs.microsoft.com/en-us/dotnet/api/system.reflection.memberinfo.metadatatoken?view=netstandard-2.1 | 2019-08-17T14:10:03 | CC-MAIN-2019-35 | 1566027313259.30 | [] | docs.microsoft.com |
How to: Set the Properties of a Package
This procedure describes how to configure package properties by using the Properties window.
Note
The package must be opened in SSIS Designer before the Properties window lists the properties you set to configure a package.
To set package properties
In Business Intelligence Development Studio, open the Integration Services project that contains the package you want to configure.
See Also | https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2008-r2/ms137759%28v%3Dsql.105%29 | 2019-08-17T13:37:21 | CC-MAIN-2019-35 | 1566027313259.30 | [] | docs.microsoft.com |
AutomationProperties.AutomationIdProperty Property
Definition
Identifies the AutomationProperties.AutomationId attached property, which is a string containing the UI Automation identifier (ID) for the automation element.
public : static DependencyProperty AutomationIdProperty { get; }
static DependencyProperty AutomationIdProperty();
public static DependencyProperty AutomationIdProperty { get; }
Public Shared ReadOnly Property AutomationIdProperty As DependencyProperty
Property Value
The identifier for the AutomationProperties.AutomationId attached property.
Remarks
When it is available, the AutomationId of an element must be the same in any instance of the application, regardless of the local language. The value should be unique among sibling elements, but not necessarily unique across the entire desktop. For example, multiple instances of an application, or multiple folder views in Windows Explorer, can contain elements with the same AutomationId property, such as "SystemMenuBar".
Although support for AutomationId is always recommended for better automated testing support, this property is not mandatory. Where it is supported, AutomationId is useful for creating a test automation script that runs regardless of the UI language. Clients should make no assumptions regarding the AutomationId values exposed by other applications. AutomationId is not guaranteed to be stable across different releases or builds of an application.
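The attached property is most often set in XAML, but it can also be set and read from code-behind, as in this minimal sketch (the button instance is assumed to be defined elsewhere in your page):

using Windows.UI.Xaml.Automation;
using Windows.UI.Xaml.Controls;

public static class AutomationIdHelper
{
    public static void TagForAutomation(Button okButton)
    {
        // Assign a locale-invariant identifier that UI Automation clients can use.
        AutomationProperties.SetAutomationId(okButton, "OkButton");

        // The matching getter reads the attached property back.
        string automationId = AutomationProperties.GetAutomationId(okButton);
    }
}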
Administrator Overrides
Windows has a single default access control list (ACL) on all power policy objects. The ACL grants read, write, and change permissions to members of the Authenticated Users group. All other users are granted read-only permission. Applications that call power policy functions can determine whether a user has permissions for a specified power setting by using the PowerSettingAccessCheck function.
Power settings may be overridden via Group Policy. Overrides can be set through domain group policy or local group policy. PowerSettingAccessCheck will report if a specified power setting has a group policy override.
The command line tool PowerCfg.exe displays an error message when a command cannot be completed either because the user did not have permissions to change the power setting or because the power setting has a group policy override.
The Power Options application in Control Panel does not provide support for configuring power policy access permissions. Changing permissions must be done via PowerCfg.exe. | https://docs.microsoft.com/en-us/windows/win32/power/administrator-overrides | 2019-08-17T13:25:57 | CC-MAIN-2019-35 | 1566027313259.30 | [] | docs.microsoft.com |
UIElementCollection Class
Definition
public : sealed class UIElementCollection
struct winrt::Windows::UI::Xaml::Controls::UIElementCollection
public sealed class UIElementCollection
Public NotInheritable Class UIElementCollection
<panelobject> oneOrMoreChildren </panelobject>
Attributes
Windows 10 requirements
Remarks
A UIElementCollection is the type of object that you get from the Children property of a Panel. For example, if you get a value from Grid.Children, that value is a UIElementCollection instance. All the properties that use a UIElementCollection in the Windows Runtime API are read-only properties, where the property is initialized with zero items when an object is first instantiated. But you can then add, remove or query items in the collection at run time, using the UIElementCollection properties and methods.
The type of the items in the UIElementCollection is constrained as UIElement. But UIElement is a base element class in Windows Runtime using XAML, so there are hundreds of element types that can be treated as a UIElement and can thus be one of the items in a UIElementCollection.
Enumerating the collection in C# or Microsoft Visual Basic
A UIElementCollection is enumerable, so you can use language-specific syntax such as foreach in C# to enumerate the items in the UIElementCollection. The compiler does the type-casting for you and you won't need to cast to
IEnumerable<UIElement> explicitly. If you do need to cast explicitly, for example if you want to call GetEnumerator, cast to IEnumerable<UIElement>.
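For example, a minimal code-behind sketch that adds to and enumerates a panel's Children collection (the Grid parameter stands in for any panel in your layout):

using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;

public static class ChildWalker
{
    public static int AddAndCount(Grid layoutRoot)
    {
        // Children is a UIElementCollection; it starts empty and is populated at run time.
        layoutRoot.Children.Add(new TextBlock { Text = "Hello" });

        // The collection is enumerable, so foreach works without an explicit cast.
        int count = 0;
        foreach (UIElement child in layoutRoot.Children)
        {
            count++;
        }
        return count;
    }
}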
Properties
Methods
See also
Features and restrictions by context (HTML)
[ This article is for Windows 8.x and Windows Phone 8.x developers writing Windows Runtime apps. If you’re developing for Windows 10, see the latest documentation ]
Learn about the features available to pages in the local and web contexts.
General features and restrictions
This table describes some of the features and restrictions that are available depending on whether the page is running in the local or web context.
Windows Library for JavaScript in the web context
Although you can use WinJS in the web context, some of its APIs work differently because the web context does not have access to the Windows Runtime. Here are some of the APIs that are affected:
When the Splunk Metrics Workspace is deployed on Splunk Enterprise, the Splunk platform sends anonymized usage data to Splunk Inc. ("Splunk") to help improve the Splunk Metrics Workspace in future releases. For information about how to opt in or out, and how the data is collected, stored, and governed, see Share data in Splunk Enterprise.
What data is collected
This documentation applies to the following versions of Splunk® Metrics Workspace: 1.1.6
Splunk Cloud Service Details
Splunk Cloud delivers the benefits of award-winning Splunk® Enterprise as a cloud-based service. Splunk manages and updates the service uniformly, so all customers of Splunk Cloud receive the most current features and functionality.
Subscription pricing for Splunk Cloud is based on the volume of uncompressed data that you want to index on a daily basis. The subscription pricing also includes access to Splunk support and a fixed amount of data storage. You can optionally add subscriptions for additional storage capacity to store more data, encryption service to maintain privacy of data at rest, a HIPAA or PCI cloud environment to assist you with meeting your compliance needs, and new use cases for Splunk Cloud with Splunk premium solutions such as Enterprise Security and IT Service Intelligence.
Splunk Cloud is available in the following global regions:
- US (Oregon, Virginia and GovCloud)
- EU (Dublin, Frankfurt, London)
- Asia Pacific (Singapore, Sydney, Tokyo)
- Canada (Central)
Ensure Operational Contacts listed in your Splunk.com support portal are regularly updated. Operational Contacts are notified when your Splunk Cloud environment undergoes maintenance, requires configuration awareness, or experiences a performance-impacting event.
For commonly asked questions about Managed Splunk Cloud, see the FAQ for Splunk Cloud.
For more information about self-service Splunk Cloud, including free trials, see the Self-service Splunk Cloud FAQ.
For more information about the terms of service, see the Splunk Cloud Terms of Service.
Data collection
Splunk Cloud provides software and APIs that enable you to ingest data from your applications, cloud services, servers, network devices, and sensors into the service. You can send data to Splunk Cloud as follows:
Using Splunk forwarders: There are two types of forwarder software: universal forwarder and heavy forwarder. In most situations, the universal forwarder is the best forwarder for Splunk Cloud. Your Splunk Cloud subscription includes a deployment server license for centralized configuration management of your Splunk forwarders. You can request the deployment server license from Splunk support. Setup, enablement, transformation, and sending data from forwarders to your Splunk Cloud environment is your responsibility. This means you are responsible for installing, configuring, and managing your forwarders, including maintaining version compatibility (see Supported Forwarder Versions for details). You are responsible for installing the data collection components of any app you wish to use in Splunk Cloud on a Splunk forwarder.
Splunk Cloud supports scripted and modular inputs either via the Inputs Data Manager (IDM) that is included in your Splunk Cloud subscription or via heavy forwarders that you manage and maintain. When you require an app installed on the IDM, you need to open a support ticket and Splunk support will install the app on your behalf.
For more information about Inputs Data Manager, see Features of Splunk Cloud in the Splunk Cloud User manual.
For more information, see Upload Data in the Getting Data In manual.
Using HTTP Event Collector (HEC): HEC lets you send data and application events using a token-based authentication mode to Splunk Cloud over the Secure HTTP (HTTPS) protocol. You can generate a token and then configure a logging library or HTTPS client with the token to send data to HEC in a specific format. HEC is enabled by default for your Splunk Cloud environment with a 1 MB size limit on the maximum content length. You are responsible for setup, enablement, transformation, and sending data to your Splunk Cloud environment via HEC. You are also responsible for monitoring and remediation of any HEC error codes that are received from Splunk Cloud to ensure no interruption of your data ingestion. For more information, see Use the HTTP Event Collector in the Getting Data In manual.
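For example, here is a minimal client sketch that posts a JSON event to HEC over HTTPS; the endpoint URL and token below are placeholders for the values of your own Splunk Cloud deployment.

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class HecSender
{
    static async Task Main()
    {
        // Placeholders: substitute your Splunk Cloud HEC endpoint and token.
        string endpoint = "https://http-inputs-example.splunkcloud.com/services/collector/event";
        string token = "YOUR-HEC-TOKEN";

        using (HttpClient client = new HttpClient())
        {
            // HEC uses token-based authentication via the Authorization header.
            client.DefaultRequestHeaders.Add("Authorization", "Splunk " + token);

            // A simple event payload; "event" is the only required field.
            string payload = "{\"event\": \"hello from HEC\", \"sourcetype\": \"manual\"}";
            HttpResponseMessage response = await client.PostAsync(
                endpoint, new StringContent(payload, Encoding.UTF8, "application/json"));

            Console.WriteLine("HEC response: {0}", response.StatusCode);
        }
    }
}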
Using AWS Kinesis Data Firehose: AWS Kinesis Data Firehose is a fully managed, scalable, and serverless option for streaming data from various AWS services directly into Splunk Cloud. Setup, enablement, transformation, and sending data to your Splunk Cloud environment via Kinesis Data Firehose is your responsibility.
Encryption in transit: For security, data in transit is TLS 1.2+ encrypted. Senders and receivers authorize each other, and HTTP-based data collection is secured using token-based authentication.
IP Whitelisting: You can request to restrict data collection from only whitelisted IP addresses by filing a support ticket.
Ingestion
The amount of data that you can collect daily is determined by the Splunk Cloud subscription that you purchase, and you can always choose a higher-level subscription to increase the amount of data that you can collect. You can see current and past daily data ingestion information using the Cloud Monitoring Console (CMC) app that is included with your Splunk Cloud environment. If you consistently exceed your subscription entitlement, contact Splunk Sales to purchase an appropriate plan to handle your volume.
During ingestion, Splunk Cloud indexes incoming data so you can search it. During indexing, data is partitioned into logical indexes, which you can configure to facilitate searching and control users' access to data.
For details about limits on data collection, see Splunk Cloud data policies in the Splunk Cloud User Manual.
For best practices for creating indexes, see Manage Splunk Cloud indexes in the Splunk Cloud User Manual.
For service limits relating to indexes, see Splunk Cloud service limits and constraints.
Storage
Storage space in your Splunk Cloud service is based on the volume of uncompressed data that you want to index on a daily basis. Your Splunk Cloud environment comes with sufficient storage to allow you to store up to 90 days of your uncompressed data. For example, if your daily volume of uncompressed data is 100 GB, your Splunk Cloud environment will have 9000 GB (9 TB) of storage. You can optionally purchase additional storage for your Splunk Cloud environment in 500 GB increments. In addition, you can choose to have your data encrypted at rest using AES 256-bit encryption for an additional charge. If you choose encryption at rest, Splunk manages the keys on your behalf.
When you send data to Splunk Cloud, it is stored in indexes, and you can self-manage your Splunk Cloud index settings using the Indexes page in Splunk Web. Splunk Cloud allows you to specify the maximum age of events in the index (specified in the Retention (days) field) on the Indexes page, which it uses to determine when to delete data. When the index reaches the specified maximum age, the oldest data is deleted.
When the index reaches the specified maximum size or events reach the specified maximum age, the oldest data is deleted. If you require a lower cost option for storage of data beyond 90 days, you can optionally augment Splunk Cloud with Dynamic Data Active Archive (DDAA). As data ages from searchable storage based on your index retention setting, the aged data is automatically moved to DDAA before deletion. Data remains in DDAA until the DDAA retention setting that you specify expires. Your DDAA subscription also entitles you to restore up to 10% of your archive subscription per restore. Note that multiple restores that overlap within a 30 day period will accrue against your restore entitlement.
If you enable Dynamic Data Self-Storage to export your aged ingested data, the oldest data is moved to your Amazon S3 account in the same region as your Splunk Cloud before it is deleted from the index. You are responsible for AWS payments for your use of Amazon S3. When data is deleted from the index, it is no longer searchable by Splunk Cloud.
For more information about export of your aged ingested data, see Store expired Splunk Cloud data.
For more information about archiving your aged ingested data, see Archive expired Splunk Cloud data.
You can review your storage consumption in the Cloud Monitoring Console app included in your Splunk Cloud environment. The app provides information such as the amount of data stored and the number of days of retention for each index.
For more information about managing indexes, see Manage Splunk Cloud indexes in the Splunk Cloud User Manual.
For more information about the Cloud Monitoring Console, see Monitor Splunk Cloud deployment health in the Splunk Cloud User Manual.
Search
Splunk Cloud lets you search your ingested data and present the results in the form of visualizations, reports, and alerts.
If you enable Dynamic Data Self-Storage to export your aged ingested data prior to deletion, any data moved from these indexes to your AWS S3 account will no longer be searchable by Splunk Cloud. If you augment Splunk Cloud with Dynamic Data Active Archive (DDAA), restored DDAA data is searchable within 24 hours of it being restored and is searchable for up to 30 days.
To examine data in Splunk Cloud and your on-premises deployment of Splunk Enterprise in a single search, you can configure a Splunk Enterprise search head to connect to a Splunk Cloud indexer cluster. This configuration is called hybrid search. The following conditions and limitations apply to hybrid search:
For more information about hybrid search, see Configure hybrid search in the Splunk Cloud User Manual.
In Splunk Cloud, you can monitor search activity using the Cloud Monitoring Console (CMC) app included in your environment. CMC shows information such as long running searches, skipped scheduled searches, and average search run time.
Splunk Cloud has service limits related to search, such as the maximum number of concurrent searches. This service limit and others are listed in the Splunk Cloud service limits and constraints section.
Splunkbase and private apps
Apps and Add-Ons (apps) include features and functionality ranging from the simplification of data ingest to unique and valuable visualizations. To ensure security and minimize effects on performance, only vetted and compatible apps can run on Splunk Cloud. Note the following:
- Splunkbase is the system of record for app vetting and compatibility with Splunk Cloud.
- Splunk provides support and maintenance for Splunk Supported Apps. In addition, Splunk Cloud ensures compatibility for any installed Splunk Supported Apps before commencing Splunk Cloud upgrades.
- Splunk does not provide support or maintenance for apps published by any third-party developers. For any Developer Supported or Not Supported Apps, you need to ensure compatibility with Splunk Cloud.
- Compatibility of Developer Supported or Not Supported Apps is asserted by the developers of those apps. Splunk does not perform compatibility testing of third-party apps with specific versions of Splunk Cloud.
- Splunk support will not be able to assist in tailoring the Splunkbase apps to your use case. For apps that grant you the license to customize, you will need to perform the customization yourself or through a Splunk Professional Services engagement.
For more information, refer to the Splunkbase app support types here.
Apps that are Splunk Cloud vetted and compatible are listed in either the app browser in Splunk Web or through Splunkbase. Depending on the nature of the Splunkbase apps, you may be able to self-install because they have been marked so, or you may need to open a support ticket to install. When you require an app installed on the Inputs Data Manager, you need to open a support ticket and Splunk Support will install the app on your behalf.
Apps you create to support your business needs are called private apps and these apps can also be self-service installed on Splunk Cloud. During the private app installation, Splunk will automatically validate if your app is compatible with Splunk Cloud and allow the installation to complete. If the app is deemed incompatible with Splunk Cloud, you will receive an app vetting report that details areas in your app to remediate to make it compatible with Splunk Cloud and enable it to be self-service installable. Private apps that are developed wholly by you are owned by you and any customization of your private app is outside the scope of the Splunk Cloud subscription.
For more information about Apps, see the following topics in the Splunk Cloud User Manual:
- Install apps in your Splunk Cloud deployment
- Manage private apps in your Splunk Cloud deployment
- Manage a rolling restart in Splunk Cloud
Splunk premium solutions
You can optionally purchase Splunk apps and premium solutions (premium solutions) subscriptions on Splunk Cloud. As part of the subscription, the Splunk Cloud environment is enhanced to support the premium solution. Splunk will install the premium solution on your behalf and will also upgrade the premium solution when a new version is vetted for Splunk Cloud. Multiple premium solution subscriptions can run concurrently on the same Splunk Cloud environment. The following premium solution subscriptions are available:
- Splunk Enterprise Security (ES)
- Splunk IT Service Intelligence (ITSI)
- Splunk App for Microsoft Exchange
- Splunk App for PCI Compliance
- Splunk App for VMware
- Splunk Business Flow
Machine Learning Tool Kit (MLTK) is compatible with Splunk Cloud. In addition, Splunk recommends adding the ML-SPL Performance App for the Machine Learning Toolkit to ensure you know the resource utilization impact of MLTK. These steps ensure the MLTK best practices are implemented on Splunk Cloud.
The following premium solutions are compatible with Splunk Cloud but no subscription is available on Splunk Cloud. Installation and configuration of these premium solutions can be done by you or through a Splunk Professional Services engagement. Splunk support will not be able to assist with installation and configuration of the following premium solutions. For more information on these Splunk premium solutions, contact your Splunk sales representative.
- Phantom
- User Behavior Analytics
- VictorOps
Network connectivity and data transfer
You access your Splunk Cloud environment via public endpoints. By default, for both Splunk Web access and sending your data, traffic from your network is encrypted, sent over the public Internet and then routed to your Splunk Cloud environment in an AWS Virtual Private Cloud (VPC). These endpoints are protected using Security Groups and customers can also specify additional access control rules. See the Splunk Cloud service limits and constraints section for the maximum number of customer defined rules.
You can request to restrict data access from only whitelisted IP addresses by filing a support ticket. For any regulated Splunk Cloud environments such as HIPAA and PCI, you must specify at least one address for the IP whitelist.
In addition, forwarders and HTTP Event Collectors compress data when sending over TLS protocol. The amount of compression varies based on the content. For bandwidth planning, assume a compression ratio between 1:8 and 1:12.
If you are using AWS services such as Direct Connect and Kinesis Data Firehose, note the following:
- If you choose to use private connectivity services such as AWS Direct Connect to access Splunk Cloud, you are responsible for setup and configuration of AWS Direct Connect plus any associated fees.
- If you choose to use the Kinesis Data Firehose service for data ingestion, you are responsible for any setup and configuration of AWS Kinesis Data Firehose plus any AWS fees. For more information see Install and configure the Splunk Add-on for Amazon Kinesis Firehose on a managed Splunk Cloud deployment in the Splunk Add-on for Amazon Kinesis Firehose manual.
- If you enable Dynamic Data Self-Storage to export your aged ingested data to your Amazon S3 account in the same region as your Splunk Cloud, you are responsible for any setup, configuration and AWS payments. For more information, refer to the Splunk Cloud User Manual.
- AWS Direct Connect or Kinesis Data Firehose may not be available in all Splunk Cloud regions.
Users and authentication
To control what Splunk Cloud users can do, you assign them roles that have a defined set of specific capabilities, access to indexes, and resource use limits.
Roles give Splunk Cloud users access to features in the service, and permission to perform tasks and searches. Each user account is assigned one or more roles. In addition, your Splunk Cloud environment comes with predefined system roles and system users that are used by Splunk to perform essential monitoring and maintenance activities. You should not delete or modify these system users or roles. For the customer's administrator users, Splunk Cloud provides the sc_admin role, which has the capabilities required to administer Splunk Cloud. You can use the Splunk Cloud sc_admin role for your administrator to perform self-service tasks such as installing apps, creating and managing indexes, and managing users and their passwords. Splunk Cloud does not support direct access to infrastructure, so you do not have command-line access to Splunk Cloud. This means that any supported task that requires command-line access is performed by Splunk on your behalf.
You can configure your user accounts to be authenticated using Lightweight Directory Access Protocol (LDAP) and Active Directory (AD). You can also configure Splunk Cloud to use SAML authentication for single sign-on (SSO). In order to use multifactor authentication for your Splunk Cloud user accounts, you must use a SAML v2 identity provider that supports multifactor authentication. While Splunk Enterprise has built-in support for multifactor authentication such as Duo and RSA, Splunk Cloud does not support these methods of integration.
For more information on User and Roles, see Manage Splunk Cloud users and roles in the Splunk Cloud User Manual.
For more information on Single Sign On, see Configure SAML single sign-on (SSO) to Splunk Cloud in the Splunk Cloud User Manual.
Differences between Splunk Cloud and Splunk Enterprise
Splunk Cloud delivers the benefits of Splunk Enterprise as a cloud-based service. Customers who are familiar with Splunk Enterprise architecture should not make assumptions about the architecture or operational aspects of Splunk software deployed in the Splunk Cloud service. Specifically, Splunk Cloud differs from Splunk Enterprise in the following ways:
Service level
Splunk provides an uptime SLA for Splunk Cloud and will use commercially reasonable efforts to make the Services available. You will receive service credits in the event of SLA failures, as set forth in our current SLA schedule. As Splunk Cloud is offered uniformly across all customers, the SLA cannot be modified on a customer by customer basis.
Splunk Cloud is considered available if you are able to log into your Splunk Cloud Service account and initiate a search using Splunk Software. Splunk continuously monitors the status of each Splunk Cloud environment to ensure the SLA. In addition, Splunk Cloud monitors several additional health and performance variables, including but not limited to the following:
- Ability to log into Splunk Cloud environments. Splunk leverages system users or roles to perform essential monitoring and maintenance activities in managed Splunk Cloud environments. Customers are advised to not delete or edit system users or roles because they are essential to perform monitoring and maintenance activities in managed Splunk Cloud environments.
Splunk Cloud supports scripted and modular inputs either via the Inputs Data Manager (IDM) that is included in your Splunk Cloud subscription or via heavy forwarders that you manage and maintain. Either choice maintains your SLA since no data ingestion is performed on the search tier.
For more information about Splunk Cloud system users, see Manage Splunk Cloud users and roles in the Splunk Cloud User Manual.
For more information about the SLA for Splunk Cloud, see the Splunk Cloud Service Level Schedule.
Maintenance
Splunk Cloud delivers the benefits of award-winning Splunk® Enterprise as a cloud-based service. Splunk manages and updates the Splunk Cloud service uniformly, so all customers of Splunk Cloud receive the most current features and functionality. Ensure Operational Contacts listed in your Splunk.com support portal are regularly updated. Operational Contacts are notified when your Splunk Cloud environment undergoes maintenance, requires configuration awareness, or experiences a performance-impacting event. These contacts will receive regular notifications of planned and unplanned downtime, including scheduled maintenance window alerts and email updates related to incident-triggered cases.
What Splunk does on your behalf:
- Gets you started: When you first subscribe to Splunk Cloud, Splunk sends you a welcome email containing the information required for you to access your Splunk Cloud deployment and get started. This email contains a lot of important details, so keep it handy.
- Assists you with supported tasks: Splunk Cloud enables you to customize user, index and app management via Splunk Web. However, there are features in Splunk Cloud that require assistance from Splunk to activate or make changes to your configurations, such as real-time search and enabling AWS Kinesis Data Firehose data to be received. When you file a support ticket, Splunk will enable such features on your behalf.
- Upgrades and expands your Splunk Cloud: Splunk Cloud adopts the release that has the most benefits for you as quickly as possible. As Splunk releases new versions of Splunk Cloud and Premium Apps, you will be notified by Splunk to schedule the maintenance window. Splunk performs rolling restarts to limit disruption of Search and Indexing during maintenance windows. In certain maintenance situations, data egress of Dynamic Data Self-Storage will be paused. In addition, we will enhance Splunk Cloud on your behalf, such as increasing the amount of your daily ingestion, adding storage, enabling Premium App subscriptions and Encryption at Rest.
- Ensures Splunk Cloud uptime and security: Splunk continuously monitors the status of your Splunk Cloud environment to ensure uptime and availability. We look at various health and performance variables such as the ability to log in, ingest data, access Splunk Web and perform searches. In addition, Splunk Cloud also keeps backups of your ingested data and configurations to ensure data durability. Splunk also employs system user roles with limited privileges to perform tasks on your cloud. If encryption at rest is enabled, we manage your encryption keys.
What you can self-service:
- Customize your Splunk Cloud: Splunk Cloud offers multiple options to ingest your data, so it is your responsibility to ensure the correct data collection method is used for your data sources. For detailed instructions for sending data to your Splunk Cloud deployment, refer to the Getting Data In manual. In addition, Splunk provides a variety of self-service tools to allow you to customize your Splunk Cloud environment, such as user, index and app management. For more information, refer to the Splunk Cloud User Manual.
- Monitor your Splunk Cloud health and usage: You can use the Cloud Monitoring Console (CMC) to holistically monitor the data consumption and health of your Splunk Cloud environment. Your license limits the amount of data per day that you can send to your Splunk Cloud deployment. CMC is designed to help you manage your usage of the service, while all other monitoring is done by Splunk.
Technical support
Splunk Standard Support is included in every Splunk Cloud subscription. For more information regarding Splunk Cloud support terms and program options, refer to the Splunk Cloud Support Terms (linked under More information below). You should also note the following:
- Splunk Cloud offers multiple options to ingest your data so it is your responsibility to ensure the correct data collection method is configured for your data sources.
- Splunk Cloud enables you to perform user, index and app management via Splunk Web. Any customization of Splunk Cloud vetted and compatible apps is also your responsibility.
- In order to use multifactor authentication for your Splunk Cloud user accounts, you must use a SAML v2 identity provider that supports multifactor authentication. It is your responsibility to configure and maintain that identity provider.
- There are features in Splunk Cloud that require assistance from Splunk to activate or change your configuration, such as real-time search and enabling AWS Kinesis Data Firehose data to be received. When you file a support ticket, Splunk will enable such features on your behalf.
For more information regarding Admin on Demand Services, refer to the Admin On Demand data sheet and catalog.
For more information regarding data collection, refer to Getting Data In.
For more information regarding performing user, index and app management, refer to the Splunk Cloud User Manual.
Security
The security and privacy of your data is of the utmost importance to you and your organization, and Splunk makes this a top priority. Splunk Cloud service is designed and delivered using key security controls such as:
Instance Security: Every Splunk Cloud subscription is deployed in its own environment, isolated from other customers of the service.
Data Encryption: All data in transit to and from Splunk Cloud is TLS 1.2+ encrypted. To encrypt data at rest, you can purchase AES 256-bit encryption for an additional charge. Keys are rotated regularly and monitored continuously.
User Authentication and Access: You can configure authentication using Lightweight Directory Access Protocol (LDAP), Active Directory (AD), and single sign-on using any SAML v2 identity provider. To control what your Splunk Cloud users can do, you assign them roles that have a defined set of specific capabilities.
Data Handling: You can store your data in one of the following Amazon Web Services (AWS) regions:
- US (Oregon, Virginia, GovCloud)
- EU (Dublin, Frankfurt, London)
- Asia Pacific (Singapore, Sydney, Tokyo)
- Canada (Central)
Data is kept in the region you choose. If you need to store your data in more than one region, you can purchase multiple subscriptions. Data is retained in Splunk Cloud according to the volumes, durations, and index configurations you set. Expired data is deleted based on your pre-determined schedule.
For the purposes of disaster recovery, your configuration and recently-ingested data is backed up on a rolling seven-day window. If you unsubscribe, you can take your data with you to an alternative storage container, such as AWS S3 bucket, prior to deletion. Depending on the amount of data and the work involved, we may charge for this service. For more information on Splunk Cloud data management, please review the documentation at Splunk Cloud data policies and Manage Splunk Cloud indexes in the Splunk Cloud User Manual.
Security Controls and Background Screening: Splunk security controls are described in our most recent Service Organization Control II, Type II Report (SOC 2/Type 2 Report). Splunk conducts criminal background checks on its employees prior to hire, as permitted by law.
App Security: All Splunk apps hosted on Splunk Cloud are vetted for security and cloud readiness. For more information about cloud readiness, see the Splunk Developer web page.
For more information about Splunk Data Privacy, Security and Compliance, see Splunk Protects.
Subscription expansions, renewals and terminations
You can expand aspects of your Splunk Cloud subscription anytime during the term of the subscription to meet your business needs. You can:
- increase the amount of your daily ingestion
- increase the amount of storage in 500GB increments
- add Premium App subscriptions
- add Encryption at Rest capabilities
You will receive renewal notifications starting 60 days prior to the end date of your current subscription term. For more information on subscription renewals, contact your Splunk sales representative. If your Splunk Cloud subscription expires, it is considered terminated. Before your subscription terminates, you can retain your ingested data by requesting an export or by enabling Dynamic Data Self-Storage to export your aged data to your Amazon S3 account in the same region. Note that Dynamic Data Self-Storage does not export your configuration data. If you choose to use Dynamic Data Self-Storage to export your aged ingested data, you must do so prior to termination of your subscription. You are responsible for AWS charges you incur for your use of Amazon S3.
Compliance and certifications
Splunk has attained a number of compliance attestations and certifications to provide customers with independent third-party validation of our efforts to safeguard customer data, and has contracted with industry-leading auditors as part of our commitment to adhere to industry standards worldwide. The following compliance attestations/certifications are available:
- SOC 2 Type II: Splunk Cloud security controls are described in our most recent Service Organization Control II, Type II report.
- ISO/IEC 27001:2013: Splunk Cloud is ISO/IEC 27001:2013-certified. ISO/IEC 27001:2013 is a standard for an information security management system, specifying the policies and procedures for all legal, physical, and technical controls used by an organization to minimize risk to information. Splunk's ISO certification can be found here.
If your data must be maintained in a regulated cloud environment to assist you with meeting your compliance needs, Splunk Cloud provides these optional subscriptions.
- GovCloud: AWS GovCloud (US) is an AWS Region designed to address specific regulatory and compliance requirements for U.S. Government agencies. GovCloud is an isolated AWS Region that is specifically designed to allow US government agencies at the federal, state, and local level, as well as contractors, educational institutions, and other US customers to run sensitive workloads in the cloud.
- Health Insurance Portability and Accountability Act (HIPAA): Splunk Cloud (HIPAA) is compliant with the HIPAA Security Rule and HITECH Breach Notification Requirements. These regulations establish a standard for the security of any entity that access, processes, transmits, or stores electronic protected health information (ePHI).
- Payment Card Industry Data Security Standard (PCI DSS): Splunk tests Splunk Cloud for compliance with the PCI DSS v3.2 standard. This standard applies to any entity that processes, transmits, or stores payment card data as well as their critical service providers.
More information for regulated cloud environments is listed below.
Service limits and constraints
The following are Splunk Cloud service limits and constraints. You can use this list as guidance to ensure the best Splunk Cloud experience. Keep in mind that some limits depend on configuration, system load, performance, and available resources. Contact Splunk if your requirements are different or exceed what is recommended in this table.
Splunk Cloud service limits and constraints
Enterprise Security service limits and constraints
Business Flow service limits and constraints
Supported Forwarder Versions
The following are the supported forwarder versions for Splunk Cloud. This information is applicable to universal and heavy forwarders that are communicating directly to Splunk Cloud. If you have deployed an intermediate forwarder tier communicating directly to Splunk Cloud, the following information applies to the forwarders in the intermediate tier instead of the forwarders indirectly connected.
Current release
Splunk determines which versions of Splunk Cloud and Premium Apps to make available to Splunk Cloud subscribers. Splunk adopts the release that has the most benefits for customers as quickly as possible. The following are current versions for Splunk Cloud and Premium App subscriptions, as of August 2019.
More information
The following links provide information about the terms and policies that pertain to the Splunk Cloud service:
- Legal: Terms of Service
- Level of service: Splunk Cloud Service Level Schedule
- Technical support: Splunk Cloud Support Terms
- Maintenance: Service Maintenance Policy
- Handling of data: Splunk Cloud Data Policies
- Splunk Data Privacy, Security and Compliance: Splunk Protects
This documentation applies to the following versions of Splunk Cloud™: 7.2.4, 7.2.6
- Setup Scholantis SIS Data Sync for MyEd BC
- Setup Scholantis SIS Data Sync for MapleWood
- Setup Scholantis SIS Data Sync for PowerSchool
- Send Feedback - What information is sent to Scholantis?
Releases & Updates
Primarily aimed at IT administrators, read more about the many things we've added to enhance your website or portal.
Edit Hole Note
Modifies formatting and note text for a hole note. Overrides settings specified by the hole note style for the current document.
Access:
Right-click a hole note and select Edit Hole Note.
Hole type
Displays the hole type.
Edit Field
Edits the content of the note. Click a button in the Values and Symbols section to add the corresponding property to the note text. Click the arrow next to the Insert Symbol button to select and insert a symbol.
You can use a combination of text, inserted symbols, and variables to configure the annotation text of the note. For example:
C <DIST1> X <DIST2>
would result in C 10 X 10
Options
Use Default sets hole note to the default format.
Tap Drill selects the Tap Drill hole note type.
Part Units sets the note to use the measurement units of the model. Check the box to use model units. Remove the check mark to use the measurement units specified by the dimension style.
Precision/Tolerance Settings opens the Precision and Tolerance dialog box so you can add tolerance information to values included in the note, or override the default precision settings.
Edit Quantity Note opens the Quantity Note dialog box. Allows custom configuration of the quantity note display in the context of a hole note (represented by the Quantity Note symbol).
Values and Symbols
Adds values and symbols to the Edit Field. Click a value or symbol to add it to the hole note. To remove, place the cursor after the value or symbol in the Edit Window and backspace.
Hole diameter
Hole depth
Counterbore/Spotface diameter
Counterbore/Spotface depth
Countersink hole diameter
Countersink hole angle
Countersink depth is calculated from model hole parameters. It is not a part modeling parameter.
Quantity note adds text to indicate quantity. Not displayed if the hole note is used inside a hole table or if the quantity is less than two.
Selects a symbol from the list to add to the hole note.
Thread designation does not display if the thread has no thread information in part modeling.
Custom thread designation
Thread pitch
Thread class
Thread depth
Tap drill diameter
Fastener type indicates the fastener type used to define hole parameters in part modeling. Does not display if the feature has no fastener information.
Fastener size indicates the fastener size used to define hole parameters in part modeling. Does not display if the feature has no fastener information.
Fastener fit represents the fastener fit class used to define hole parameters in part modeling. Does not display if the feature has no fastener information.
If you're a space or system administrator, use this guide to learn about granting permissions to people for access to content and administrative features.
You set permissions while managing spaces, sub-spaces, and blogs. Most permissions are scoped to the level of the root space (where you set global permissions) or sub-spaces. Sub-spaces inherit permissions from a containing space, but you can augment or revoke inherited permissions at the sub-space level.
Note: You don't manage permissions for social groups in the same way you do for spaces. Permissions changes you make in the admin console have no effect on content permissions for groups. Where a group allows people to view or create content, they have that permission for all content that the group supports.
Your ability to manage permissions in a way that best suits how people use Clearspace is one of the application's most important features. Clearspace uses your permissions settings everywhere — whenever someone does anything with spaces or content. Spaces themselves are designed to support the idea that you'll want to give particular groups of people particular kinds of access to a space's features. A space is a kind of sandbox that you can manage access to with permissions.
For example, a system administrator could give someone permission to read documents across the entire application. Then the system or space administrator could give that person permission to create new documents only in a particular sub-space (maybe one that corresponds to the department they work in). In another example, in a Marketing Department space everyone might have permissions that allow them to view and post content, but only those product managers with the appropriate permissions will be able to create and approve content — such as brochures and pricing information — that can be accessed by the sales organization.
Note that project permissions are inherited from the space that contains the project. You don't separately manage permissions for a project.
A user type represents a level of knowledge or trust about a person. You probably feel you have more knowledge or trust about someone who has registered to use Clearspace than you do about someone who is using the application anonymously. User types provide a convenient, built-in way to manage a person's access to application features.
Clearspace includes two default user types that you can't delete: Anyone and Registered Users. Once a user registers, you can assign them permissions as a User or as part of a Group. This results in four categories of Clearspace users who can receive global permissions.
Note: Revoking a permission for Anyone — that is, putting a red X in the permission box — revokes that permission for registered users also. That's true whether or not you explicitly put a green check mark in the permission box for registered users.
If your Clearspace instance is using the document sharing service, you might have guest users. You can't configure permissions for guest users. Guest users are those who have access to the service, but not to your Clearspace instance itself. If they've been invited by someone using your Clearspace instance, their names will show up in the admin console's user summary, but their accounts aren't included in your permissions work.
A permission level represents access to a particular set of Clearspace features. Permission levels fall into two kinds: those for administrators and moderators that capture access to a set of features and those for end users that capture access to individual features.
The following briefly describes each of the permission levels. Also, see Feature and UI Access By Permission Type for tables that shows who has access to what.
Essentially, a system admin can do anything they want to. They have full access to every Clearspace feature at every space level. Generally speaking, though, a good best practice is to delegate lower-level administrative and moderation access to other people. For example, a person who uses a particular sub-space regularly is probably a better person to act as that space's administrator, or to moderate content in the space. Delegating frees the system administrator to focus on system issues, rather than on space or content issues.
For a guide to things only a system administrator can do, see the System Administrators' Guide.
A space admin has access to administrative features for the space they've been assigned to administer, along with any sub-spaces beneath it (although their space admin access can be revoked in each of those sub-spaces). A space administrator can create sub-spaces, set content defaults, and set permissions for the space. They can see content that is in a moderator's queue but hasn't been approved yet. They can even designate other space administrators.
You'll find a detailed description of what a space admin can do in Managing Spaces.
User and group admins can create and edit user and group accounts. These are global permission levels that only a system administrator can grant. A user admin can create and edit user accounts, while a group admin can create and edit group accounts. Neither of these permission level grants a person the ability to set permissions — only to manage account information.
Note that if your Clearspace instance is using LDAP or some other data source that is not writable, having separate user and group admins might not be necessary.
Managing Users and Groups provides more information about what user and group admin can do.
Granting moderator permission gives someone two kinds of abilities.
Where you set content moderator permission depends on which content you want moderated. For all, though, you grant content moderator permission in the admin console at Spaces > Permissions > Admins & Moderators.
Note: Access to moderation queues isn't inherited to sub-spaces.
But you'll choose a different scope by selecting the change space link on the Admins & Moderators page.
Note that as a failsafe to ensure that moderated requests always have a place to go, new requests are routed in the following order:
This applies to new requests only. For example, if a request is in the queue when moderators are removed, the requests will remain in the queue until someone approves or rejects them there. Existing requests won't be routed to the next queue up. If there's only one moderator and that person is deleted from the system, then requests currently in the queue will be orphaned even after a new moderator is assigned to that area. If moderation permissions are merely revoked (or un-granted) for someone, then they'll still have access to the requests currently in the queue but won't be able to approve or reject them.
Keep in mind that in order to have moderators approve and reject content in a moderation queue, moderation will need to be enabled for specific content types in the console at Spaces > Settings > Moderation Settings. For more on these, see Managing Spaces. For a detailed look at what content moderators do, see Moderating Content.
For non-administrative people, you can assign fine-grained access to Clearspace features. Features set a the root space level are inherited by sub-spaces.
Note: Space permissions have no effect on permissions in effect in social groups. You can't set these kinds of permissions for social groups.
Permissions you set in a space are inherited by the sub-spaces inside it unless you revoke or grant permissions in those sub-spaces.
Global Space Permissions. A system administrator sets global permissions — whether for all users, specific users, or groups — by setting permissions in the root space. Those permissions are, by default, inherited in all spaces beneath. That applies to both end user permissions — through which people read and create content, for example — and administrative access — such as space administrators, content moderators, and so on.
Sub-Space Permissions. Sub-spaces inherit the permissions of the space that contains them, but a system or space administrator can grant new permissions or revoke permissions according to what's best for the space, effectively overriding inherited permissions.
Note: Permissions are inherited, but other containment-related features are not. For example, searches in a particular space do not return content in its sub-spaces. Also note that access to moderation queues is not inherited; a moderator approves and rejects content only in the area they're assigned to.
When someone accesses content, Clearspace checks permissions as follows: it first applies any permissions that have been explicitly granted or revoked in the space containing the content; where nothing is set explicitly, it falls back to the permissions inherited from the containing space, continuing up the hierarchy to the root space.
The following uses snapshots of the admin console permissions pages to show how permission settings are inherited in a space hierarchy. The space hierarchy illustrated here is Root Space > Second-Level Subspace > Third-Level Subspace.
Note: Revoking a permission for Anyone — that is, putting a red X in the permission box — revokes that permission for registered users also. That's true whether or not you explicitly put a green check mark in the permission box for registered users.
You grant permissions — that is, give access to particular people or groups — with the Grant New Permission tab. This works in basically the same way whether you're granting administrative access or access to space features. In both cases, you select the permission you want to grant, enter the name for a user or group account receiving the access, then click Grant New Permission. To grant permissions, in the admin console go to Spaces > Permissions, then select the space you want to grant permission for. Click Admin & Moderator or Space Permissions to reach the page that describes permissions already set for the space.
The following shows the Grant New Permission tab for space features.
People with certain kinds of permissions have certain access to Clearspace features. These features include those in the admin console and those in the end user interface. This lists user interface features and shows which permission level has access to which feature.
The admin console is available to system admins, space admins, and user and group admins. In some cases, features available in the console are also available in the Clearspace end user interface. The following table lists the pages of the admin console, indicating who has access to the page: system admin, space/community admin, user admin, or group admin. Page names are linked to related documentation.
The following sections describe how Clearspace provides access to end user features based on permission levels. Needless to say, whether a given feature — such as the ability to edit a thread — is available to anyone will depend on whether the feature has been enabled for the space's users. These sections focus on typical defaults and illustrate in particular how access is different for administrators and moderators.
Note: Space permissions aren't in effect in social groups.
The following table lists commands for discussions. It shows which commands are available depending on a person's permission level. Some of these are in the Actions list that's displayed when you're viewing the discussion, while others are visible at the bottom of a reply.
The following table lists commands for documents. It shows which commands are available depending on a person's permission level. All of these are in the Actions list that's displayed when you're viewing the document.
A person's access to space-level features is determined by their permissions level. These features are typically available when a person is looking at the space's All Content tab. In addition, a system or space admin has access to the customize link on the Overview tab, through which they can customize the layout of the Overview tab.
When you create a new space Clearspace prompts you to select a default access scheme. Each of these -- inherited, open, restricted, and private -- is a kind of security template that's made up of particular permissions settings. After you've created the space, you can edit permissions however you like, of course; the access schemes are really just to get you started quickly. The following illustrates show what you get for each.
In the inherited scheme, the newly created sub-space inherits permissions from its immediate parent space. By default, as shown here, the green-tinted permissions are granted, while those without a green tint (but with cleared boxes) are not granted.
In the open scheme, access is open for registered users but several of the permissions are explicitly revoked for anonymous users. In particular, these users can see the space, documents, and comments, but they can't contribute by creating content, voting in polls, and so on.
Notice that the ability to create blog posts is revoked even for registered users. That's by design — typically, a blog's role is to convey the thoughts of a particular user or team. Revoking permission here gives you the ability to grant it as needed.
The restricted access scheme is designed to exclude anonymous users but provide access for registered users.
The private scheme revokes permission for everyone except the system administrator. This scheme is useful if you want to create a space that's unavailable to everyone, with the idea that you're going to explicitly allow access only to certain users or groups. After you create the space, you can grant permissions to those who'll be using it.
The following lists a few common scenarios describing things you might want to do when managing permissions, along with how to make changes in the admin console to support those scenarios.
Clear all permission check boxes for the root space, then set permissions in subspaces as needed. Clearing the boxes sets root permissions to their unset state; at the root, that state is "not permitted." In other words, you don't need to first explicitly "revoke" permissions at the root level (with red X marks), then selectively grant them. (You can easily clear all of the boxes in the Anyone or Registered Users rows by clicking the Remove icon at the far right of the row.)
While elsewhere a cleared box means "inherit from the containing space," at the root there's nothing to inherit from. That means that none of your users will be able to use Clearspace (reading or adding content) until you start granting permission by selecting check boxes with check marks. Again, remember that in every space beneath the root, a cleared check box means that permissions are inherited.
In the following summary of the root space permissions, only three actions are allowed: viewing the space, reading documents and reading comments. Unless you grant further permission here or in communities contained by the root, users — registered or anonymous — will be able to do only those three things.
The following summary of R&D, a space under the root, shows that permission to view the space, read its documents, and read its comments is granted because it is inherited from the root space. No other actions are allowed.
At the root space, in the Anyone row, select the "read"-related permissions (View Space, Read Document, Read Comment) with green check marks. For each of the other actions you want to allow, view the permission summary for the space, then select the check box with a green check mark for Registered Users. (When you grant permission for actions for Anyone users, you don't need to explicitly grant them for Registered Users. Registered Users can do whatever you allow for Anyone users unless you explicitly revoke the permission for Registered Users.)
For example, the following illustration shows the permission summary for R&D, a space contained by the root space. The green tint for the View Space, Read Document and Read Comment actions shows that those actions are allowed for all users because their permissions are inherited from a containing space (in this case, the root) where they're explicitly allowed. Cleared check boxes for the other actions indicate that permissions for those actions are also inherited, but the actions aren't allowed because permission for them hasn't been granted. Check boxes with green check marks indicate that permission for those actions is granted in this space and inherited by its sub-communities.
Set permissions as needed for other communities, then revoke View Space permission for Anyone users for the space you want to hide. Finally, create a user group whose members are the users that will have permission to view the "hidden" space, then use a check mark in View Space to explicitly allow that user group to see the space.
The following shows the permission summary for HR, a space contained by the root space. Note that permission for the View Space action is revoked for Anyone users, but it's allowed for the hr_workers user group. This means that no one who isn't a member of hr_workers will be able to see the space. What's more, members of the hr_workers group will be able to do all of the things that registered users can otherwise do in other communities because hr_workers group members are registered users, too. Those allowed actions include everything shown with a green tint, where permission is inherited from the containing space.
In other words, there's no need to explicitly "turn on" permission for an action unless that action is otherwise not allowed. For example, if you wanted hr_workers members to be able to create polls in the HR space, you'd need to put a check mark in the Create Poll check box.
Set permissions as needed for other communities, then revoke permissions for the particular user.
In the following summary of permissions for the HR space, Steve is in the hr_workers user group, but his permission for several actions has been revoked. This summary indicates that he can see the HR space (View Space is explicitly checked for hr_workers), he can read comments and documents and rate documents (the green tint indicates "allowed" permission for those actions is inherited), but he can't perform the red-Xed actions (the X revokes permission).
There's no need to revoke his permission for Create Poll, Create Announcement, and Create Image because he never had it: permission for those actions is inherited as "not allowed" (there's no green tint that indicates the action is explicitly allowed in a containing space). Note that you might want to explicitly revoke the user's permission (even if it's currently unnecessary) to ensure they're revoked in the event that you later allow that action in a containing space.
Permissions behavior is different between spaces and some blogs. Permissions for blogs are split between blogs that aren't contained by a space, project, or group and those that are, such as system and personal blogs. Here's how it breaks down:
Note that you can't set "create blog" permission for blogs in spaces, projects, and groups. Instead, whether those contexts have a blog (in other words, whether one is created there) is based on settings someone makes when creating the space, project, or group. Likewise, you can't set "read blog" permission for a space, project, or group -- if there's a blog there, someone with access to that context can read its posts.
Here are a few scenarios that illustrate things you might want to do, and how you can set blog permissions to do them. Note that the settings under the Blog tab in the admin console affect top-level (system and personal) blogs only. You can manage settings for other kinds of blogs by going to that blog's Manage Blog page.
Set global permissions for blogs at the Blogs > Permissions tab in the admin console.
For a space or project blog, you grant the Create Blog Post permission when settings permissions for the space at Spaces > Permissions > Space Permissions.
For a system or personal blog, go to Blogs > Management > System Blogs and choose System Blogs or Personal Blogs. In the list of blogs, find the one you want to change and click the Edit icon. On the editing page, add the user as an author and click Update Blog.
Admin Console: Blogs > Settings > Global Permissions
Admin Console: Spaces > Permissions > Space Permissions (for the root space) | http://docs.jivesoftware.com/clearspace/2.5.6/ClearspacePermissions.html | 2012-05-24T04:48:12 | crawl-003 | crawl-003-007 | [] | docs.jivesoftware.com |
Caching in Clearspace usually refers to the back-end distributed memory cache technology, backed by Oracle Coherence. In addition, there is a page-level cache web filter, and some render-layer filtering, but this document does not address these.
There are 2 use cases for developers:
Coherence Cache Configuration Documentation
You want to cache objects in order to keep them in memory to avoid making a database read. The Coherence cache is distributed memory cache, which means any server in your cluster will be able to read it from memory. It's significantly faster to use the cache as opposed to a database call. Ideally you want to make all of your persistent database objects cached. If you don't read an object frequently it may not be necessary to make it cacheable. However, experience shows that data has a tendency to become used more frequently over time and most things end up needing to be cached in the end.
Yes, if you'd like to maintain server-side state across a cluster, a cache is the best place to do that. We use that to track Document edit sessions, form tokens, login presence, etc.
This can be a complex debugging process. Ideally, you'd step through the code in a debugger. You could also check for database calls; if they are being called repeatedly you can infer the object isn't being stored. You can access the contents of the cache through the Coherence mbean (see Oracle site) or Coherence command line tools.
You need to implement 2 interfaces: com.jivesoftware.community.cache.Cacheable, and optionally com.tangosol.io.ExternalizableLite. Next, you don't want to hold onto object references of other objects you don't explicitly own. E.g. you don't want to store a User object, you'd want to store the user id as a long. In general try to hold Strings and primitives.
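As a sketch, a cacheable class might look like the following. Two caveats: the class name and fields are made up for illustration, and the getCachedSize() method is an assumption carried over from earlier Jive releases (check the Cacheable interface in your version for the exact methods it requires). The ExternalizableLite methods use the standard Coherence signatures.

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import com.jivesoftware.community.cache.Cacheable;
import com.tangosol.io.ExternalizableLite;

// Holds only primitives and Strings -- for example the owner's user ID, never the User object.
public class WidgetInfo implements Cacheable, ExternalizableLite {

    private long widgetID;
    private long ownerUserID;
    private String name;

    // No-arg constructor required for deserialization.
    public WidgetInfo() {
    }

    public WidgetInfo(long widgetID, long ownerUserID, String name) {
        this.widgetID = widgetID;
        this.ownerUserID = ownerUserID;
        this.name = name;
    }

    // Copy constructor, handy for returning defensive copies from a manager (see below).
    public WidgetInfo(WidgetInfo other) {
        this(other.widgetID, other.ownerUserID, other.name);
    }

    public long getWidgetID() {
        return widgetID;
    }

    // Assumption: Cacheable asks for an approximate in-memory size, as in earlier Jive versions.
    public int getCachedSize() {
        return 16 + (name != null ? name.length() * 2 : 0);
    }

    // ExternalizableLite: serialize only primitives and Strings.
    public void writeExternal(DataOutput out) throws IOException {
        out.writeLong(widgetID);
        out.writeLong(ownerUserID);
        out.writeUTF(name != null ? name : "");
    }

    public void readExternal(DataInput in) throws IOException {
        widgetID = in.readLong();
        ownerUserID = in.readLong();
        name = in.readUTF();
    }
}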
Generally, when you are creating a new database persisted object type you'd want to create a new cache devoted to it. See the above answer about maintaining server state.
NOTE: you don't write any new code to create a new cache, Coherence takes care of that for you if you define the cache in these config files.
Cache implements the Map interface. The common idiom in backing a database call with a cache is to:
// Load: check the cache first, and fall back to the database on a miss.
public WidgetInfo loadObject(long objectId) {
    WidgetInfo myObject = (WidgetInfo) cache.get(objectId);
    if (myObject == null) {
        myObject = loadObjectFromDb(objectId);      // your DAO call
        cache.put(objectId, myObject);
    }
    return myObject;
}

// Save: write to the database, then update the cache in the same method.
public void saveObject(WidgetInfo myObject) {
    saveObjectToDb(myObject);                       // your DAO call
    cache.put(myObject.getWidgetID(), myObject);
}
Since Clearspace uses Spring to instantiate the managers, Spring also instantiates the Cache implementations and injects them into the managers. See the spring-cacheContext.xml file to see the Clearspace cache definitions.
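For illustration, the wiring follows the usual Spring pattern; the bean names and classes below are hypothetical (the real definitions live in spring-cacheContext.xml):

<!-- Hypothetical sketch: expose a cache as a bean and inject it into a manager. -->
<bean id="widgetInfoCache" class="com.example.cache.WidgetInfoCacheFactoryBean"/>

<bean id="widgetManager" class="com.example.WidgetManagerImpl">
    <!-- The manager codes against the Cache interface; Spring supplies the implementation. -->
    <property name="widgetInfoCache" ref="widgetInfoCache"/>
</bean>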
Yes. Through the cache-config.xml file you deploy with your plugin. See the plugin documentation.
Generally, you want to make defensive copies of objects you retrieve from the cache, i.e. cloning objects before returning them. If you don't, you run the risk of corrupting data in the cache, e.g. if a form fails validation and partially edits the data. Then the data in the cache can get out of sync with the database. Generally you want to write to the cache at the same time you're making the database call, most often in the same update method in the manager.
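For example, continuing the WidgetInfo sketch above (which defines a copy constructor for exactly this purpose):

// Return a copy so callers can't mutate the instance that lives in the cache.
public WidgetInfo getWidgetInfo(long widgetID) {
    WidgetInfo cached = (WidgetInfo) cache.get(widgetID);
    if (cached == null) {
        cached = loadWidgetInfoFromDb(widgetID);   // hypothetical DAO call
        cache.put(widgetID, cached);
    }
    return cached != null ? new WidgetInfo(cached) : null;
}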
In Clearspace the pattern is to manage the interactions between the cache and persistence concerns in the manager layer. The Manager implementations are where you should be putting cache access logic.
Functionally, there should be no differences. Behind the scenes, you're using different implementations; the single node implementation is optimized so that it doesn't make the network calls to maintain the distributed cache. | http://docs.jivesoftware.com/clearspace/2.5.6/CachingFrequentlyAskedQuestions.html | 2012-05-24T04:41:59 | crawl-003 | crawl-003-007 | [] | docs.jivesoftware.com |
Details
The lcd. object will detect a malfunction (or absence) of the controller/panel that is expected to be connected. If the display is not properly connected, or the lcd. object is not set up properly to work with this display, the lcd.error will be set to 1- YES on attempt to enable the display (set lcd.enabled= 1- YES).
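A minimal Tibbo BASIC sketch of this check (YES corresponds to the 1- YES value above; what you do when the error is detected is up to your application):

' Try to enable the display, then verify that the controller/panel responded.
lcd.enabled = YES
if lcd.error = YES then
    ' The display is not properly connected, or the lcd. object is not
    ' set up correctly for this controller/panel -- handle the fault here.
end if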
Index SNMP events with Splunk
The most effective way to index SNMP events is to use snmptrapd to write them to a FIFO.
First, configure snmptrapd to write to a FIFO rather than to a file on disk.
# mkfifo /var/run/snmp-fifo
# snmptrapd -o /var/run/snmp-fifo
Then, configure the Splunk Server to add the FIFO as a data input.
External Links
This documentation applies to the following versions of Splunk: 3.1.2 , 3.1.3 , 3.1.4 View the Article History for its revisions. | http://docs.splunk.com/Documentation/Splunk/3.1.2/Admin/IndexSNMPEventsWithSplunk | 2012-05-24T10:18:48 | crawl-003 | crawl-003-007 | [] | docs.splunk.com |
Serial Upgrade Completed
Description
Details
After the serial upgrade the DS is not able to reboot by itself. You need to power it off and back on again to test the newly uploaded firmware. Since the new firmware can have new or different settings that were not present in the previous one, the DS may require initialization (watch out for the error mode after the DS reboots).
Programmer's info:
--- | http://docs.tibbo.com/soism/dsm_msg_serial_upgr_completed.htm | 2012-05-24T03:14:33 | crawl-003 | crawl-003-007 | [] | docs.tibbo.com |
You can build plugins that enhance Clearspace with links to new functionality, additions to the admin console, and additions to the end user UI.
Action plugins incorporate Struts actions, in which a user gesture (such as clicking a link) directs processing to an action class whose code executes, then typically routes processing back to the user interface to display some result.
The simple plugin described here displays a link that, when clicked, displays a new page showing a greeting. There are simpler ways to do this, of course, but hopefully you'll see how much more you can do in your action class and how useful it is to separate that code from your user interface.
In the model-view-controller (MVC) architecture that Clearspace is based on, the idea is to separate code that manages logic around data (the model) from code that presents the data as user interface (the view) from code that controls interactions between the two (the controller).
Action plugins use the same model on a smaller scale. The following illustrates how it works in a plugin:
Here's a brief overview of what happens among the pieces.
The code used in this topic is a simple action that adds a menu item to the Browse menu of the user bar (that row of menus along the top of every Clearspace page). That "Say Hello" menu item will simply navigate to a new page that displays a simple greeting. Here are the pieces described:
Every plugin has one. This file tells Clearspace what's in the plugin — in short, what Clearspace features you're extending — and where to find code and other supporting files your plugin needs (although not necessarily all of them, as you'll see in the next section).
The url element specifies the URL that points to the action you'll create. This is your hook from your plugin's user interface into the action that provides its functionality.
<plugin xmlns:xsi=""
xsi:
<name>SimpleExamples</name>
<description>Simple macro and plugin examples.</description>
<author>Me</author>
<version>1.0.0</version>
<minServerVersion>1.3</minServerVersion>
<!-- The <components> element contains additions or customizations to
components in the user interface. Each <component> child element
represents a different UI piece, with its id attribute value
specifying which piece is being customized.
Here, you're customizing the user bar (the menu bar at the top of
each Clearspace page) so that its Browse menu gets a new entry
item, "Say Hello". The <url> element here specifies the URL that
Clearspace will load when the user clicks the item. In this
case, Clearspace will execute the sayhello action (which is defined
in the struts.xml file).
-->
<components>
<component id="user-bar">
<tab id="jiveUserMenu5">
<item id="sayhello" name="Say Hello">
<url><![CDATA[<@ww.url]]></url>
</item>
</tab>
</component>
</components>
</plugin>
Pretty simple, really. The
<plugin> element's children include information about where the plugin is coming from (you), its version (in case you revise it for upgrade), and so on. The
<minServerVersion> element specifies the minimum Clearspace version that the plugin will run on (Clearspace won't actually deploy it on earlier versions). The code here tells Clearspace to add a new "Say Hello" link to the Browse menu on the user bar. It also says which action (as defined in the Struts file you'll create in a moment) should be executed when the user clicks the link.
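For orientation only, the files in this example could be laid out roughly as follows; the jar name is arbitrary and the authoritative packaging rules (including where struts.xml and the compiled classes live) are in the plugin development documentation:

simpleexamples/
    plugin.xml                  (the descriptor above)
    lib/
        simpleexamples.jar      (compiled classes such as SimpleAction, plus struts.xml)
    resources/
        simple-template.ftl     (referenced by struts.xml as
                                 /plugins/simpleexamples/resources/simple-template.ftl)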
A Java action class is the plugin's "model." This is where the plugin gets the data it needs, in this case a simple greeting. A FreeMarker template you create in a moment will display the data.
package com.example.clearspace.plugin.action;
import com.jivesoftware.community.action.JiveActionSupport;
/**
* A "Hello World" plugin that merely receives or returns a greeting.
* The JavaBeans-style accessors in this class are mapped to
* property names in resources/simple-template.ftl, which is the FreeMarker
* template the provides UI for displaying the data. The mapping is
* done by Struts after the class-to-FTL mapping in the
* struts.xml file.
*
* In other words, this class represents the plugin's data "model,"
* the FTL file provides its "view," and the code in the struts.xml
* file provides its "controller."
*/
public class SimpleAction extends JiveActionSupport {
private static final long serialVersionUID = 1L;
private String message = "Hello World";
/**
* Gets the greeting message. This method is mapped by Struts to the
* $message property used in simple-template.ftl. In other words,
* in rendering the user interface, Clearspace (via Struts) maps
* the $message property to a "getter" name of the form
* get<property_name> -- this getMessage method.
*
* @return The greeting text.
*/
public String getMessage() {
return message;
}
/**
* Sets the greeting message. The FTL file doesn't provide a way
* for the user to set the message text. But if it did, this is
* the method that would be called.
*
* @param message The greeting text.
*/
public void setMessage(String message) {
this.message = message;
}
}
Your action has a "view," or user interface. One important thing to notice is that the plugin class you just created and the view code you're about to create are in a way connected by a naming convention. In other words, the
getMessage method in the class is matched up with the message variable in the code below through a convention in which Clearspace knows that removing the method name's "get" and lower-casing the first letter "m" creates a match with the variable. (Although, actually Struts does all that behind the scenes.) You'll enable that mapping through the Struts file you'll create in a moment.
Note: The Struts documentation has introductory information on using FreeMarker with Struts actions.
<html>
<head>
<!-- Create a FreeMarker variable for the page title bar text, then
use that variable in the <title> element. -->
<#assign
<title>${pageTitle}</title>
<content tag="pagetitle">${pageTitle}</content>
</head>
<body>
<!-- Have the message appear a little down on the page and
in the center, so it's easier to find. -->
<br/>
<p align="center">${message}</p>
</body>
</html>
This is where you'll connect the pieces. The code below defines an action that is associated with the SimpleAction action class. A "success" result returned by the class (something it does by default in this case) tells Clearspace to go get the simple-template.ftl template and process it by merging in the data that it retrieved from the class.
<!DOCTYPE struts PUBLIC "-//Apache Software Foundation//DTD Struts Configuration 2.0//EN"
"">
<struts>
<package name="example-actions" namespace="" extends="community-actions">
<!-- Map the action name, sayhello, to the action class, SimpleAction. -->
<action name="sayhello" class="com.example.clearspace.plugin.action.SimpleAction">
<!-- Specify the FTL file that should be used to present the data in the case of
a "success" result (the default for an action class). -->
<result name="success">/plugins/simpleexamples/resources/simple-template.ftl</result>
</action>
</package>
</struts>
That's it for this introduction to action plugins. As you can imagine, your action class could do much more — retrieve data from an external source (or from the Clearspace database using its API), perform calculations on data entered by the user, and so on. You could have multiple FTL files to provide different user interface responses to your plugin's state, such as what is returned by your action class. | http://docs.jivesoftware.com/clearspace/2.5.6/BuildingActions.html | 2012-05-24T04:40:59 | crawl-003 | crawl-003-007 | [] | docs.jivesoftware.com |
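For instance, here's a hypothetical variation (not part of the example above) that greets a name passed as a request parameter; it relies only on what the example already uses, JiveActionSupport plus Struts populating JavaBeans-style setters from request parameters:

package com.example.clearspace.plugin.action;

import com.jivesoftware.community.action.JiveActionSupport;

/**
 * Hypothetical variation: greet a name supplied as a request parameter,
 * e.g. ?name=Pat appended to the action URL (URL pattern assumed for illustration).
 */
public class GreetAction extends JiveActionSupport {

    private String name;       // populated by Struts from the "name" request parameter
    private String message;

    public void setName(String name) {
        this.name = name;
    }

    public String getMessage() {
        return message;
    }

    @Override
    public String execute() {
        // Fall back to the generic greeting when no name was supplied.
        message = (name == null || name.trim().length() == 0)
                ? "Hello World"
                : "Hello, " + name.trim();
        return SUCCESS;
    }
}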
IConcurrentInputMode implementation that uses a StateMachine to manage its state.
This instance does nothing and needs to be subclassed in order to be useful.
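For example, a minimal hypothetical subclass that treats the Escape key as an additional cancel gesture by overriding the documented isCancelEvent hook; everything else is inherited:

package com.example.input {
  import com.yworks.canvas.input.StateMachineInputMode;
  import flash.events.Event;
  import flash.events.KeyboardEvent;
  import flash.ui.Keyboard;

  public class EscapeCancelInputMode extends StateMachineInputMode {
    // Treat ESC as a cancel event in addition to whatever the base class recognizes.
    override protected function isCancelEvent(evt:Event):Boolean {
      if (evt is KeyboardEvent && KeyboardEvent(evt).keyCode == Keyboard.ESCAPE) {
        return true;
      }
      return super.isCancelEvent(evt);
    }
  }
}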
protected var _lastEvent:Event
The last
Event that has been delivered to this instance.
protected var _lastMouseEvent:CanvasMouseEvent
The last
MouseEvent that has been delivered to this instance.
canceledState:State [read-only]
Returns the canceled state of the state machine. The canceled state is the state the machine will be put into if the mode is canceled. This method will create a canceled state using createCanceledState the first time it is queried.
public function get canceledState():State
lastMouseEvent:CanvasMouseEvent [read-only]
Returns the last mouse event.
public function get lastMouseEvent():CanvasMouseEvent
startState:State [read-only]
Returns the start state of the state machine. The start state is the state the machine will be reset to if the mode is reset. This method will create a start state using createStartState the first time it is queried.
public function get startState():State
stateMachine:StateMachine [read-only]
Gets the state machine. Upon first access to this instance, the machine will be initialized using initializeStateMachine.
public function get stateMachine():StateMachine
stopEventRecognizer:Function [read-only]
An event recognizer for the state machine that is triggered if this mode has been stopped.
protected function get stopEventRecognizer():Function
stoppedState:State [read-only]
Returns the stopped state of the state machine. The stopped state is the state the machine will be put into if the mode is stopped. This method will create a stopped state using createStoppedState the first time it is queried.
public function get stoppedState():State
public function StateMachineInputMode(stateMachine:StateMachine = null, startState:State = null, canceledState:State = null, stoppedState:State = null)
Creates a new instance using the given state machine and states.
override public function cancel():void
Runs the machine using the cancel and reset events, releases the input mutex and returns.
protected function createCanceledState(machine:StateMachine):State
Factory method that creates a canceled State for the given machine.
This implementation automatically connects the returned state to the start state.
protected function createStartState(machine:StateMachine):State
Factory method that creates a start State for the given machine.
protected function createStateMachine():StateMachine
Factory method that creates the state machine.
protected function createStoppedState(machine:StateMachine):State
Factory method that creates a stopped State for the given machine.
This implementation automatically connects the stopped state to the start state.
protected function initializeStateMachine(machine:StateMachine, startState:State, canceledState:State, stoppedState:State, finishedState:State):void
Called to initialize the state machine. This implementation does nothing.
override protected function installCore(context:IInputModeContext):void
Installs this mode into the given canvas. Subclasses should override this method and call super.install(canvas) first. One-time initialization should be performed in the initialize method. The install method will call the initialize method the first time this mode gets installed. This implementation calls installListeners.
protected function installListeners():void
Installs all necessary listeners to trigger the run method.
This implementation registers for all mouse events and keyboard events.
protected function isCancelEvent(evt:Event):Boolean
Method that identifies an event as a cancel event.
protected function isDisabledEvent(evt:Event):Boolean
Method that identifies an event as a disabled event.
protected function isEnabledEvent(evt:Event):Boolean
Method that identifies an event as an enabled event.
protected function isMutexAquiredEvent(evt:Event):Boolean
Method that identifies an event as a mutexAquired event.
protected function isMutexLostEvent(evt:Event):Boolean
Method that identifies an event as a mutexLost event.
protected function isStopEvent(evt:Event):Boolean
Method that identifies an event as a stop event.
protected function onCancelStateEntered(evt:StateChangeEvent):void
Called when the cancel state has been entered.
This implementation will release the input mutex and reset the preferred cursor. This will trigger another run of the machine which will normally bring the machine back to the start state.
protected function onDisable():void
Runs the state machine using a disable event.
protected function onEnable():void
Runs the state machine using an enable event.
protected function onMachineReset():void
override protected function onMutexObtained():void
Runs the state machine using a mutex obtained event.
override protected function onMutexReleased():void
Runs the state machine using a mutex lost event.
protected function onRun(evt:Event):void
Callback method that will be called after the state machine has been run using the arguments provided.
This will trigger a run event.
protected function onStopStateEntered(evt:StateChangeEvent):void
Called when the stopped state has been entered.
This will trigger another run of the machine which will normally bring the machine back to the start state.
public function resetMachine():void
Runs the machine using a special reset event.
public function run(evt:Event):void
Tries to run the virtual machine using the pair of source and event argument to determine which transition to take.
If this method is called reentrantly it will not immediately execute the transition but queue the event.
protected function setLastMouseEvent(evt:CanvasMouseEvent):void
protected function setPreferredCursorTransition(cursorClass:Class):Function
Factory method that can be used to obtain a listener implementation that sets the given preferredCursor.
protected function setResetCursorTransition():Function
Factory method that can be used to obtain a listener implementation that resets the preferredCursor.
override public function stop():Boolean
Runs the machine using a special stop event.
If the machine arrives at the startState, this method will release the input mutex and return true.
override protected function uninstallCore(context:IInputModeContext):void
Uninstalls this mode from the canvas.
Subclasses should always call
super.uninstallCore( canvas ) as the last
statement.
This implementation calls
uninstallListeners.
protected function uninstallListeners():void
Uninstalls all listeners this instance has installed on calling
installListeners().
Read Authentication and Authorization and then come back to this document.
Clearspace doesn't support any SSO solutions out of the box. However, we provide an example SSO plugin to ease implementation of this. See the explanation in Authentication and Authorization, and get the code from subversion here. Reading over this code and this document is the best way to see how it's done.
You can as of version 2.5.x.
If you used Jive Forums and Clearspace 1.0, you'll be familiar with the AuthToken, which was manually passed around into our proxy layer to check permissions. In 2.x, you no longer have this AuthToken to pass around. You have an Authentication object stored in your SecurityContext, automatically stored there for you and accessible everywhere.
Here is an example of how to get your current context
SecurityContextHolder.getContext().getAuthentication();
There is a class dedicated to doing this called the SystemExecutor. Here is an example of how to use it:
SystemExecutor<Boolean> exec = new SystemExecutor<Boolean>(this.authProvider);
Callable<Boolean> callable = new Callable<Boolean>() {
    public Boolean call() throws Exception {
        boolean result = registrationManager.validateAccount(userIDFinal, validationFinal);
        return Boolean.valueOf(result);
    }
};
success = exec.executeCallable(callable);
If you're using Spring to inject beans into your context (object) then you can trust that you have secured objects. CS implements security by wrapping your objects in a proxy layer that handles permissions. If you access Jive services directly through JiveApplication, use JiveApplication.getEffectiveContext() to get objects that are appropriately proxied. In most cases you don't want to create your own Jive domain objects (such as Blog, BlogPost); you will almost always obtain those through a manager. There are very few cases where you'd want to instantiate your own objects, and if you do be very sure you're securing things to check appropriate permissions. The Action classes like CreateBlogPostAction, CreateDocumentAction is often the best reference for seeing how to create domain objects securely.
This is destined to be the permissions system of the future. It's already in place for blogs. It's fully functional, so it would be a good place to implement permissions for customizations or new content types. A good place to look to understand the API is the BlogPermHelper, or the set of functional tests for the API.
We've centralized all permission logic in the system into a set of classes called *PermHelper, e.g. BlogPermHelper, JiveContainerPermHelper, ProjectPermHelper. The proxy layer makes calls into these to do the checks.
For both DWR and web services, authorization checks are made at the proxy layer. DWR uses the same authentication session as the main web application. REST uses basic HTTP auth by default. In general, web services try to authenticate each request. | http://docs.jivesoftware.com/clearspace/2.5.5/SecurityFrequentlyAskedQuestions.html | 2012-05-24T04:31:11 | crawl-003 | crawl-003-007 | [] | docs.jivesoftware.com |
Overview
Griffon 0.9.2-rc1 – "Aquila audax" - is a maintenance release of Griffon 0.9.
New Features
Buildtime
Configuration flags
All of the command options described in section 4.6 Command Line Options of the Griffon Guide can now be specified in
griffon-app/conf/BuildConfig.groovy or
$USER_HOME/.griffon/settings.groovy.
Logging
Runtime
Breaking Changes
Dependencies
The jar
org.springframework.test is no longer provided for the test configuration.
Sample Applications
Griffon 0.9.2-rc1. | http://docs.codehaus.org/display/GRIFFON/Griffon+0.9.2-rc1 | 2012-05-24T05:53:43 | crawl-003 | crawl-003-007 | [] | docs.codehaus.org |
files you'll want to customize for each theme. Note that this could be different custom versions of the same file for different themes.
Use browser-based tools such as the Firefox Web Developer and Firebug plugins for the Firefox web browser. These tools are useful for figuring out which CSS classes you should edit to change a particular part of the UI. Here's a shot of Firebug displaying HTML markup and CSS style corresponding to a blog post title.
Use the admin console to create and save your templates, but edit them with your own editor (preferably one that features syntax coloring).
The spider generates Requests for the URLs specified in start_urls and uses the parse method as the callback function for those Requests.
In the callback function, you parse the response (web page) and return either Item objects, Request objects, or an iterable of both; any returned Requests are then downloaded by Scrapy and their responses handled by the specified callback.
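For illustration, a minimal spider sketch (the site URL and CSS selectors are placeholders, and the syntax assumes a reasonably recent Scrapy release):

import scrapy

class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = ["http://quotes.toscrape.com/"]

    def parse(self, response):
        # Return extracted items for this page...
        for quote in response.css("div.quote"):
            yield {"text": quote.css("span.text::text").get()}
        # ...and return further Requests; their responses are handled by the same callback.
        next_page = response.css("li.next a::attr(href)").get()
        if next_page is not None:
            yield scrapy.Request(response.urljoin(next_page), callback=self.parse)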
New and Noteworthy
Sonar support
There is a new
sonar plugin, which allows you to analyse your project using Sonar from your Gradle build.
Small changes
- The idea plugin now provides an ideaModule.moduleName property, to allow you to customize the name of the generated IDEA module.
- The application plugin now provides an applicationName property, to allow you to customize the name of the application.
- The CopySpec.into(dest) and CopySpec.into(dest) { } methods now accept a closure as the destination path, to allow for lazy evaluation of the destination path (see the sketch below).
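A rough build.gradle sketch tying these items together (the names and paths are made up, and the exact DSL may differ slightly between milestones):

// build.gradle
apply plugin: 'idea'
apply plugin: 'application'
apply plugin: 'sonar'

// Customize the generated IDEA module name.
ideaModule.moduleName = 'my-custom-module'

// Customize the application name used by the application plugin.
applicationName = 'my-app'

task copyDocs(type: Copy) {
    from 'src/docs'
    // Destination path evaluated lazily via a closure.
    into { "build/docs-${version}" }
}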
Migrating from 1.0-milestone-1
Gradle 1.0-milestone-2 Breaking Changes
Fixed Jira Issues
Installing Two-Factor Authentication
The Magento Admin (the password-protected back office of your store where orders, catalog, content, and configurations are managed) provides all access to your store, orders, and customer data. To further increase the security of your Magento instance, add Magento Two-Factor Authentication (2FA), v3.0.0. Installing and enabling this module adds two-step authentication for all users attempting to access the Admin, on all devices. All features and requirements are restricted to Admin user accounts, not extended to customer accounts.
At this time, Two-Factor Authentication can be installed only from the command line.
Two-Factor Authentication gives you the ability to:
- Enable authenticator support for the Admin.
- Manage and configure authenticator settings globally or per user account.
- Reset authenticators and manage trusted devices for users.
You can install the module using the following composer command:
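For example (the package name below is an assumption based on the module version listed above; confirm the exact name in your Marketplace purchase or the module's composer.json before running it):

composer require msp/twofactorauth   # assumed package name for the 2FA v3.x module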
If you already have installed a Magento instance, you need to run the following commands to enable the module:
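A typical enable sequence looks like the following; the module name is an assumption, so check bin/magento module:status for the exact name registered on your instance:

bin/magento module:enable MSP_TwoFactorAuth   # module name is an assumption
bin/magento setup:upgrade
bin/magento cache:flush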
After enabling and configuring Magento Two-Factor Authentication, your staff will need to:
For example, you can install the Google Authenticator app to a mobile device such as a smart phone or tablet. Depending on the OS, you can download and install the authenticator from Google Play or iOS App Store.
Additional logins to the Admin from new devices will require the entered code or attached device.
Follow the instructions in the user guide to enable and configure Two-Factor Authentication. | https://docs.magento.com/m2/2.2/ee/user_guide/magento/extension-install-two-factor-authentication.html | 2019-12-05T21:06:00 | CC-MAIN-2019-51 | 1575540482038.36 | [array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
Visual Class
Definition
public : class Visual : CompositionObject
struct winrt::Windows::UI::Composition::Visual : CompositionObject
public class Visual : CompositionObject
Public Class Visual Inherits CompositionObject
- Inheritance
- Visual
- Attributes
Windows 10 requirements
Remarks
Visual objects compose and render serialized drawing content and form the basis of a retained mode visual system. The Visual class supports basic position and clipping and can have 2D and 3D transformations applied to them. Additional functionality like solid colors, images, and content with effects is provided through subclasses like SpriteVisual or ContainerVisual, and by setting the Brush property of the visual to CompositionBrush subclasses such as CompositionColorBrush, CompositionEffectBrush, or CompositionSurfaceBrush.
Visual objects are thread-agile and not bound to the UI thread.
Animatable properties
The following properties can be animated. Call CompositionObject.StartAnimation to associate the property with a CompositionAnimation.
- Size
- Offset
- Opacity
- Orientation
- CenterPoint
- RotationAngle
- RotationAngleInDegrees
- RotationAxis
- TransformMatrix
Rotation
Visual supports two forms of rotation:
axis-angle
Axis-angle rotation uses the RotationAngle, RotationAxis, and CenterPoint properties to specify the rotation in degrees, which axis to rotate around, and the center point of the visual to rotate around.
orientation
Rotation by orientation uses the Orientation property to specify a quaternion describing an orientation and rotation in 3D space.
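As a minimal C# sketch (assumptions: a XAML UWP app, with myElement being an existing UIElement and the statements placed inside a page's code-behind), animating Offset and applying an axis-angle rotation might look like this:

using System;
using System.Numerics;
using Windows.UI.Composition;
using Windows.UI.Xaml.Hosting;

// Get the backing Visual of a XAML element and its Compositor.
Visual visual = ElementCompositionPreview.GetElementVisual(myElement);
Compositor compositor = visual.Compositor;

// Animate the Offset property with a key-frame animation.
Vector3KeyFrameAnimation slide = compositor.CreateVector3KeyFrameAnimation();
slide.InsertKeyFrame(1.0f, new Vector3(200f, 0f, 0f));
slide.Duration = TimeSpan.FromMilliseconds(500);
visual.StartAnimation("Offset", slide);

// Axis-angle rotation: 45 degrees around the Z axis, centered on the element.
visual.CenterPoint = new Vector3((float)myElement.ActualWidth / 2, (float)myElement.ActualHeight / 2, 0f);
visual.RotationAxis = new Vector3(0f, 0f, 1f);
visual.RotationAngleInDegrees = 45f;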
Version history
Properties
Methods
See also
Setting up a notary service¶
Corda comes with several notary implementations built-in:
- Single-node: a simple notary service that persists notarisation requests in the node’s database. It is easy to set up and is recommended for testing, and production networks that do not have strict availability requirements.
- Crash fault-tolerant (experimental): a highly available notary service operated by a single party.
- Byzantine fault-tolerant (experimental): a decentralised highly available notary service operated by a group of parties.
Single-node¶
To have a regular Corda node provide a notary service you simply need to set appropriate
notary configuration values
before starting it:
notary : { validating : false }
For a validating notary service specify:
notary : { validating : true }
See Validation for more details about validating versus non-validating notaries.
For clients to be able to use the notary service, its identity must be added to the network parameters. This will be done automatically when creating the network, if using Network Bootstrapper. See Networks for more details.
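For reference, running the bootstrapper over a directory of node configuration files looks roughly like this (the jar file name and version vary by release, and the directory path is a placeholder):

java -jar corda-tools-network-bootstrapper-<version>.jar --dir ./build/nodes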
Crash fault-tolerant (experimental)¶
Corda provides a prototype Raft-based highly available notary implementation. You can try it out on our notary demo page. Note that it has known limitations and is not recommended for production use.
Byzantine fault-tolerant (experimental)¶
A prototype BFT notary implementation based on BFT-Smart is available. You can try it out on our notary demo page. Note that it is still experimental and there is active work ongoing for a production ready solution. Additionally, BFT-Smart requires Java serialization which is disabled by default in Corda due to security risks, and it will only work in dev mode where this can be customised.
We do not recommend using it in any long-running test or production deployments. | https://cncorda.readthedocs.io/zh_CN/latest/running-a-notary.html | 2019-12-05T20:25:57 | CC-MAIN-2019-51 | 1575540482038.36 | [] | cncorda.readthedocs.io |
Using the DataStax Installer to install (root permissions required)
Instructions for installing DataStax Enterprise 5.0 using the DataStax Installer when you have root permissions. You can install or upgrade on any Linux-based platform using this installer.
Instructions for installing DataStax Enterprise using the DataStax Installer when you have root permissions. You can install or upgrade on any Linux-based platform using this installer. If you don't have root permissions, use Using the DataStax Installer to install (root permissions not required). To install earlier versions, see Installing DataStax Enterprise 5.0.x patch releases.
Prerequisites
- Be sure your platform is supported.
- Root or sudo access.
In a terminal window:
- Download the DataStaxEnterprise-5.0.15-linux-x64-installer.run installer.
- To view the installer help:
./DataStaxEnterprise-5.0.15-linux-x64-installer.run --help
Help displays a list of the available options and their default settings.
- Start the installation:
- No configuration parameters:
sudo ./DataStaxEnterprise-5.0.15-linux-x64-installer.run
sudo ./DataStaxEnterprise-5.0.15-linux-x64-installer.run --mode text
- Configuration parameters:
sudo ./DataStaxEnterprise-5.0.15-linux-x64-installer.run --prefix /usr/share/dse --enable_vnodes 0 ## Command line option.
Services and Utilities:
- Service Setup:
- No Services: This installation sets up DataStax Enterprise as a standalone process. It does not require root or sudo access. See Using the DataStax Installer to install (root permissions not required).
- Services and Utilities: This installation sets up DataStax Enterprise as a service. It installs DataStax Enterprise in system locations.
- Install Type
- Simple: Installs DataStax Enterprise using the default path names and options:
- Advanced: Allows you to configure path names and options:
- Set up the node:
- the node type:
- Set up remaining options:
The available options depend on the type of installation, permissions, and your previous selections.
- Optional: Set up the Preflight Check:
The Preflight tool is a collection of tests that can detect and fix a node's configuration. The tool can detect and fix many invalid or suboptimal configuration settings. It is not available in tarball or No Services installations.
- Change the default user and user group (Advanced Installations only):
- If DataStax Enterprise is not already running:
sudo service dse start
Note: For other start options, see Starting DataStax Enterprise as a service.
- Verify that DataStax Enterprise is running (the output differs depending on whether vnodes are enabled):
nodetool status
What's next
Changing the default superuser
You can change the default superuser from the default cassandra user.
By default, each installation of Cassandra includes a superuser account named cassandra whose password is also cassandra. Superuser permissions allow creation and deletion of other users and the ability to grant or revoke permissions.
- Restrict rights of users as appropriate for security. For example, do not allow access to other keyspaces.
- Follow these steps to change the default superuser.
At installation, OpsCenter Lifecycle Manager prompts you to change the default superuser password.
Procedure
- Configure authentication if you have not already done so.
- Create another superuser, not named cassandra, using the CREATE ROLE command, and then authorize roles to access the database objects by using CQL to grant them permissions on those objects.
CQL supports the following authentication statements: | https://docs.datastax.com/en/datastax_enterprise/5.0/datastax_enterprise/unifiedAuth/chgDefaultSuperuser.html | 2019-12-05T19:43:33 | CC-MAIN-2019-51 | 1575540482038.36 | [] | docs.datastax.com |
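For illustration, hedged examples of the kind of statements involved (role names, keyspace names, and passwords are placeholders):

-- Create a replacement superuser.
CREATE ROLE dba WITH SUPERUSER = true AND LOGIN = true AND PASSWORD = 'ChangeMe123!';

-- After logging in as the new superuser, lock down the default account.
ALTER ROLE cassandra WITH PASSWORD = 'SomeLongRandomPassword' AND SUPERUSER = false;

-- Grant an application role only the access it needs.
GRANT SELECT ON KEYSPACE app_data TO app_user;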
ASPxScheduler, as well as other DevExpress ASP.NET controls, offer an advanced client-side API, in addition to the comprehensive server-side object model. This enables web applications based on DevExpress web controls to work more efficiently, using a combination of server-side and client-side processing.
The client-side API is implemented in the JavaScript file. This file is sent to the client browser when a web application that uses the web control is run. The filename is the same as the name of the control. The client API becomes available if the ASPxScheduler.ClientInstanceName property is specified or any client-side event is handled.
Do not confuse the properties and methods of server and client-side controls. Often they have identical or similar names, but this is not always the case. To identify an ASPxScheduler control in the server code, you should use the Name of this control. To get access to the ASPxClientScheduler using JavaScript, you should specify its ClientInstanceName.
Appointments on the client side are represented by the ASPxClientAppointment class objects. They differ from server-side appointments in IDs and in the set of properties available. Use the client-side method ASPxClientScheduler.GetAppointmentById to get the client-side appointment by its ID or the server-side method ASPxScheduler.LookupAppointmentByIdString to get the server-side appointment by its client ID.
Often, you'll find it useful that client appointments always possess certain properties taken from the list of defined mappings, with values retrieved from the corresponding properties of the server-side appointments. The ASPxScheduler.InitClientAppointment event is helpful in this case. Handle this event to specify appointment properties available at the client side. This event fires for each visible appointment before it is sent to the client for display, so you can vary the properties and custom fields. You can also create a specific property to send arbitrary data to the client side.
When it is necessary to obtain server information on the client side, use the JavaScript Custom Properties mechanism. The ASPxScheduler.CustomJSProperties event enables you to declare temporary client properties to store the necessary information. Once declared, a property can be accessed on the client, using common syntax.
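A hedged client-side sketch (assuming ClientInstanceName is set to "scheduler" and that a custom property was declared in the server-side CustomJSProperties handler under the key "cpTimeZone"; both names are made up for illustration):

// Look up a client-side appointment by its client ID.
var apt = scheduler.GetAppointmentById(appointmentId);
// Read a temporary client property declared server-side via CustomJSProperties.
var timeZone = scheduler.cpTimeZone;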
Although the entire client API is available on the client side, it's strongly recommended that only the documented public client-side API are used to realize any custom client functionality.
By default, for all end-user actions that do not require re-rendering of the Scheduler's layout, the client Scheduler receives the required data in JSON and updates its view accordingly. The client-side rendering greatly reduces the amount of markup that should be produced on the server, which significantly improves a web application's overall performance. The value of the ASPxScheduler.EnableClientRender property determines whether or not client-side rendering is used.
The following end-user actions trigger client-side rendering of appointments and view elements.
Keep in mind that certain customization scenarios require appointments and view elements to be rendered on the server-side. When such customizations are performed, client-side rendering of corresponding elements is automatically turned off, which can affect your web application's performance.
Using the following customization scenarios automatically disables client-side rendering of view elements.
Using templates to customize the presentation of appointments disables client-side rendering of both appointments and views. | https://docs.devexpress.com/AspNet/5215/aspnet-webforms-controls/scheduler/concepts/client-side-functionality | 2019-12-05T19:55:56 | CC-MAIN-2019-51 | 1575540482038.36 | [] | docs.devexpress.com |
Recurly is integrated with Adyen to support credit / debit cards. With this new integration, you get access to yet another enterprise-level, global payment processor.
Supported payment methods
The current API integration supports credit/debit cards only. There are additional payment methods available, such as SEPA, via our integration with the Adyen Hosted Payment Pages. If you are interested in processing other payment methods through Adyen, please contact Recurly Support and tell them specifically which payment methods you wish to accept.
Accepting payments through Adyen
To start processing payments through Recurly and Adyen, you must follow the instructions below on how to configure your account in Adyen and how to configure the Adyen gateway in Recurly.
Configuration in Adyen
- Log in to the Adyen Customer Area.
- Click “Settings” >> “Users” >> “Add New User”.
- Create a new user with webservice role. A username and password is automatically generated for you. You will need the username and password while configuring your Adyen gateway in Recurly. Note: The password is not visible, so it’s best to change the password and note down the new password for when you enter it in Recurly.
- Enable the Webservice user with the "Checkout Webservice role without CSE".
- If you are within scope of the PSD2 Mandate, please be sure to enable support for "Network Transaction References". Once enabled, a new transaction ID value will be returned by Adyen when processing transaction requests.
- Have Adyen support enable the “API PCI Payments role” for your web services user. Recurly will process transactions through Adyen by invoking their API’s. The API permission on Adyen merchant accounts is not enabled by default.
- Have Adyen support also enable the “Acquirer Result” and “Raw Acquirer Result” in the API response. The API responses will include CVV and AVS response codes. You will be able to view this data is Recurly transaction details page.
- Configure the callbacks URL for the merchant account under “Settings” >> “Server Configuration” >> “Standard Notifications”
- Enter the value<MERCHANT_SUBDOMAIN> where "merchant_subdomain" is the subdomain for your Recurly site.
You can find your Recurly subdomain by looking at the URL when logged into Recurly. In the example above, "kalekrate" is the subdomain. The Adyen URL for responses for this example customer would be
- Ensure that the "Active" checkbox is checked per the picture below:
Configuration in Recurly
- Add the Adyen gateway under Configuration >> Payment Gateways >> Add New Gateway
- Provide username, password, and merchant account from Adyen configuration dashboard
- Custom Endpoint (recommended - see below)
- Select Zero Dollar Authorization for all the card types
- Save configuration
Custom Endpoint: Adyen offers custom endpoints to merchants so that your payments are processed without any issues. We strongly recommend that you take advantage of this new Adyen feature.
Custom Endpoint field in Recurly: Adyen will provide you with a URL like this https://<f96e63be5147-TestCompany>.pal-live.adyenpayments.com/pal/servlet/Payment/v18/authorise.
For production, you need to enter this portion of the endpoint into the Custom Endpoint field in Recurly "f96e63be5147-TestCompany" and not the entire URL.
For testing (sandbox Recurly + sandbox Adyen), you can enter any value. For example, you could enter "test" into this field, though please keep in mind that it'll need to be updated later once you're ready to move to production.
Whitelisting IP addresses
In some cases, Adyen will use additional IP addresses for certain customers. These IP addresses need to be whitelisted in Recurly before transactions can be successfully processed. Please contact Recurly Support before you move your site into Production mode so that we can work with Adyen to get the IP addresses and have them whitelisted.
Note: Recurly submits the purchase transactions to Adyen with a flag to capture the payments immediately (overriding your configuration in Adyen).
Note: Recurly's integration with Adyen does not currently support Level 2 / Level 3 card data.
Recurly offers support for ACH transactions via Adyen. To learn more about ACH payments, visit our ACH Bank Payments docs page.
To set up your ACH gateway, add the 'Adyen ACH' gateway from the "Add Payment Gateway" page, and complete the process for adding a gateway outlined here.
Also ensure that the configuration outlined below is completed in order to properly process ACH transactions.
With Recurly's integration with Adyen's Hosted Payment Pages (HPP), you gain the ability to accept the payment methods currently available via Adyen’s hosted pages for one-time and/or recurring subscription purchases through Recurly. We have done the work of integrating with Adyen's HPP for you so all you need to do is make one simple update to your Recurly integration in order to get access to the local payment methods your customers want to use. If you are interested in using Adyen's HPP through Recurly, please contact Recurly Support.
The payment methods we currently offer on Adyen HPP are:
- SEPA
- iDEAL - recurring charges are billed in SEPA.
- SOFORT (Klarna Instant Bank Transfer) - recurring charges are billed in SEPA.
- Qiwi - Supported for one-time purchases only. The customer flow requires entering a code that is sent to them via SMS/Text, which is not supported with our integration for recurring payments.
NOTE:
- Adyen's HPP supports a long list of payment methods. Enabling other payment methods on Adyen HPP beyond those listed above requires additional Recurly engineering effort. Therefore, if you need other payment methods for your business and customer base, please let us know when you contact Support. We will prioritize adding payment methods based based on this demand. (Not all payment methods support recurring payments).
- ACH is a bank payment method available in the United States only. It is distinct and different from other bank payment methods available in other regions or countries, such as SEPA (EU), BACS (UK), and BECS (AU).
How it works
From a customer perspective
From a customer perspective, when paying with a payment method available through the Adyen HPP, the experience is similar to paying with PayPal where the customer is redirected from your checkout page to a new page hosted by Adyen where they select which payment method they want to use, and then they visit the appropriate page to enter their payment info. When they are done entering their payment info, they return to your checkout page.
You can customize the Adyen hosted pages so that it looks and feels like your own checkout page. Review the Adyen HPP skin documentation for information on how to set up and configure their hosted pages.
From a merchant perspective
From a merchant perspective, when the customer lands on your checkout page and indicates that they want to purchase using one of the payment methods available on via the Adyen HPP, you make an API call to Recurly with some details about the customer. We calculate the amount due and pass that along to Adyen when you redirect the customer to the Adyen HPP. When the customer completes the purchase, Adyen provides Recurly with the payment details, and we use the information to update our systems with a record of what happened. For more detailed information on the API integration, please refer to the API integration section of this page.
After your customer completes their purchase, the information about the customer, subscription, invoice and payment is available in Recurly (since Recurly records the payment method returned from Adyen), via the API, and is included in our webhooks. With this information, you can see which payment methods are performing best, and adjust your business accordingly.
For example, on the /transactions page you'll see the payment method used for payments captured via the Adyen HPP listed alongside the payments captured through Recurly.js. This same data appears in the transactions export, the relevant Recurly APIs, and in the relevant Recurly webhooks.
For refunds / voids
Refunds and voids work the same way as other payments captured through Recurly.js
For customer updates to billing information
If your customer wants to update their billing information (e.g. replace their current SEPA details with new SEPA details) you can direct them to Recurly's hosted pages to make the update. Adyen HPP does not support the concept of an "update" to existing billing information, so in fact what will happen is that Recurly will charge the new billing info for the minimum amount allowed for the payment method, and when Adyen confirms that the new billing information is valid (i.e. that the new purchase was approved / completed) then Recurly will automatically refund the purchase.
A note about asynchronous payment methods
Some of the payment methods available via the Adyen HPP, like SEPA, are asynchronous - i.e. they take several days for the payment to settle. Because of this nature, purchases made with a payment method that is asynchronous are handled slightly differently in Recurly:
- The subscription is marked as "active" as soon as your customer successfully completes their purchase, but since the payment hasn't actually been approved by the customer's bank, the invoice and it's related transaction remain in a "processing" state until Adyen confirms that the payment has been approved.
- At that time, Recurly will issue a "processing payment" webhook to any endpoints you have configured in Recurly.
- At that time, Recurly will also issue a "payment processing" email to your customer (if you have enabled that email).
Until we receive confirmation from Adyen that the asynchronous payment was actually approved, Adyen will not successfully process any additional payment requests against that billing information.
- Adyen returns a token for Recurly to use for subsequent purchases as soon as the billing information is successfully entered, but the token is in an "unverified" state because the payment hasn't actually been approved by the customer's bank.
- Within 48 hours, Adyen receives confirmation from the customer's bank that the payment has been approved by the customer's bank. When Adyen gets that confirmation, they issue a notification to Recurly that alerts us to the fact that the token is now "verified."
- Only when the token is "verified," can additional purchases and subscription renewals be made against the stored billing information on the account. Before the token is confirmed as being "verified" additional purchases attempted will fail.
When Adyen receives confirmation from the bank that the payment has been approved:
- Adyen issues a notification to Recurly that the transaction has been approved. At that time, Recurly will update the status of the transaction and invoice in our system with the appropriate status.
Dunning and retries:
- Invoices paid for with an asynchronous payment method that enter dunning must also be treated differently than invoices paid for with a synchronous payment method. Refer to the description of PayPal eChecks for an example showing the sequence of events and what happens in each step to the invoice and transaction.
Configuring Adyen to send Recurly the status updates
- In order to correctly reflect the Adyen HPP transaction status in Recurly, it is important that you configure Adyen to send the appropriate notifications and reports to Recurly. Please refer to the "Configuration" section below for details:
Configuration
Enable payment methods
Adyen's HPP supports a long list of payment methods. Contact Adyen and ask them to enable their HPP for the payment methods you are interested in.
NOTE: Be sure that Recurly Support is aware of the payment methods that you want to process before you start processing payments.
- Please note that some of these payment methods available on Adyen's HPP are suitable for recurring payments, and some are not. Look at the "Recurring" column of the tables on the Adyen site. If Recurring = "Yes" then Recurly can use that payment method for subscription purchases and one-time purchases. If Recurring = "No" then Recurly can only use the payment method for one-time purchases.
- If you are accepting credit / debit cards (e.g. Visa, Mastercard, JCB, Diners), these payment methods should be used with the standard Recurly.js implementation. This ensures that Recurly vaults the card information and can provide you with the benefit of using Recurly's Account Updater service should a payment be declined.
Set up a skin code for new purchases
- In the Adyen portal, navigate to Skins
- Add a new skin, and give it a description that makes it clear that the skin will be used for the Recurly / Adyen HPP integration
- In the skin configurations, click the button to "Generate new HMAC key" and copy / paste that value into the "HMAC KEY" field in the Recurly gateway configuration.
- Click "Payment options" and enable the payment methods that you want customers to be able to choose. If you don't see a payment method that you want to enable on your HPP, contact Adyen and ask them to make the payment method available.
- Complete any other configuration you want to make of your skin, and be sure to save all changes
Set up a skin code for payment info updates (e.g. HAM)
You need to set up at least 2 skin codes. The one above is used for your checkout page. The second skin you need to configure is for when your customer wants to update his/her billing information. Repeat the same steps as above, but with the following differences:
- The skin description should make it clear that this skin is to be used for billing info updates for the Recurly / Adyen HPP integration
- In the skin configurations, click the button to "Generate new HMAC key" and copy / paste that value into the "HMAC KEY BILLING UPDATES" field in Recurly. Point this skin's result URL at SUBDOMAIN.recurly.com/account/adyen_update (replace "SUBDOMAIN" with your Recurly site's subdomain).
Configure the "report credentials"
- Log into the Adyen portal, select "settings" and "users"
- If you don't yet have a "reporting user" create a new user and select "Report User" for the "User Type":
- Enter the report user name in the "REPORTS USERNAME" field in Recurly and the password in the "REPORTS PASSWORD" field in Recurly
- Navigate to Reports, and scroll down to the "Finance" section of the page
- Click the "Subscribe" button for the Payment Accounting Report
- Select "comma-separated value", and click save to subscribe to the report
Configure Adyen to send notifications to Recurly
- In the Adyen portal, navigate to Settings / Server Communication
- There are 4 notifications that you need to add in Adyen to ensure that Recurly receives all of the updates we need:
- Direct-Debit Pending Notification
- Generic Pending Notification
- Report Notification - NOTE: be sure to select "Report Available"
- Standard Notification - NOTE: be sure to select "Authorisation":
API Integration with Recurly
The normal flow for registering a purchase using Recurly.js normally begins with a payment natively handled by Recurly, i.e. through Recurly.js, the customer enters their billing information first, and then when we have authorized the billing info and know that it's valid, then, you create the subscription on the account and we charge the billing information on the account for the purchase. When using the Adyen HPP, however, there is no separate authorization of the billing information separate from the purchase of the subscription or one-time product. Instead, by the time the customer finishes interacting with the Adyen HPP, the payment has already been sent, billing information has been saved with Adyen, and the charge has been made. Therefore, in Recurly we need to record the fact that the charge has already happened.
The recommended API integration flow for the Adyen HPP is as follows:
Create Pending Purchase using the /purchases endpoint
- Send an API request to
v2/purchases/pending with a subscriptions payload. The API request needs to include the data element "<external_hpp_type>adyen</external_hpp_type>"
- On success, the subscription_UUID will be returned and used in subsequent steps.
The configurations described above are required in order to have a successful response.
Here is an example of the API request payload - note the <external_hpp_type>adyen</external_hpp_type> field
<purchase>
  <collection_method>automatic</collection_method>
  <currency>USD</currency>
  <customer_notes>Some notes for the customer.</customer_notes>
  <terms_and_conditions>Our company terms and conditions.</terms_and_conditions>
  <vat_reverse_charge_notes>Vat reverse charge notes.</vat_reverse_charge_notes>
  <account>
    <account_code>user@example.com</account_code>
    <email>user@example.com</email>
    <billing_info>
      <first_name>andy</first_name>
      <external_hpp_type>adyen</external_hpp_type>
      <country>US</country>
    </billing_info>
  </account>
  <adjustments>
    <adjustment>
      <product_code>4549449c-5870-4845-b672-1d07f15e87dd</product_code>
      <quantity>1</quantity>
      <revenue_schedule_type>at_invoice</revenue_schedule_type>
      <unit_amount_in_cents>1000</unit_amount_in_cents>
    </adjustment>
  </adjustments>
  <subscriptions>
    <subscription>
      <plan_code>basic</plan_code>
      <quantity>1</quantity>
    </subscription>
    <subscription>
      <plan_code>sobasic</plan_code>
      <quantity>1</quantity>
    </subscription>
  </subscriptions>
  <coupon_codes>
    <coupon_code>coupon1</coupon_code>
  </coupon_codes>
</purchase>
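For reference, a hedged curl sketch that posts such a payload to the endpoint named above (the subdomain, API key, and file name are placeholders, and your account may require an explicit API version header):

curl -X POST https://YOUR_SUBDOMAIN.recurly.com/v2/purchases/pending \
  -u YOUR_PRIVATE_API_KEY: \
  -H "Content-Type: application/xml" \
  -d @pending_purchase.xml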
- This will create a record in the Recurly database with serialized information around the account, subscription, invoice, and billing_info. This table is called pending_purchases. This table is invisible to you, as it only serves as a placeholder while we wait for your customer to complete the purchase.
- Capture & Activate Pending Subscriptions
- Invoke the Adyen HPP Modal via Recurly.js, by passing in the invoiceUuid and skinCode. See example. Recurly will pass along the amount of the purchase, along with the desired currency and other information that Adyen needs in order to know how much to charge the customer.
- Your customer will then enter his/her billing information in order to complete their purchase.
- After a successful capture, Adyen returns an API response to Recurly, confirming that the customer completed the HPP form successfully, how much they were charged, the currency etc... When we get confirmation from Adyen that the customer completed the purchase, we create the account and its associated billing info. In addition, in Recurly App you will now see:
For synchronous transactions:
- An active Subscription
- A successful Transaction with a completed state and appropriate payment logo
- A paid Invoice that is closed
- Saved Billing Information
For asynchronous transactions:
- An active subscription
- A "processing" Transaction with the appropriate payment method logo
- A "processing" Invoice that is open
- Saved Billing Information
Final considerations / notes
- Since Adyen is capturing the billing information, Recurly doesn't vault the payment details for invoices paid for via the Adyen HPP. As a result, subsequent renewals are excluded from the Account Updater service. They are, however, included in any retries should the renewal payment fail with a soft decline reason code.
- Certain payment methods enabled through the Adyen HPP have regulations around customer notification (e.g. SEPA requirements about notifying customers about the upcoming renewal). Before you start collecting payments from a new payment method, be sure that your customer experience has been updated to comply with these regulations.
- Recurly supports the exporting of billing info from Adyen into Recurly for recurring subscription renewals for the SEPA payment method (includes SEPA, iDEAL, and SOFORT).
Limitations
- As mentioned above, asynchronous payment methods take up to 48 hours to confirm that the billing information on the account can be used for subsequent payments. Until the account's billing info is verified by Adyen, any payment requests sent to Adyen for subsequent charges and/or new subscriptions on the account will be declined.
- Purchases routed through the Adyen HPP will always result in new accounts being created in Recurly. Existing accounts in Recurly will need to continue using the payment methods available natively in Recurly (e.g. Credit Cards, PayPal, ACH, Amazon Pay etc...).
- Credit and debit card transactions should be processed using RJS and / or the Recurly API, as the Adyen HPP integration does not currently support credit and debit cards.
- Subscriptions purchased through the Adyen HPP cannot have a future start date.
- SEPA only supports payments in EUR. Because of this, SEPA and iDEAL transactions must be charged in EUR to process correctly. | https://docs.recurly.com/docs/adyen | 2019-12-05T20:52:21 | CC-MAIN-2019-51 | 1575540482038.36 | [array(['https://files.readme.io/f8a19da-Screen_Shot_2017-05-04_at_1.57.24_PM.png',
'You can find your Recurly subdomain by looking at the URL when logged into Recurly. In the example above, "kalekrate" is the subdomain. The Adyen URL for responses for this example customer would be https://callbacks.recurly.com/adyen/kalekrate'],
dtype=object) ] | docs.recurly.com |
Contents IT Operations Management Previous Topic Next Topic Cloud administrator responsibilities Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Cloud administrator responsibilities Cloud administrators have two primary responsibilities; configure the Cloud Management service, and monitor and manage the services. Cloud administrators are members of the Virtual Provisioning Cloud Administrators group. Perform the following tasks to prepare the Cloud Management service: Install and configure the Cloud Management application for the provider (for example, AWS, Azure, or VMware) Set properties for Cloud Management Run Discovery on the cloud resources Obtain templates and approve some templates to be used to create catalog items Define catalog items for both VMs and more complex offerings Configure default lease settings Set pricing for the catalog items Define and activate provisioning rules Define and activate tagging rules Define change control parameters for cloud resources Customize the user experience: Provisioning rules and UI policies Define the schedule for downloading billing data Typical day-to-day tasks of a cloud administrator: Approve change requests associated with modifications to cloud resource View pending approvals for cloud resources View and analyze summary data on cloud resource deployments Monitor requests and key metrics for cloud resources The Virtual Provisioning Cloud Administrators group has or inherits these roles: itil cloud_admin cloud_operator cloud_user Cloud Admin portalCloud admins can use the Cloud Admin portal to view charts and reports on metrics, resource optimization, chargebacks, and requests for virtual assets.Change Control for Cloud ManagementA cloud administrator can configure the system to create change requests for specific modifications to virtual machines. Service level management for Cloud ManagementWhen activated, the Orchestration plugins for Amazon EC2, VMware, and Microsoft Azure create service level agreements (SLA) and operational level agreements (OLA) for Cloud Management. Define the schedule for downloading billing dataTo obtain the data that appears on billing reports, a job (scheduled script) downloads the billing data from the provider to the ServiceNow database.Configure default lease settingsProperties specify the default lease period and maximum allowed duration of a virtual server lease for all cloud providers.Define an eviction policy for VM snapshotsTo conserve disk space, you can define an eviction policy that deletes existing snapshots when the count of snapshots equals the specified snapshot limit setting, and when a scheduled job attempts to take a new snapshot.Define provisioning rulesProvision.AWS Cloud administrator tasksThe primary responsibilities of a cloud administrator are to configure the Cloud Management service, and to monitor and manage the services.Azure Cloud administrator tasksThe primary responsibilities of a cloud administrator are to configure the Cloud Management service, and to monitor and manage the services.Configure membership in VMware groupsAdd users to the VMware Approvers and VMware Operators groups.Add IP pools to VMware networksvCenter contains VMware networks that Discovery and the Discover vCenter Data utility add to the CMDB as VMware CIs. A VMware network record names the network and identifies one or more IP pools assigned to it. 
Create a vCenter templateCreate the Windows and Linux templates within vCenter or use existing templates. The system uses the templates when creating VMs.Create a catalog item for VMwareProvision VMware virtual machines using vCenter from the ServiceNow instance. Price a VMware componentAll prices for virtual servers or modifications to virtual servers are calculated from the per-unit price for CPU, Memory, and Data disk size.Guest customizationGuest customizations are operating system-specific customizations for virtual machines.Configure VMware vCenter datastoresYou can modify recently provisioned space and reserved space fields in VMware vCenter datastores to manually adjust space usage. You can also block VM provisioned space or reserve extra space for specified datastores.Related conceptsCloud user groups On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/helsinki-it-operations-management/page/product/azure-cloud-provisioning/concept/c_CloudAdminTasks.html | 2019-12-05T19:47:45 | CC-MAIN-2019-51 | 1575540482038.36 | [] | docs.servicenow.com |
Bitnami Kubernetes Sandbox Stack for Oracle Cloud Infrastructure Classic
Bitnami Kubernetes Sandbox provides a complete, easy to deploy development environment for containerised apps. It is a realistic environment to learn and develop services in Kubernetes.
IMPORTANT: This stack should not be used in production environments.
For more information about Kubernetes, check Bitnami Kubernetes documentation and the official Kubernetes website.
In Bitnami’s official Kubernetes documentation, you can find how-to guides for essential cluster operations, such as:
- Create your first Helm Chart
- Configure RBAC in your Kubernetes Cluster
- Secure Kubernetes Services with Ingress, TLS and LetsEncrypt
For a deeper understanding of Kubernetes API Objects, visit the Kubernetes official documentation.
Need more help? Find below detailed instructions for solving complex issues. | https://docs.bitnami.com/oracle/infrastructure/kubernetes-sandbox/ | 2019-12-05T19:28:32 | CC-MAIN-2019-51 | 1575540482038.36 | [] | docs.bitnami.com |
Image Browser
By default the Insert Image tool opens a simple dialog which allows you to type in or paste the URL of an image and, optionally, specify a tooltip.
Overview
From Q3 2012 release onwards the Editor supports a new way of picking an image by browsing a list of predefined files and directories. Uploading new images is also supported.
The image browser needs a server-side implementation to retrieve and upload the files and directories.
Configuration
create - makes a request with the parameters below and does not expect a response:
{ "name": "New folder name", "type": "d", "path": "foo/" }
read - sends the path parameter to specify the path which is browsed and expects a file listing in the format below:
[ { "name": "foo.png", "type": "f", "size": 73289 }, { "name": "bar.jpg", "type": "f", "size": 15289 }, ... ]
Where name is the file or directory name, type is either an f for a file or a d for a directory, and size is the file size (optional).
destroy - makes a request with the following parameters:
name - the file or directory to be deleted.
path - the directory in which the file or the directory resides.
type - whether a file or a directory is to be deleted (an f or a d).
size - optional, the file size, as provided by the read response.
upload - makes a request to the uploadUrl. The request payload consists of the uploaded file and expects a file object in response:
{ "name": "foo.png", "type": "f", "size": 12345 }
All of these can be changed through the imagebrowser configuration.
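A hedged configuration sketch (the endpoint paths are placeholders; see the ImageBrowser API reference for the full option list):

$("#editor").kendoEditor({
    tools: [ "insertImage" ],
    imageBrowser: {
        transport: {
            read: "/imagebrowser/read",
            create: "/imagebrowser/create",
            destroy: "/imagebrowser/destroy",
            uploadUrl: "/imagebrowser/upload",
            imageUrl: "/content/images/{0}"
        }
    }
});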
See Also
Other articles on the Kendo UI Editor:
- Overview of the Editor Widget
- Post-Process Content
- Set Selections
- Pasting
- Prevent Cross-Site Scripting
- Troubleshooting
- Editor JavaScript API Reference
For how-to examples on the Kendo UI Editor widget, browse its How To documentation folder. | http://docs.telerik.com/kendo-ui/controls/editors/editor/imagebrowser | 2016-09-25T00:17:28 | CC-MAIN-2016-40 | 1474738659680.65 | [array(['/kendo-ui/controls/editors/editor/editor-insert-image.png',
'Insert Image Dialog'], dtype=object)
array(['/kendo-ui/controls/editors/editor/editor-image-browser.png',
'Image Browser Dialog'], dtype=object) ] | docs.telerik.com |
Accessing the underlying PHPCR session¶
PHPCR-ODM builds on top of the PHPCR API. You can access it to do operations not provided by the DocumentManager. However, if you do any data manipulations, you risk getting the DocumentManager out of sync. If you are not sure exactly what you are doing, it is recommended to flush before accessing the PHPCR layer, and then not use the DocumentManager any longer. To flush the operations you did on the PHPCR layer, you can call
SessionInterface::save()
Getting the PHPCR Session¶
The DocumentManager provides access to the PHPCR session.
<?php
$session = $documentManager->getPhpcrSession();
// do stuff
$session->save();
The Node field mapping¶
Using the node mapping, you can set the
PHPCR\NodeInterface to a field of your document.
This field is populated on find, and as soon as you register the document with the manager using persist().
- PHP
<?php
/**
 * @Document
 */
class MyPersistentClass
{
    /** @Node */
    private $node;
}
- XML
<doctrine-mapping>
    <document class="MyPersistentClass">
        <node fieldName="node"/>
    </document>
</doctrine-mapping>
- YAML
MyPersistentClass:
    node: node
When you hit your ball into a black hole, it is transported to the exit and ejected at the angle of the exit at a speed directly relational to the speed your ball was going. Choose → to see which Black Hole goes to which exit and which direction the ball will come out of the exit. The rim around Black Holes are the same color as their corresponding exits. Hit the ball into the black hole, which will then eject the ball into the cup so you can go to the next hole. | https://docs.kde.org/stable4/en/kdegames/kolf/black-holes.html | 2016-10-21T11:15:11 | CC-MAIN-2016-44 | 1476988717963.49 | [array(['/stable4/common/top-kde.jpg', None], dtype=object)] | docs.kde.org |
Django Podcasting¶
Audio podcasting functionality for django sites.
The source code for Django Podcasting can be found and contributed to on django-podcasting. There you can also file tickets.
History¶
Django Podcasting started off as a heavily stripped down version of the wonderful django-podcast. Eventually this app grew enough differences to be useful to others as a reusable application outside of my sandbox. I hope it inspires you to share your sounds with the rest of the world, whatever they may be.
This application can be seen running on:
Differences¶
At the time I had no interest in the video podcasting features of django-podcast, and video introduces a lot of extra complexity into the application, considering I was first studying compliance with the various specs and the syndication feed framework.
This application also differs from django-podcast in that it uses UUID identifiers, supports multiple authors, and makes use of Django's sites framework and syndication feed framework. Podcasting only supports Django 1.3 or greater due to its choice in class-based views, though writing additional views and urls to work with 1.2 would be a trivial task. There are also other less significant differences which may or may not be of relevance to your project.
Nomenclature¶
An individual podcast is a show.
A show has many episodes: 001, 002, etc.
An episode has one or many enclosures (formats such as .flac, .wav or .mp3).
Features¶
- Feeds
- Supports Atom, RSS 2.0, iTunes, and FeedBurner by attempting to match as best as possible their detailed specifications and additionally utilizing Django’s syndication feed framework.
- Multi-site
- Supports Django’s sites framework allowing for great flexibility in the relationships between shows and sites in a multi-site application.
Licensing
To publish a podcast to iTunes it is required to set a license. It is suggested to install django-licenses which provides a light weight mechanism for adding licenses to the shows.
- Serve your media from anywhere
- Podcasting assumes nothing about where your media file will be stored, simply save any valid url to an enclosure object.
- Multiple enclosure types
- Want to offer versions in .ogg, .flac, and .mp3? It’s possible.
- UUID
- Podcasting uses a UUID field for show and episode uniqueness rather than relying on the url.
- Bundled Forms and Templates
- Podcasting comes with some example forms to get you started with for allowing site users ability to manage a show. Generic templates are also bundled to get you started.
To add commenting to your app, you must use a separate Django application. One of the simplest options is django-disqus, but you should also look into django-threadedcomments and Django's built in comments framework.
There is a field on both the Show and Episode models to enable commenting. The default is to enable commenting. To completely disable comments for all of an individual show's episodes, set the enable_comments field on the Show model to False. To disable comments on an individual episode, set enable_comments on the Show model to True and enable_comments on the Episode model to False.
- Draft Mode
You may work on the new episode in draft mode and publish it when ready, simply by checking publish in the Admin. While in draft mode the episode's get_absolute_url returns a link comprised of the show_slug and the episode's uuid, but once live, it uses the show slug, friendlier publish date and episode slug.
Optional Features¶
The following features are expected to work with the most recent versions of the following libraries, if you find an issue please report it on github.
- Thumbnailed Album Artwork
- Install django-imagekit, easy-thumbnails or sorl-thumbnail in your project to get sane defaults and model support for album artwork thumbnails. Either may be added to your project at any time and the django-podcasting app will recognize and use it. It is highly advised to use a thumbnailing app because thumbnailing podcast artwork for iTunes is nontrivial. Support for other thumbnail libraries will be considered for inclusion.
- Taggable episodes and shows
- Install django-taggit to provide tagging support for episodes and shows. Taggit may be added to your project at any time and the django-podcasting app will recognize and support it. Taggit may become a requirement in 1.0 if there are no strong objections.
- Django Podcasting can optionally provide the ability to announce new episodes on Twitter. Install python-twitter to get started.
- Embeddable Media
- Want to display YouTube, Vimeo or SoundCloud content on Episode detail pages? Django Podcasting provides the ability to link to external embeddable media via the podcasting.models.EmbedMedia class. Optionally install django-embed-video for easy embedding of YouTube and Vimeo videos and music from SoundCloud.
Usage¶
There has yet to be a need to configure anything via the
settings.py file and the included templates and forms should be
enough to get started. One area that may be somewhat difficult is
connecting with a commenting application. For the simplest option,
take a look at django-disqus.
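A minimal integration sketch (the URL prefix is arbitrary; the app and URLconf names are taken from the package but should be double-checked against the version you install, and the import style assumes a relatively recent Django release):

# settings.py
INSTALLED_APPS = [
    # ...existing apps...
    "podcasting",
]

# urls.py
from django.conf.urls import include, url

urlpatterns = [
    # ...existing patterns...
    url(r"^podcasts/", include("podcasting.urls")),
]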
Future¶
For the 0.9.x series I’d like to first see if others have interest in this application and fix any issues discovered with the current version.
If there is desire, video support after a 1.0 (audio only) version has been released is possible. | https://django-podcasting.readthedocs.io/en/stable/ | 2021-06-12T17:45:01 | CC-MAIN-2021-25 | 1623487586239.2 | [] | django-podcasting.readthedocs.io |
If your website uses Cloudflare, you might need to explicitly allow access for our cookie-scanner. This guide explains how to do that from your Cloudflare account.
Legal Monster's cookie-scanner is now an approved "good bot" in Cloudflare's protection framework, so this guide should no longer be needed unless you have a particularly strict custom setup in your Cloudflare firewall rules.
By default, Cloudflare protects your website against automated scans. We are in the process of becoming listed as a known "good bot" with Cloudflare. Until they have completed that process, you might need to add a bypass rule to your Cloudflare account, to prevent their service from blocking our cookie-scanner.
To explicitly allow Legal Monster's cookie-scanner to access your website, you should:
Log in to your Cloudflare account
Select your website, if multiple websites are shown
Click the Firewall menu-item
Click the Firewall Rules tab
Click the Create a Firewall Rule button
Name the new rule, for example "Allow Legal Monster cookie-scanner"
In the "When incoming requests match..." section:
set Field to User Agent
set Operator to contains
in Value, type LegalMonster (exactly as written here; no quotes, no spaces, and capital L and M)
In the "Then..." section:
set Choose an action to Bypass
set Choose a feature to Browser Integrity Check
Click the Save button
Our cookie-scanner should now be able to scan your site. You can see a screenshot of a correctly configured rule here:
This section explains how to review the CVP - Shaker network tests trace logs and results from the Jenkins web UI.
To review the CVP Shaker test results:
Log in to the Jenkins web UI.
Navigate to the build that you want to review.
Find the
shaker-report.html at the top of the Build page.
Download the report and open it.
Click on the scenario of concern for details. A scenario name corresponds
to the scenario file path in the
cvp-shaker source repository.
For example, the
OpenStack L3 East-West scenario name corresponds to
cvp-shaker/scenarios/essential/l3/full_l3_east_west.yaml
Review the performance graphs or errors that appeared during the testing.
For example:
(Optional) To view the log messages produced during the testing by Shaker,
inspect
shaker.log at the bottom of the Build page. | https://docs.mirantis.com/mcp/q4-18/mcp-operations-guide/cvp/cvp-shaker/review-cvp-shaker-test-results.html | 2021-06-12T18:15:05 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.mirantis.com |
2.5.5.6.5 Example: Apply multiple constraints to a single document-type shell
You can apply multiple constraints to a single document-type shell. However, there can be only one constraint for a given element or domain.
Here is a list of constraint modules and what they do:
All of these constraints can be integrated into a single document-type shell for
<topic>, since they constrain distinct element types and domains.
The constraint for the highlighting domain must be integrated before the "DOMAIN ENTITIES"
section, but the order in which the other three constraints are listed does not matter.
Each constraint module provides a unique contribution to the
@domains
attribute. When integrated into the document-type shell for
<topic>,
the effective value of the domains attribute will include the following values, as well as
values for any other modules that are integrated into the document-type shell:
(topic basic-Topic-c) (topic idRequired-section-c) (topic hi-d basic-HighlightingDomain-c) (topic noPh-ph-c) | https://docs.oasis-open.org/dita/dita/v1.3/errata02/os/complete/part2-tech-content/archSpec/base/example-contraints-apply-multiple-constraints.html | 2021-06-12T18:00:31 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.oasis-open.org |
Overview
PHC supports both domain and process ontologies. Domain ontologies are complex, multi-relational knowledge domains, such as a tree of complex cancer information. Process ontologies are simple lists, such as codes and definitions.
Ontologies are obtained through the ontologies marketplace and can be shared at the project, account, or public level. Two public ontologies are available in PHC by default, the SNOMED CT Body Structures and the All LOINC Codes. For more information on the SNOMED CT Body Structures, see the FHIR Value Set. For more information on the LOINC codes, see the LOINC Document Ontology. To access these ontologies in the ontologies marketplace, complete the Load and Use the Default Ontologies procedure.
In addition to loading an ontology from marketplace, users can create a custom ontology, which is a subset of an existing process ontology. The custom ontology is available as a filter option in Subjects and can make your work easier. For example, the complete LOINC codes ontology available in PHC contains close to 100,000 codes. A PHC user may commonly use only twenty of these codes for their specific project. Creating a custom ontology of the twenty needed codes makes selecting codes much more efficient. You can also add or subtract concepts from a custom ontology as your needs change. For more information, see the Create a Custom Ontology procedure.
For more complex ontology creation, check out the LifeOmic sponsored open source project TermLink. It provides an engine for transforming open source terminologies, like SNOMED CT, LOINC, and RxNorm.
To learn more about working with ontologies using the API and the CLI, see the LifeOmic Ontologies Service and the OCR API Reference.
Access Control
Working with ontologies requires the Administration>Account (ABAC accountAdmin) permission and all Data Access permissions, except Read Masked Data. To grant privileges, see Access Control.
Load and Use the Default Ontologies
- On the left side menu, click Ontologies and click Marketplace.
- Click the Process Ontologies tab and the Public checkbox.
- Mouse over the row for the process ontology to reveal the menu and click the icon to add the ontology to your project.
- Click Confirm on the Add ontology to project? dialog.
- To confirm the addition of the ontology, click the Subjects tab on the left side menu.
- Click the Conditions filter and the Browse Ontologies icon.
- Click the Process Ontologies tab to confirm the ontology is loaded and click the ontology row to view the ontology concepts.
- To filter subjects with a concept, click a box to select an ontology concept and click Filter Concepts. To create a permanent filter from your concepts, click Save on the Subjects screen.
Create a Custom Ontology
Note: This procedure creates a custom process ontology, but you can follow the same basic steps under the Domain Ontologies tab to create a custom domain ontology.
- On the left side menu, click Ontologies.
- Click the + Create Process Ontology button.
- On the Domain Ontology form that appears, fill out the relevant fields and click Submit.
- On the Managed Ontologies screen, click + Add Concepts.
- On the Browse Ontology screen, click the parent ontology you want. If no ontologies are available complete the Load and Use the Default Ontologies procedure.
- On the Browse Ontology screen, search and scroll to find the concepts you want to include and click the concept checkboxes.
- Click Add Concepts.
- Click the Marketplace tab on the left side of the screen.
- Click the Process Ontologies tab and mouse over the ontology row to display and click the Add to project icon.
- Click Confirm on the confirmation dialog that appears.
- To confirm the addition of the custom ontology, click the Subjects tab on the left side menu.
- Click the Conditions filter and the Browse Ontologies icon.
- Click the Process Ontologies tab to confirm the custom ontology is loaded and click the ontology row to view the ontology concepts.
- To use a concept as a subjects filter, click a box to select an ontology concept and click Filter Concepts.
Browse the Graph and Tree Views for a Domain Ontology
- On the left side menu, click Ontologies.
- Under the Domain Ontologies tab, click an ontology to open it and view the concepts that make up the ontology.
- Mouse over an ontology concept row to display the View Graph and View Tree icons.
- To see a visual representation of the concept's relationships, click the View Graph icon.
- On the Concept Graph screen, click the concept to reveal the parent and children concepts. Click additional concepts to further explore the ontology.
- To see an outline view of the concept's relationships, click the View Tree icon.
- Click on any toggle symbols to reveal additional information. | https://docs.us.lifeomic.com/user-guides/ontologies/ | 2021-06-12T18:05:48 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.us.lifeomic.com |
UTXOs include the following properties.
- Ether: The amount of ETH.
- Public Key: A Babyjubjub point of the note owner.
- Salt: Random salt; Zkopru generates the UTXO hash using this salt.
- Token Address (optional): The address of the token contract when the note includes ERC20 or ERC721. The default value is 0.
- ERC20 Amount (optional): The amount of the ERC20 token when the note includes ERC20. The default value is 0.
- NFT Id (optional): The id of the ERC721 token when the note includes ERC721. The default value is 0.
Zkopru then computes the leaf hash with the Poseidon hash:
var intermediate_hash = poseidon(ether, pub_key.x, pub_key.y, salt)
var result_hash = poseidon(intermediate_hash, token_address, erc20, nft)
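For illustration, the same two-step computation can be written out in Python. This is only a sketch: poseidon is a stub standing in for a real Poseidon hash implementation, and the field names mirror the property list above rather than Zkopru's actual source code.

from dataclasses import dataclass

def poseidon(a, b, c, d):
    # Stand-in for a 4-input Poseidon hash; plug in a real implementation here.
    raise NotImplementedError

@dataclass
class Utxo:
    ether: int
    pub_key_x: int
    pub_key_y: int
    salt: int
    token_address: int = 0   # 0 unless the note carries an ERC20/ERC721 token
    erc20_amount: int = 0    # 0 unless the note carries ERC20
    nft_id: int = 0          # 0 unless the note carries ERC721

    def hash(self) -> int:
        # First hash the ETH amount, owner public key and salt...
        intermediate = poseidon(self.ether, self.pub_key_x, self.pub_key_y, self.salt)
        # ...then fold in the (optional) token fields.
        return poseidon(intermediate, self.token_address, self.erc20_amount, self.nft_id)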
Base Condition
This block category comprises four block types that are used to determine a graph's execution flow (which yellow connection will fire next). They are:
Boolean Branch,
Decimal Branch,
Integer Branch, and
String Branch.
A Boolean Branch block takes a boolean value as input, and fires one of two yellow outputs depending on the value of the boolean. A Decimal Branch block takes two decimals as input, and has a different yellow output for each of the following scenarios: the first number is greater than the second, the first number is greater than or equal to the second, the two numbers are equal, the first number is less than or equal to the second, and the first number is less than the second. An Integer Branch block works the same way as a Decimal Branch block, except that the two input numbers must be integers rather than decimals. A String Branch block takes two strings as input, and fires one of two yellow outputs depending upon whether the two strings are identical or not.
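To make those comparison scenarios concrete, here is a plain-Python illustration of the outputs a Decimal Branch block is described as distinguishing. This is not GraphLinq code, just a sketch of the semantics above; note that several conditions can hold for the same pair of inputs.

def decimal_branch(first: float, second: float) -> list:
    # Return the label of every output whose condition holds for the pair.
    outputs = []
    if first > second:
        outputs.append("greater")
    if first >= second:
        outputs.append("greater or equal")
    if first == second:
        outputs.append("equal")
    if first <= second:
        outputs.append("less or equal")
    if first < second:
        outputs.append("less")
    return outputs

print(decimal_branch(2.0, 2.0))  # ['greater or equal', 'equal', 'less or equal']
print(decimal_branch(3.5, 1.0))  # ['greater', 'greater or equal']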
Reforming government surveillance
Posted by Brad Smith
General Counsel & Executive Vice President, Legal & Corporate Affairs, Microsoft.
Explorer Guide: Pin that chart!
This applies to v2.25
After you run a query and like the results you’re seeing, use the Pin to Dashboard button in the upper-right corner to save a chart to a dashboard. This lets you save the chart with a meaningful name and be able to come back to the query later and share it with others (if you pin it to a shared dashboard).
What’s next?
Now you know the basics of building a query in Interana. You've learned the basics of filters and how to use them, and the what, why, and how of named expressions in Interana.
And now that we’ve talked about queries, named expressions, and come back to dashboards, you’re ready to start building queries and sharing your results. | https://docs.scuba.io/2/Guides/Explorer_Guide/4_Pin_that_chart! | 2021-06-12T16:52:36 | CC-MAIN-2021-25 | 1623487586239.2 | [array(['https://docs.scuba.io/@api/deki/files/2679/pin_chart_to_dashboard.png?revision=1',
None], dtype=object) ] | docs.scuba.io |
vCenter HA uses SSH keys for password-less authentication between the Active, Passive, and Witness nodes. The authentication is used for heartbeat exchange and file and data replication. To replace the SSH keys in the nodes of a vCenter HA cluster, you disable the cluster, generate new SSH keys on the Active node, transfer the keys to the passive node, and enable the cluster.
Procedure
- Edit the cluster and change the mode to Disabled.
- Log in to the Active node by using the Virtual Machine Console or SSH.
- Enable the bash shell.
bash
- Run the following command to generate new SSH keys on the Active node.
/usr/lib/vmware-vcha/scripts/resetSshKeys.py
- Use SCP to copy the keys to the Passive node and Witness node.
scp /vcha/.ssh/*
- Edit the cluster configuration and set the vCenter HA cluster to Enabled. | https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.avail.doc/GUID-B8E590BA-ACF4-48A1-8644-E492D2241031.html | 2021-06-12T18:43:09 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.vmware.com |
Distributed System and Cache Configuration
To work with your Apache Geode applications, you use a combination of configuration files and application code.
Distributed System Members
Distributed system members are programs that connect to a Geode distributed system. You configure members to belong to a single distributed system, and you can optionally configure them to be clients or servers to members in other distributed systems, and to communicate with other distributed systems.
Geode provides a default distributed system configuration for out-of-the-box systems. To use non-default configurations and to fine-tune your member communication, you can use a mix of various options to customize your distributed system. | https://gemfire.docs.pivotal.io/90/geode/basic_config/config_concepts/chapter_overview.html | 2021-06-12T17:46:26 | CC-MAIN-2021-25 | 1623487586239.2 | [] | gemfire.docs.pivotal.io |
cfme.test_requirements module
Test requirements mapping
This module contains predefined pytest markers for MIQ/CFME product requirements.
Please import the module itself rather than individual elements from it:
from cfme import test_requirements

pytestmark = [test_requirements.alert]

@test_requirements.quota
def test_quota_alert():
    pass
The markers can have metadata kwargs assigned to them. These fields will be parsed for dump2polarion export and will populate the work items in Polarion.
The first argument to the marker definition is the title of the requirement. The assigned name of the marker object is equivalent to the ‘short name’ of the requirement.
Example
Included for quick reference, but potentially inaccurate. See dump2polarion.exporters.requirements_exporter for up-to-date information. The module above converts pythonic keys (assignee_id) into Polarion-compatible keys ('assignee-id'). Supported requirement metadata fields (and example data):
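The field list itself did not survive extraction, but a marker definition carrying metadata kwargs might look roughly like the following. Only assignee_id is attested in the text above; the mark name, the title, and every value here are hypothetical, not the real cfme definitions.

import pytest

# Hypothetical requirement marker: the first positional argument is the title,
# the assigned name ("quota") acts as the short name, and kwargs carry metadata
# that the Polarion export translates (assignee_id -> 'assignee-id').
quota = pytest.mark.requirement(
    "Quota enforcement",
    assignee_id="someuser",
)

@quota
def test_quota_alert():
    pass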
We describe GovBlocks as an open, permissionless decision protocol that empowers dApps to define and operate any governance model at scale. It enables dApp developers to define, incentivize and manage their stakeholders and processes.
GovBlocks is built as a multifactorial governance framework that enables 1) configurable governance models for blockchain applications and 2) creation of a new class of management tools, classified as Decentralized Resource Planning tools for managing resources of a decentralized community.
It provides an enhanced and dynamic governance layer, capable of managing a large, varied and constantly growing range of factors of blockchain networks.
More details can be found in the Whitepaper
GovBlocks architecture is comprised of:
Decision protocol written in solidity with key features of configurable member roles, reputation parameters, financial stake locking, incentives, optimized on-chain voting and decision implementation. dApps can use this protocol to configure governance models on a sliding-scale moving from loosely coupled to tightly coupled governance.
v0.7 of the protocol is available at
Javascript wrapper around GovBlocks Protocol that can be used by businesses to move their decision making (and governance) on blockchain, while keeping the rest of the business off-chain.
Currently at v0.7, it is available as an npm package: govblocksjs 1.1.2.
Applications that use either the GovBlocks protocol or javascript wrapper.
v0.7 released on Kovan Test net -
While this document will act a guide for anyone wishing to define their governance rules using GovBlocks, here is a list of resources for further reading and getting involved with the community:
Ish's post on governance (a 4 min read that gives some insight into our thinking) | https://docs.govblocks.io/wiki/ | 2021-06-12T16:52:37 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.govblocks.io |
Designing Gradle plugins
For beginners to Gradle implementing plugins can look like a daunting task that includes many considerations and deep knowledge: organizing and structuring plugin logic, testing and debugging plugin code as well as publishing the plugin artifact to a repository for consumption.
In this section, you will learn how to properly design Gradle plugins based on established practices and apply them to your own projects. This section assumes you have:
Basic understanding of software engineering practices
Knowledge of Gradle fundamentals like project organization, task creation and configuration as well as the Gradle build lifecycle
Architecture
Reusable logic should be written as binary plugin
The Gradle User Manual differentiates two types of plugins: script plugins and binary plugins. Script plugins are basically just plain old Gradle build scripts with a different name. While script plugins have their place for organizing build logic in a Gradle project, it’s hard to keep them well-maintained, they are hard to test and you can’t define new reusable types in them.
Binary plugins should be used whenever logic needs to be reused or shared across independent projects. They allow for properly structuring code into classes and packages, are cachable, can follow a versioning scheme to enable smooth upgrade procedures and are easily testable.
Consider the impact on performance
As a developer of Gradle plugins you have full freedom in defining and organizing code. Any logic imaginable can be implemented. When designing Gradle plugins always be aware of the impact on the end user. Seemingly simple logic can have a considerable impact on the execution performance of a build. That’s especially the case when code of a plugin is executed during the configuration phase of the build lifecycle e.g. resolving dependencies by iterating over them, making HTTP calls or writing to files. The section on optimizing Gradle build performance will give you additional code examples, pitfalls and recommendations.
As you write plugin code ask yourself whether the code shouldn’t rather be run during the execution phase. If you suspect issues with your plugin code, try creating a build scan to identify bottlenecks. The Gradle profiler can help with automating build scan generation and gathering more low-level information.
Convention over configuration
Convention over configuration is a software engineering paradigm that allows a tool or framework to make an attempt at decreasing the number of decisions the user has to make without losing its flexibility. What does that mean for Gradle plugins? Gradle plugins can provide users with sensible defaults and standards (conventions) in a certain context. Let’s take the Java plugin as an example.
It defines the directory src/main/java as the default source directory for compilation.
The output directory for compiled source code and other artifacts (like the JAR file) is build.
As long as the user of the plugin does not prefer to use other conventions, no additional configuration is needed in the consuming build script. It simply works out-of-the-box. However, if the user prefers other standards, then the default conventions can be reconfigured. You get the best of both worlds.
In practice you will find that most users are comfortable with the default conventions until there’s a good reason to change them e.g. if you have to work with a legacy project. When writing your own plugins, make sure that you pick sensible defaults. You can find out if you did pick sensible conventions for your plugin if you see that the majority of plugin consumers don’t have to reconfigure them.
Let’s have a look at an example for conventions introduced by a plugin. The plugin retrieves information from a server by making HTTP calls. The default URL used by the plugin is configured to point to a server within an organization developing the plugin:. A good way to make the default URL configurable is to introduce an extension. An extension exposes a custom DSL for capturing user input that influences the runtime behavior. The following example shows such a custom DSL for the discussed example:
plugins {
    id 'org.myorg.server'
}

server {
    url = ''
}
plugins {
    id("org.myorg.server")
}

server {
    url.set("")
}
As you can see, the user only declares the "what" - the server the plugin should reach out to. The actual inner workings - the "how" - is completely hidden from the end user.
Capabilities vs. conventions
The functionality brought in by a plugin can be extremely powerful but also very opinionated. That’s especially the case if a plugin predefines tasks and conventions that a project inherits automatically when applying it. Sometimes the reality that you - as plugin developer - choose for your users might simply look different than expected. For that very reason you need to make a plugin as flexible and configurable as possible.
One way to provide these quality criteria is to separate capabilities from conventions. In practice that means separating general-purpose functionality from pre-configured, opinionated functionality. Let’s have a look at an example to explain this seemingly abstract concept. There are two Gradle core plugins that demonstrate the concept perfectly: the Java Base plugin and the Java plugin.
The Java Base plugin just provided un-opinionated functionality and general purpose concepts. For example it formalized the concept of a SourceSet and introduces dependency management configurations. However, it doesn’t actually create tasks you’d use as a Java developer on a regular basis nor does it create instances of source set.
The Java plugin applies the Java Base plugin internally and inherits all its functionality. On top, it creates source set instances like main and test, and creates tasks well-known to Java developers like classes, jar or javadoc. It also establishes a lifecycle between those tasks that makes sense for the domain.
The bottom line is that we separated capabilities from conventions. If a user decides that he doesn't like the tasks created, or doesn't want to reconfigure a lot of the conventions because that's not how the project is structured, then he can just fall back to applying the Java Base plugin and take matters into his own hands.
You should consider using the same technique when designing your own plugins. You can develop both plugins within the same project and ship their compiled classes and identifiers with the same binary artifact. The following code example shows how to apply a plugin from another one, so-called plugin composition:
import org.gradle.api.Plugin;
import org.gradle.api.Project;

public class MyBasePlugin implements Plugin<Project> {
    public void apply(Project project) {
        // define capabilities
    }
}
import org.gradle.api.Plugin;
import org.gradle.api.Project;

public class MyPlugin implements Plugin<Project> {
    public void apply(Project project) {
        project.getPlugins().apply(MyBasePlugin.class);

        // define conventions
    }
}
For inspiration, here are two open-source plugins that apply the concept:
Technologies
Prefer using a statically-typed language to implement a plugin
Gradle doesn’t take a stance on the programming language you should choose for implementing a plugin. It’s a developer’s choice as long as the plugin binary can be executed on the JVM.
It is recommended to use a statically-typed language like Java or Kotlin for implementing plugins to decrease the likelihood of binary incompatibilities. Should you decide on using Groovy for your plugin implementation then it is a good choice to use the annotation @groovy.transform.CompileStatic.
Restricting the plugin implementation to Gradle’s public API
To be able to build a Gradle plugin you’ll need to tell your project to use a compile-time dependency on the Gradle API. Your build script would usually contain the following declaration:
dependencies {
    implementation gradleApi()
}
dependencies {
    implementation(gradleApi())
}
It’s important to understand that this dependency includes the full Gradle runtime. For historical reasons, public and internal Gradle API have not been separated yet.
To ensure the best backward and forward compatibility with other Gradle versions you should only use the public API. In most cases it will support the use case you are trying to support with your plugin. Keep in mind that internal APIs are subject to change and can easily break your plugin from one Gradle version to another. Please open an issue on GitHub if you are looking for a public API that is currently internal-only.
How do you know if a class is part of the public API? If you can find the class referenced in the DSL guide or the Javadocs then you can safely assume that it is public. In the future, we are planning to clearly separate public from internal API which will allow end users to declare the relevant dependency in the build script.
Minimizing the use of external libraries
As application developers we have become quite accustomed to the use of external libraries to avoid having to write fundamental functionality.
You likely do not want to go without your beloved Guava or HttpClient library anymore.
Keep in mind that some of the libraries might pull in a huge graph of transitive dependencies when declared through Gradle’s dependency management system.
The dependency report does not render dependencies declared for the
classpath configuration of the build script, effectively the classpath of the declared plugins and their transitive dependencies.
However, you can call the help task
buildEnvironment to render the full dependency graph.
To demonstrate the functionality let’s assume the following build script:
plugins {
    id 'org.asciidoctor.jvm.convert' version '3.2.0'
}
plugins {
    id("org.asciidoctor.jvm.convert") version "3.2.0"
}
The output of the task clearly indicates the classpath of the
classpath configuration:
$ gradle buildEnvironment

> Task :buildEnvironment

------------------------------------------------------------
Root project 'external-libraries'
------------------------------------------------------------

classpath
\--- org.asciidoctor.jvm.convert:org.asciidoctor.jvm.convert.gradle.plugin:3.2.0
     \--- org.asciidoctor:asciidoctor-gradle-jvm:3.2.0
          +--- org.ysb33r.gradle:grolifant:0.16.1
          |    \--- org.tukaani:xz:1.6
          \--- org.asciidoctor:asciidoctor-gradle-base:3.2.0
               \--- org.ysb33r.gradle:grolifant:0.16.1 (*)

(*) - dependencies omitted (listed previously)

A web-based, searchable dependency report is available by adding the --scan option.

BUILD SUCCESSFUL in 0s
1 actionable task: 1 executed
It’s important to understand that a Gradle plugin does not run in its own, isolated classloader. In turn those dependencies might conflict with other versions of the same library being resolved from other plugins and might lead to unexpected runtime behavior. When writing Gradle plugins consider if you really need a specific library or if you could just implement a simple method yourself. A future version of Gradle will introduce proper classpath isolation for plugins. | https://docs.gradle.org/current/userguide/designing_gradle_plugins.html | 2021-06-12T17:27:02 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.gradle.org |