A variety of vegetables for your scenes. Set comes with bell peppers, carrots, onions, pickles, potatoes and tomatoes and two bowls of veggies and a stack of carrots for ease of placement.
Use them in your kitchens, your living rooms, have your characters eat them. Versatile and useful for any occasion where veggies might be.
AREDN® Overview
The AREDN® acronym stands for “Amateur Radio Emergency Data Network” and it provides a way for Amateur Radio operators to create high-speed ad hoc data networks for use in emergency and service-oriented communications.
For many years amateur radio operators and their served agencies have relied on voice transmissions for emergency or event communications. A typical message-passing scenario involved conveying the message to a radio operator who would write or type it onto a standard ICS-213 form. The message would then be relayed by radio to another operator who would write or type it on another ICS-213 form at the receiving end. The form would typically be hand-delivered to the recipient who would read and sign the form. Any acknowledgement or reply would then be handled through the same process from the receiving end back to the originator.
This tried-and-true scenario has worked well, and it continues to work for handling much emergency and event traffic. Today, however, digital transmission is increasingly used in place of these traditional methods and procedures. The hardcopy ICS-213 form is giving way to the Winlink electronic form, with messages being passed using digital technologies such as AX.25 packet, HF Pactor, Fldigi, and others.
In today’s high-tech society people have become accustomed to different ways of handling their communication needs. The preferred methods involve short messaging and keyboard-to-keyboard communication, along with audio-video communication using Voice over IP (VoIP) and streaming technologies.
The amateur radio community is able to meet these high-bandwidth digital communication requirements by using FCC Part 97 amateur radio frequency bands to send digital data between devices which are linked with each other to form a self-healing, fault-tolerant data network. Some have described this as an amateur radio version of the Internet. Although it is not intended for connecting people to the Internet, an AREDN® mesh network will provide typical Internet or intranet-type applications to people who need to communicate across a wide area during an emergency or community event.
An AREDN® network is able to serve as the transport mechanism for the preferred applications people rely upon to communicate with each other in the normal course of their business and social interactions, including email, chat, phone service, document sharing, video conferencing, and many other useful programs. Depending on the characteristics of the AREDN® implementation, this digital data network can operate at near-Internet speeds with many miles between network nodes.
The primary goal of the AREDN® project is to empower licensed amateur radio operators to quickly and easily deploy high-speed data networks when and where they might be needed, as a service both to the hobby and the community. This is especially important in cases when traditional “utility” services (electricity, phone lines, or Internet services) become unavailable. In those cases an off-grid amateur radio emergency data network may be a lifeline for communities impacted by a local disaster. | https://arednmesh.readthedocs.io/en/latest/arednGettingStarted/aredn_overview.html | 2020-05-25T02:08:48 | CC-MAIN-2020-24 | 1590347387155.10 | [] | arednmesh.readthedocs.io |
Datastore 2.3.0 User Guide

Manage privileges in Datastore

Overview

PassPort manages and controls user accounts and privileges for the Datastore platform. Datastore connects to PassPort to control:

- User authentication
- User rights on Datastore resources:
  - Collections and Objects
  - Folders
  - Editors
  - Specific functions such as edit, status changes, purge and so on

If the authentication and access management with PassPort has been activated for the product instance, you can use the PassPort user interface to define:

- Users and groups of users
- Roles that define a list of privileges. A user or a group of users is associated with one or several roles.
- Privileges that authorize actions on resources when conditions are verified.

About roles

A role groups several privileges and roles, so that they can be granted to and revoked from users simultaneously. A role must be enabled for a user before it can be used by the user.

About privileges

A privilege is a right to execute a set of actions on a Datastore resource when some conditions are met. Before you start creating privileges, you must have previously published in the Repository the Datastore resources and their configuration, as well as the actions and properties used in the conditions.

About Datastore resources

Datastore resources for Axway Designer are:

- Object Types
- Collection Types
- Folders
- Editors
- Statuses
- Administration actions

Datastore resources for Datastore Client are Folders and Queries.

Resources are not statically defined once and for all. If you create a new Collection Type that defines a Domain property, you may want to register permissions using this Domain property. For instance, a user group with Role1 will be authorized to access the collection from "domain1" while other groups will not.

About administration resources

The user access to the Administration resources can be configured and restricted in PassPort. The default CSD exposes an administrator role which has all authorizations, in order to be able to update the configurations and view the flow executions:

- Actions:
  - VIEW - View the details of the Administration resources.
  - UPDATE - Create, modify, delete resources.
- Properties:
  - TOPIC, with possible values: Components, Applications, Flows, Administration

The business user role is authorized to view event statuses and reports generated by the transformation.

Related topics

- Create a privilege
- Assign a privilege to a role
- Create specific resources
- Use the predefined Designer resources
- Use the predefined Datastore Client resources
- Use the predefined roles
Overview & Server Requirements
Novice: Novice tutorials require no prior knowledge of any specific web programming language.
This theme is compatible with WooCommerce (one of the most popular WordPress eCommerce plugins). What this means is that we've created and styled all the necessary page templates that the plugin uses:
- Product listing
- Product detail page
- Account
- Cart
If you choose to start with one of our demos, the WooCommerce plugin will be installed by default. If you choose to start from scratch and not install one of our demos, you'll need to install WooCommerce.
1.3.1 Release Notes
Bug Fixes
Issue querying HBase tables from Hive
Querying HBase tables from Hive in Dremio would fail in some cases. This is now fixed. Users need to copy hbase-site.xml from HBase into Dremio's /conf location.
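A minimal sketch of that step, assuming HBase's client configuration lives in /etc/hbase/conf and Dremio is installed under /opt/dremio (both paths are assumptions; adjust them to your environment):

# Copy HBase's client configuration into Dremio's conf directory, then restart the Dremio service.
cp /etc/hbase/conf/hbase-site.xml /opt/dremio/conf/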
Improved error messages when working with Hive
Errors when reading data from Hive sources will now include more context.
Issue with data type changes when reading partitioned Hive tables
Changes in the data types for Hive tables would sometimes result in failed queries. Dremio now better handles different schemas across partitions.
1.3.0 Release Notes
Enhancements
Acceleration
Improved reflection profiles
Query profiles now include more detailed information about reflections such as names of reflections, what reflections were considered, matched and chosen, details for the best cost query plan and canonicalized user query.
Improved reflection matching logic when working with multiple tables
Matching performance and reflection coverage has been increased when querying multiple datasets that have multiple reflections defined.
Execution
Improved memory profiling
Dremio now records more details about memory usage. Information on peak amount of memory across phases per node is now available.
Better thread scheduling when some cores are idle
Dremio now better handles scheduling threads when some of the cores are idle. This option is disabled by default in this release. The
debug.task.on_idle_load_shed flag can be used to enable this option, followed by restarting all the execution nodes.
Performance improvements working with NULLS in Arrow
This update reduces the amount of heap churn when interacting with validity vectors for all data types and provides better performance working with NULL values.
Ability to download Parquet in Dremio UI
Datasets can now be downloaded as Parquet files, which will preserve all type information. This option respects the 1,000,000 row system-wide download limit.
Support for byte-order-marks (BOM) for text files
BOMs are now recognized when reading text files.
Coordination and metadata
Tableau for Mac support
Adds support for Tableau on Mac with Dremio ODBC Connector. Requires Tableau 10.4 or higher and Dremio Connector 1.3.14 or higher installed on the machine.
Metadata store maintenance utility
The
dremio-admin utility now has a
clean action that can be used to compact the metadata store, delete orphan objects, delete jobs based on age and reindex the data.
Web Application
Improvements to Job information
Job information will now automatically refresh. New queries will also give detailed information about which Data Reflections were used, and which were not used.
Safari Support (experimental)
Dremio now supports Safari, starting with Safari 11.
SQL editor improvements
The SQL Editor now shows line numbers, has better insertion of fields, datasets, and functions (including "snippets"; tokenized arguments).
REST API for Sources
Dremio now has a public REST API for managing sources.
Bug Fixes
Acceleration
Windows queries fail if any reflection is chosen
Fixed issue with acceleration when using certain window function patterns.
Reflection field list incorrectly shows fields as having mixed type
Fixed various bugs affecting dataset schema information when working with reflections.
Reflections on datasets from RDBMS sources are immediately marked as expired
Fixed issue where reflections on datasets from RDBMS sources are marked as expired right after creation.
MaterializationTask fails to get the TTL of JDBC queries
Fixed bugs that were preventing reflections on JDBC datasets to be properly refreshed.
Left outer join queries not getting accelerated
Fixed issue where left outer join queries were not getting accelerated with certain query patterns.
Partial raw materializations are not matched when doing a join that requires only available columns
Updated acceleration logic to leverage raw reflections in a larger set of scenarios.
Substitution fails to flatten the array and gives wrong results
Fixed various bugs when using queries with
flatten function against datasets with reflections.
Handle "in-progress" Materialization tasks on startup
If the cluster is restarted while reflection materialization tasks are running, we make sure to mark those materialization as failed. This prevents issues with reflection maintenance after cluster restarts.
Coordination and Metadata
Use of binary collation with SQL Server
Pushdowns with string comparisons in SQL Server are now using a binary collation, consistent with Dremio's own collation.
String data from SQL server is trimmed
String comparisons in SQL Server ignore trailing spaces. For consistent behavior in Dremio, string data fetched by Dremio from SQL Server is trimmed of trailing spaces so that comparisons with other systems are consistent.
Edit original SQL fails after 2 or more transforms applied on virtual dataset
This should now work as expected.
Get error on Exclude when selecting "1970-01-01 00:00:00.000" date & time
Users are now able to select time within 100 ms boundary of Unix epoch.
SPLIT_PART() throws an 'IndexOutOfBoundsException'
SPLIT_PART() function can now handle multiple parts.
Issue with different metadata refresh intervals
Although Dremio has two settings for the refresh rate of names vs. dataset definitions, the name-only refresh was not working as expected for some sources, and Dremio would always update the full dataset definitions. The individual settings are now observed for all sources. Moreover, when a source is added, Dremio only needs to find the dataset names before the UI allows the user to continue. The full set of metadata is refreshed in the background.
JDBC date/time issue
In certain scenarios, date/time values returned to JDBC clients could be off by one. This issue is now fixed.
Execution
Proxy settings for S3 are ignored
Attempting to set up an S3 source through a proxy would fail in Dremio. This behavior is now fixed -- Dremio will correctly propagate all the proxy settings to the S3 client.
Avoid repeated object creation in reading/writing column data
The in-memory data structures in Arrow provide a read-only and write-only view of memory through accessor and mutator interfaces respectively. In our heap analysis, we noticed a bug where the volume of mutator objects was close to 66 million. The reason was that every time we asked for a mutator or accessor, a new object was created on the heap. The fix resolves the problem.
Update default value of max width per node to be average number of cores across all executor nodes
Dremio has an external option “MAX_WIDTH_PER_NODE” to tune the degree of parallelism we use during the execution of a query. The default value of this parameter used to be 70% of the number of cores on a particular node. We have now changed the default value of this option to consider the number of cores across all executor nodes in the Dremio cluster.
Null values in Complex data types were not correctly handled by WRITER operator
Dremio’s writer operator was not able to handle NULL values in complex/nested types. The fix resolves the problem.
Reduce heap usage in Parquet reader
The fix changes the code to use extremely lightweight (less heap overhead) and more efficient data structures in the critical path of Parquet reader code. Similar changes were also done for auxiliary structures we use in our implementation of hash join / hash agg operators.
Fix over-allocation of memory in our columnar data structures
In Dremio, all data is nullable, so we use an auxiliary structure to track the NULL or non-NULL nature of cell values in a particular column. The problem was that we were over-allocating (8x) the memory for the auxiliary structure. The fix resolves the problem.
[CE] Introducing the extensions
Currently, ClassifiedEngine is supported by these extensions. You can decide whether to use them based on your needs.
- CE Shop: Transform your site to an E-commerce marketplace
- CE Coupon: Create your discount codes for your package plans
- CE PayPal Express: Allow payment directly on site by integrating PayPal Express payment gateway
- CE Ad Roll: Allow you to display your ads in any websites
- CE Ad Alert : Allow users to subscribe for new ad notification emails
- CE Paymill: Allow payment directly on site by integrating Paymill payment gateway
- CE Stripe: Allow payment directly on site by integrating Stripe payment gateway
- CE Ad Map: Easily add maps to your classified ad site to display your active ads
- CE Custom fields: Add more fields to your ad posting form for more ad data
- CE eBay: Fill your fresh classifieds site in no time by importing ads from eBay.com
- ET Mailing: Help you to send, receive, track and store emails effortlessly.
<startup> Element
Specifies common language runtime startup information.
<configuration> Element
<startup> Element
<startup useLegacyV2RuntimeActivationPolicy="true|false" > </startup>
Attributes and Elements
The following sections describe attributes, child elements, and parent elements.
Attributes
useLegacyV2RuntimeActivationPolicy Attribute
Child Elements
Parent Elements
Remarks
Note
Setting the attribute to true prevents CLR version 1.1 or CLR version 2.0 from loading into the same process, effectively disabling the in-process side-by-side feature (see Side-by-Side Execution for COM Interop).
Example
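The code sample for this section did not survive extraction; a typical configuration file using this element looks like the following sketch (the supportedRuntime values are illustrative, so target the runtime version your application actually needs):

<configuration>
  <startup useLegacyV2RuntimeActivationPolicy="true">
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/>
  </startup>
</configuration>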
See Also
Reference
Concepts
Specifying Which Runtime Version to Use
In-Process Side-by-Side Execution
Other Resources
Configuration File Schema for the .NET Framework
Side-by-Side Execution for COM Interop
Bravo, like all Minecraft terrain generators, relies heavily on randomness to
generate its terrain. In order to understand some of the design decisions in
the terrain generator, it is required to understand noise and its various
properties.
Noise’s probability distribution is not even, equal, or normal. It is
symmetric about 0, meaning that the absolute value of noise has all of the
same relative probabilities as the entire range of noise.
When binned into a histogram with 100 bins, a few bins become very large.
The real-time API is the primary method for interacting with devices managed by ACA Engine. It uses a WebSocket connection to allow you to build efficient, responsive user interfaces, monitoring systems and other extensions which require live, two-way or asynchronous interaction.
If you are building browser-based experiences we have a pre-built AngularJS client library ready to go:
Otherwise, if you are working with other frameworks, or would like to build your own, read on.
A connection to the real-time API can be established by requesting the /control/websocket endpoint with a valid access token. The method for this will vary depending on the tooling used for your app, but as a simple example in JavaScript this can be achieved by creating a new WebSocket object:
let socket = new WebSocket('wss://aca.example.com/control/websocket?bearer_token=<access token>');
When opened, this will provide a full-duplex stream for communications.
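As a minimal sketch of what comes next, the standard WebSocket event handlers can be attached to the connection. The payload structure is specific to your ACA Engine deployment, so the JSON handling below is an assumption:

socket.onopen = function () {
    console.log('Connected to the real-time API');
    // Bind to modules or send commands here.
};

socket.onmessage = function (event) {
    // Assumes the engine sends JSON-encoded messages.
    var message = JSON.parse(event.data);
    console.log('Received', message);
};

socket.onclose = function () {
    console.log('Connection closed; reconnect with a fresh access token if required');
};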
Upload theme via WordPress
- Step 1 – Navigate to Appearance > Themes.
- Step 2 – Click "Add New"
- Step 3 – Click “Upload Theme”
- Step 4 – Click “Choose File”
- Step 5 – Navigate to find the “virtue_premium_vx_x_x.zip” file on your computer and click “Install Now” button.
- Step 6 – Activate the newly installed theme.
Troubleshooting Notes:
If when uploading you get this error: "the uploaded file exceeds the upload_max_filesize directive in php.ini"
This means your server's max upload size is smaller than it should be. Most hosts can help you increase this setting and we highly suggest going that route. Otherwise, if you can't increase the max upload size of your PHP settings, you can upload via FTP (see below).
Upload Theme via FTP client
- Step 1 – Log into your hosting space via an FTP software
- Step 2 – Unzip the virtue_premium_vx_x_x.zip file.
- Step 3 – Upload the “virtue_premium” theme folder into wp-content > themes in your wordpress installation
- Step 4 – Activate the newly installed theme. Go to Appearance > Themes and activate the installed theme.
Free FTP Client
Get FileZilla here: https://filezilla-project.org/
Monitoring Channels
Web Safety can store information about each filtered request/response in four possible channels. The following table briefly describes each channel.
- Real Time
- Stores last 10000 monitoring events in memory-only SQLite database for quick access from the UI. Primarily designed to give a short overview of what is being browsed through proxy at the moment.
- Database
- Stores monitoring events in a database, either SQLite (for home deployments) or MySQL / Maria DB (for enterprise deployments). Traffic reports are built based on this database.
- Access Log
- Stores each monitoring event in a file at /opt/websafety/var/log/access.log. This channel is disabled by default.
- Syslog
- Stores each monitoring event in syslog. Syslog may be configured to gather monitoring events from all Web Safety servers in a central log processing system. This channel is disabled by default.
PidTagNextSendAcct Canonical Property
Applies to: Outlook 2013 | Outlook 2016
Specifies the server that a client is currently attempting to use to send email.
Remarks
The format of this property is implementation dependent. This property can be used by the client to determine which server to direct the email to, but is optional and the value has no meaning to the server.
Related resources
Protocol specifications
Provides references to related Exchange Server protocol specifications.
Converts between IETF RFC2445, RFC2446, and RFC2447, and appointment and meeting objects.
Specifies the properties and operations that are permissible for email message objects.
Header files
Mapidefs.h
Provides data type definitions.
Mapitags.h
Contains definitions of properties listed as alternate names.
See also
MAPI Canonical Properties
Mapping Canonical Property Names to MAPI Names
Mapping MAPI Names to Canonical Property Names
Azure Storage Services REST API Reference

The REST APIs for the Microsoft Azure storage services offer programmatic access to the Blob, Queue, Table, and File services.

Blob Service

The Blob service stores text and binary data as blobs. Page blobs, available with version 2009-09-19 and later, are primarily used for the VHD files backing the Azure VMs. Append blobs, which are optimized for append operations only, are available only with version 2015-02-21 and later.

Queue Service

The Queue service stores messages for asynchronous processing. A single message can be up to 64 KB in size for version 2011-08-18 and later, and 8 KB for previous versions.
When a message is read from the queue, the consumer is expected to process the message and then delete it. After the message is read, it is made invisible to other consumers for a specified interval. If the message has not yet been deleted at the time the interval expires, its visibility is restored, so that another consumer may process it.
For more information about the Queue service, see Queue Service REST API.
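As a rough illustration of that read-then-delete cycle using the documented Get Messages and Delete Message operations (the account and queue names are placeholders, and authentication headers are omitted):

GET https://myaccount.queue.core.windows.net/myqueue/messages
    (returns the message text along with a MessageId and a PopReceipt)

DELETE https://myaccount.queue.core.windows.net/myqueue/messages/{MessageId}?popreceipt={PopReceipt}
    (removes the message once it has been processed successfully)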
Table Service
The Table service provides structured storage in the form of tables. The Table service supports a REST API that implements the OData protocol.
Within a storage account, a developer may create one or more tables.
File Service
The File service exposes file shares in the cloud that can be accessed with the standard SMB protocol. The Server Message Block (SMB) protocol is the preferred file share protocol used on-premises today.
- Blob Service REST API
- Queue Service REST API
- Table Service REST API
- File Service REST API
What's new in RapidMiner Server 8.2
Real-Time Scoring
RapidMiner has developed a new back-end capability designed for instantaneous decision making. With Real-Time Scoring, you now have a way to predict at large scale, and with very low latency, how your customers behave, when your industrial parts will break, or what kind of risk a decision entails, and you can make that information actionable immediately. The RapidMiner platform already provides two ways to automate predictions:
- Batch use cases: Job scheduling in RapidMiner Server
- Online use (embedding in another application): RapidMiner Server Web Services
Now we add a third and very powerful way: low latency (<25ms), high throughput web services provided by Real-Time Scoring. If there's a need for a fast and accurate answer, this is the right component to use.
Better process monitoring
During the execution of jobs, the Execution Details view displays the status of the process, operator by operator, allowing users to follow the executions and monitor their progress.
Easier process list management
We have added several filters to the process list to make it more practical to use. No matter how long your process list is you can filter based on timeline, queue and duration.
The following pages describe the enhancements and bug fixes in RapidMiner Server 8.2 releases:
Service Bus authentication and authorization
Applications gain access to Azure Service Bus resources using Shared Access Signature (SAS) token authentication. With SAS, applications present a token to Service Bus that has been signed with a symmetric key known both to the token issuer and Service Bus (hence "shared") and that key is directly associated with a rule granting specific access rights, like the permission to receive/listen or send messages. SAS rules are either configured on the namespace, or directly on entities such as a queue or topic, allowing for fine grained access control.
SAS tokens can either be generated by a Service Bus client directly, or they can be generated by some intermediate token issuing endpoint with which the client interacts. For example, a system may require the client to call an Active Directory authorization protected web service endpoint to prove its identity and system access rights, and the web service then returns the appropriate Service Bus token. This SAS token can be easily generated using the Service Bus token provider included in the Azure SDK.
Important
If you are using Azure Active Directory Access Control (also known as Access Control Service or ACS) with Service Bus, note that the support for this method is now limited and you should migrate your application to use SAS. For more information, see this blog post and this article.
Shared Access Signature authentication
SAS authentication enables you to grant a user access to Service Bus resources, with specific rights. SAS authentication in Service Bus involves the configuration of a cryptographic key with associated rights on a Service Bus resource. Clients can then gain access to that resource by presenting a SAS token, which consists of the resource URI being accessed and an expiry signed with the configured key.
You can configure keys for SAS on a Service Bus namespace. The key applies to all messaging entities within that namespace. You can also configure keys on Service Bus queues and topics. SAS is also supported on Azure Relay.
To use SAS, you can configure a SharedAccessAuthorizationRule object on a namespace, queue, or topic. This rule consists of the following elements:
- KeyName: identifies the rule.
- PrimaryKey: a cryptographic key used to sign/validate SAS tokens.
- SecondaryKey: a cryptographic key used to sign/validate SAS tokens.
- Rights: represents the collection of Listen, Send, or Manage rights granted.
Authorization rules configured at the namespace level can grant access to all entities in a namespace for clients with tokens signed using the corresponding key. You can configure up to 12 such authorization rules on a Service Bus namespace, queue, or topic. By default, a SharedAccessAuthorizationRule with all rights is configured for every namespace when it is first provisioned.
To access an entity, the client requires a SAS token generated using a specific SharedAccessAuthorizationRule. The SAS token is generated using the HMAC-SHA256 of a resource string that consists of the resource URI to which access is claimed, and an expiry with a cryptographic key associated with the authorization rule.
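As a sketch of what that signing step looks like outside the SDKs, the following Python snippet builds a token in the standard sr/sig/se/skn layout. The namespace, key name, and key are placeholders:

import base64
import hashlib
import hmac
import time
import urllib.parse

def generate_sas_token(resource_uri, key_name, key, ttl_seconds=3600):
    # The expiry is expressed as seconds since the Unix epoch.
    expiry = int(time.time() + ttl_seconds)
    encoded_uri = urllib.parse.quote_plus(resource_uri)
    string_to_sign = encoded_uri + '\n' + str(expiry)
    signature = base64.b64encode(
        hmac.new(key.encode('utf-8'), string_to_sign.encode('utf-8'), hashlib.sha256).digest())
    return 'SharedAccessSignature sr={}&sig={}&se={}&skn={}'.format(
        encoded_uri, urllib.parse.quote_plus(signature), expiry, key_name)

token = generate_sas_token('https://contoso.servicebus.windows.net/myqueue',
                           'RootManageSharedAccessKey', '<primary key>')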
SAS authentication support for Service Bus is included in the Azure .NET SDK versions 2.0 and later. SAS includes support for a SharedAccessAuthorizationRule. All APIs that accept a connection string as a parameter include support for SAS connection strings.
Next steps
- Continue reading Service Bus authentication with Shared Access Signatures for more details about SAS.
- How to migrate from Azure Active Directory Access Control (ACS) to Shared Access Signature authorization.
- Changes To ACS Enabled namespaces.
- For corresponding information about Azure Relay authentication and authorization, see Azure Relay authentication and authorization.
vprintf_s, _vprintf_s_l, vwprintf_s, _vwprintf_s_l
Writes formatted output by using a pointer to a list of arguments. These versions of vprintf, _vprintf_l, vwprintf, _vwprintf_l have security enhancements, as described in Security Features in the CRT.
Syntax
int vprintf_s( const char *format, va_list argptr ); int _vprintf_s_l( const char *format, locale_t locale, va_list argptr ); int vwprintf_s( const wchar_t *format, va_list argptr ); int _vwprintf_s_l( const wchar_t *format, locale_t locale, va_list argptr );
Parameters
format
Format specification.
argptr
Pointer to list of arguments.
locale
The locale to use.
For more information, see Format Specifications.
Return Value

vprintf_s and vwprintf_s return the number of characters written, not including the terminating null character, or a negative value if an output error occurs.

Remarks
Important
Ensure that format is not a user-defined string. For more information, see Avoiding Buffer Overruns.
Generic-Text Routine Mappings
Requirements
* Required for UNIX V compatibility.
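A short usage sketch: a variadic helper that forwards its argument list to vprintf_s. Note that the format string is a literal controlled by the program, per the Important note above.

// crt_vprintf_s_sketch.c
#include <stdio.h>
#include <stdarg.h>

static void log_message(const char *format, ...)
{
    va_list args;
    va_start(args, format);
    vprintf_s(format, args);
    va_end(args);
}

int main(void)
{
    log_message("%d items processed in %s\n", 42, "main");
    return 0;
}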
VMware Workstation
The following sections describe how to import and set up a two-leaf/two-spine Cumulus VX topology with VMware Workstation.
These sections assume a basic level of VMware Workstation experience. For detailed instructions, refer to the VMware Workstation documentation.
Create a Cumulus VX VM with VMware Workstation
This section assumes that you have downloaded the Cumulus VX disk image for VMware hypervisors and that VMware Workstation is installed. For more download locations and steps, refer to the Getting Started page.
Open VMware Workstation and click File > Open… to open the virtual machine wizard.
Click the Choose File… button, select the downloaded OVA, then click Open.
In the text box, edit the name of the VM to CumulusVX-leaf1 and assign the directory location to save the imported VM.
By default, the VM is saved in the ~\Documents\Virtual Machines\ folder.
Click Import to start the import process. This might take a few seconds.
Click Edit virtual machine settings and configure the network adapter settings:
- Network Adapter (1): NAT
- Network Adapter 2: Host-only (equivalent to Internal Network)
- Network Adapter 3: Host-only (equivalent to Internal Network)
- Network Adapter 4: Host-only (equivalent to Internal Network)
Next Steps
This section assumes that you are configuring a two-leaf/two-spine
network topology, that you have completed the steps in
Create a Cumulus VX VM with VMware Workstation
above, and that you now have a VM called
CumulusVX-leaf1.
The two-leaf/two-spine network topology requires four Cumulus VX VMs to be created. Using the Snapshot Manager, clone the virtual machine (Ctrl + M) three times to create three additional VMs, replacing the name CumulusVX-leaf1 with:
CumulusVX-leaf2
CumulusVX-spine1
CumulusVX-spine2
After you have created all four VMs, follow the steps in Create a Two-Leaf, Two-Spine Topology to configure the network interfaces and routing.
"operjoin" Module
Description
This module allows the server administrator to force server operators to join one or more channels when logging into their server operator account.
Configuration
To load this module use the following
<module> tag:
<module name="m_operjoin.so">
<oper> &
<type>
This module extends the core
<oper> and
<type> tags with the following fields:
Example Usage
Forces Sadie to join #example1 and #example2 when logging into their server operator account:
<oper name="Sadie" ...
Forces server operators of type NetAdmin to join #example1 and #example2 when logging into their server operator account:
<type name="NetAdmin" ...
<operjoin>
The
<operjoin> tag defines settings about how the operjoin module should behave. This tag can only be defined once.
Example Usage
Forces all server operators to join #example when logging into their server operator account:
<operjoin channel="#example" override="yes">
Remove-SPEnterpriseSearchTenantSchema
Syntax
Remove-SPEnterpriseSearchTenantSchema [-Identity] <TenantSchemaPipeBind> [-AssignmentCollection <SPAssignmentCollection>] [-Confirm] [-SearchApplication <SearchServiceApplicationPipeBind>] [-SiteCollection <Guid>] [-WhatIf] [<CommonParameters>]
Description
This cmdlet removes a search schema. Use this cmdlet to remove an unused or unwanted search schema.
For permissions and the most current information about Windows PowerShell for SharePoint Products, see the online documentation at SharePoint Server Cmdlets.
Examples
------------------EXAMPLE------------------
$ssa = Get-SPEnterpriseSearchServiceApplication [Guid]$guid = "909b84cb-90f2-4a1b-8df4-22547a9b2227" Remove-SPEnterpriseSearchTenantSchema -Identity $guid -SearchApplication $ssa
This example removes the search schema for the tenant with GUID 909b84cb-90f2-4a1b-8df4-22547a9b2227.

Parameters

-Identity

Specifies the tenant of the search schema to be removed.
The type must be a valid GUID, in string form, that identifies the tenant in the form 12345678-90ab-cdef-1234-567890bcdefgh.
The tenant GUID can be found in the Search Service Application database, in the folder \Databases\Search_Service_Application\Tables\dbo.MSSTenant.
-SearchApplication

Specifies the search application that contains the enterprise search schema to be removed.
The type must be a valid search application name (for example, SearchApp1), or an instance of a valid SearchServiceApplication object.
-SiteCollection

Specifies that the search schema to be removed is within the scope of a site collection (SPSite).
Introduction
- Defining your school's scheduling rotation
- Creating courses, their corresponding sections, and study halls
- Building student schedules
Using the Schedule Report Writer, you can also generate and print customized reports, such as schedules and section rosters that include pertinent demographic information.
This guide covers some of the main tasks you'll perform in the Scheduling module.
Radare2 module for Yara.
Yara Modules
Modules are the way Yara provides for extending its features. They allow you to define data structures and functions which can be used in your rules to express more complex conditions. There are some modules (PE, ELF, Cuckoo, Math, etc.) officially distributed with Yara, but you can also write your own modules.
Radare2
Radare2 is a strong open-source reversing framework that, among many other capabilities, provides information about executable files that other tools don't expose in a direct way, and it supports a lot of file formats: ELF, Java Class, Mach-O, COFF, Gameboy, Nintendo Switch bins, SNES roms, WASM, Compiled LUA, PCAP files, etc.
From this, the YaraRules Project cooked up the following recipe:
Radare2 versatility + Power of Yara = r2.c (Radare2 module for Yara)
And we hope you find it interesting :)
Installation
In the installation section you will find detailed instructions about the r2 installation and the Yara configuration and installation.
Use
There’re two ways to use r2.c:
- The first way to use r2.c is passing a JSON report generated with generate_report.py, which uses radare2. This is the quickest way to work with a large number of samples, because you can generate the report once and use it whenever you want.
$ ./generate_report.py binary > report.json
$ yara -x r2=report.json file.yar binary
- The second way is invoking Radare2 automatically from Yara. This method is recommended for manual use. One of the strengths of Yara, its speed, is considerably decreased using this method, but it is very useful for quick tests.
$ yara file.yar binary
What radare2 information can be used with the module?
We can write Yara rules with a lot of information from rabin2 and rahash2.
Rabin2 is a powerful tool from the radare2 framework to handle binary files and get information on "imports", "sections", "exports", list archs, header fields, binary info, libraries, etc.

With Rahash2 we can calculate a checksum with a lot of different algorithms: md5, sha1, sha256, sha384, sha512, crc16, crc32, md4, xor, xorpair, parity, entropy, hamdist, pcprint, mod255, xxhash...etc.
Feedback and Contribution
Your feedback is highly appreciated!!! If you're interested in contributing, asking a question, or sharing your Yara rules with us and the security community, you can send a message to our Twitter account @YaraRules, or submit a pull request or issue on any of our GitHub repositories.

Our module is under the GNU-GPLv2 license. It's open to any user or organization, as long as you use it under this license.
Thanks
Thanks to all the people that gave us feedback during the development.
InterPlay 2.3.0 Installation Guide

Silent File Editor

Use the Silent File Editor to modify variables in a silent file. It can be used from the command line or the GUI. The most common values that you replace when preparing a new installation using a silent file are the InstallDir and CommonDir variables. The value of these fields is used to concatenate other paths in the product's silent file properties file.

Location

The Silent File Editor is in the installation directory in Tools/SilentFileEditor.

Note: It is not supported to copy the Silent File Editor from the installation package because it uses binary files from the Installer.

Modifying a silent file using the command line

To modify a silent file using the command line, run:

- In Windows: SilentFileEditor.bat
- In UNIX: SilentFileEditor.sh

The parameters for the Silent File Editor are:

- The path to the silent file that you want to modify
- Three arguments in this format:
  - The first argument is the name of the variable that you want to modify (for example, DB_ADMIN_PASSWORD). Each variable name given must exist in the silent file.
  - The second argument is the value that you want to assign to the variable given as the first argument.
  - The third argument is -c if the value is to be encrypted first and then saved in the silent file, or -u if the value does not need to be encrypted.

You can have more than one group of arguments as shown in the examples below.

Example

SilentFileEditor.bat SilentFilePath varName1 value1 -c/-u varName2 value2 -c/-u … varNameN valueN -c/-u

Windows example

SilentFileEditor.bat C:\<install directory>\SilentFile\Install_Composer_V3.6.0.properties DB_ADMIN_PASSWORD composer -c InstallDir C:\<install directory> -u

UNIX example

./SilentFileEditor.sh /<install directory>/SilentFile/Install_Composer_V3.6.0.properties ServerHost item-51923 -u Common new value 1 -c DB_SCRIPTS new value COPY_LAUNCH -u

Modifying a silent file using the user interface

Starting the GUI

To start the Silent File Editor GUI, run SilentFileEditorGUI.bat or SilentFileEditorGUI.sh at <installation directory>\Tools\SilentFileEditor.

Using the GUI

The GUI displays the list of variables and values in the silent file. Use File > Open to open the silent file you want to edit.

From the Tools menu you can:

- Encrypt Selected - Encrypts the selected values with the AES128 algorithm
- Undo Selected - Undoes the changes made on the current selection
- Undo all changes - Undoes all changes made on the current selection
- Replace - Finds a variable and replaces it with the value you select. Inside the Replace command there are other options:
  - Replace all - Replaces all paths in all the variable values
  - Find next - Goes to the next value occurrence and, if you click Replace, it replaces the value
  - Encrypt - Encrypts the value in the "Replace Value with" field

Once you have completed all the modifications, use File > Save to save the silent file, then File > Exit to quit the Silent File Editor UI.
How to send email using Load Users into context Action
- create a cron job
- optionally add a trigger
- add Load Users from SQL action
- use
SELECT * from Users
- add Send Email action
- check “Send mail to all users” option
- run the cron
How to use external databases on Run SQL Query
- create a cron job
- optionally set a trigger
- add Run SQL Query action
- in the "Other Connection String" field, set the connection string details from the web.config file of the site you want to use. What you are looking for is this:
Data Source=(local);Initial Catalog=test;User ID=myuser;Password=mypassword!
How to schedule tasks for tables created in different database schema
- create a schema
- create a new database table > change the schema to the previously created schema
- create a cron job
- add some Database triggers
- select in Table Name the table previously created in the new schema
API
The
dask.delayed interface consists of one function,
delayed:
delayed wraps functions

Wraps functions. Can be used as a decorator, or around function calls directly (i.e. delayed(foo)(a, b, c)). Outputs from functions wrapped in delayed are proxy objects of type Delayed that contain a graph of all operations done to get to this result.

delayed wraps objects

Wraps objects. Used to create Delayed proxies directly.
Delayed objects can be thought of as representing a key in the dask task graph. A Delayed supports most python operations, each of which creates another Delayed representing the result:
- Most operators (*, -, and so on)
- Item access and slicing (a[0])
- Attribute access (a.size)
- Method calls (a.index(0))
Operations that aren't supported include:

- Mutating operators (a += 1)
- Mutating magics such as __setitem__/__setattr__ (a[0] = 1, a.foo = 1)
- Iteration (for i in a: ...)
- Use as a predicate (if a: ...)
The last two points in particular mean that Delayed objects cannot be used for control flow, meaning that no Delayed can appear in a loop or if statement. In other words you can't iterate over a Delayed object, or use it as part of a condition in an if statement, but a Delayed object can be used in the body of a loop or if statement (i.e. the example above is fine, but if data was a Delayed object it wouldn't be).
Even with this limitation, many workflows can easily be parallelized.
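For example, an ordinary Python loop can still be used to build up many Delayed objects, as long as the loop itself does not depend on a Delayed value (a small sketch using the decorator form shown in the examples below):

>>> import dask
>>> @dask.delayed
... def inc(x):
...     return x + 1
>>> @dask.delayed
... def total(values):
...     return sum(values)
>>> parts = [inc(i) for i in range(5)]  # the loop runs eagerly and only builds Delayed objects
>>> result = total(parts)
>>> result.compute()
15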
dask.delayed.delayed()
Wraps a function or object to produce a Delayed.

Delayed objects act as proxies for the object they wrap, but all operations on them are done lazily by building up a dask graph internally.
Examples
Apply to functions to delay execution:
>>> def inc(x):
...     return x + 1
>>> inc(10) 11
>>> x = delayed(inc, pure=True)(10) >>> type(x) == Delayed True >>> x.compute() 11
Can be used as a decorator:
>>> @delayed(pure=True)
... def add(a, b):
...     return a + b
>>> add(1, 2).compute()
3
delayed also accepts an optional keyword pure. If False, then subsequent calls will always produce a different Delayed. This is useful for non-pure functions (such as time or random).
>>> from random import random >>> out1 = delayed(random, pure=False)() >>> out2 = delayed(random, pure=False)() >>> out1.key == out2.key False
If you know a function is pure (output only depends on the input, with no global state), then you can set pure=True. This will attempt to apply a consistent name to the output, but will fallback on the same behavior of pure=False if this fails.
>>> @delayed(pure=True)
... def add(a, b):
...     return a + b
>>> out1 = add(1, 2)
>>> out2 = add(1, 2)
>>> out1.key == out2.key
True
Instead of setting pure as a property of the callable, you can also set it contextually using the delayed_pure setting. Note that this influences the call and not the creation of the callable:
>>> import dask
>>> @delayed
... def mul(a, b):
...     return a * b
>>> with dask.config.set(delayed_pure=True):
...     print(mul(1, 2).key == mul(1, 2).key)
True
>>> with dask.config.set(delayed_pure=False):
...     print(mul(1, 2).key == mul(1, 2).key)
False
The key name of the result of calling a delayed object is determined by hashing the arguments by default. To explicitly set the name, you can use the dask_key_name keyword when calling the function:
>>> add(1, 2) Delayed('add-3dce7c56edd1ac2614add714086e950f') >>> add(1, 2, dask_key_name='three') Delayed('three')
Note that objects with the same key name are assumed to have the same result. If you set the names explicitly you should make sure your key names are different for different results.
>>> add(1, 2, dask_key_name='three') >>> add(2, 1, dask_key_name='three') >>> add(2, 2, dask_key_name='four')
delayed can also be applied to objects to make operations on them lazy:
>>> a = delayed([1, 2, 3]) >>> isinstance(a, Delayed) True >>> a.compute() [1, 2, 3]
The key name of a delayed object is hashed by default if pure=True or is generated randomly if pure=False (default). To explicitly set the name, you can use the name keyword:
>>> a = delayed([1, 2, 3], name='mylist') >>> a Delayed('mylist')
Delayed results act as a proxy to the underlying object. Many operators are supported:
>>> (a + [1, 2]).compute() [1, 2, 3, 1, 2] >>> a[1].compute() 2
Method and attribute access also works:
>>> a.count(2).compute() 1
Note that if a method doesn’t exist, no error will be thrown until runtime:
>>> res = a.not_a_real_method() >>> res.compute() AttributeError("'list' object has no attribute 'not_a_real_method'")
“Magic” methods (e.g. operators and attribute access) are assumed to be pure, meaning that subsequent calls must return the same results. This behavior is not overrideable through the
delayed call, but can be modified using other ways as described below.
To invoke an impure attribute or operator, you’d need to use it in a delayed function with
pure=False:
>>> class Incrementer(object):
...     def __init__(self):
...         self._n = 0
...     @property
...     def n(self):
...         self._n += 1
...         return self._n
...
>>> x = delayed(Incrementer())
>>> x.n.key == x.n.key
True
>>> get_n = delayed(lambda x: x.n, pure=False)
>>> get_n(x).key == get_n(x).key
False
In contrast, methods are assumed to be impure by default, meaning that subsequent calls may return different results. To assume purity, set pure=True. This allows sharing of any intermediate values.
>>> a.count(2, pure=True).key == a.count(2, pure=True).key True
As with function calls, method calls also respect the global delayed_pure setting and support the dask_key_name keyword:
>>> a.count(2, dask_key_name="count_2")
Delayed('count_2')
>>> with dask.config.set(delayed_pure=True):
...     print(a.count(2).key == a.count(2).key)
True
Integration Tests

Configuration
The main configuration file to look at is
/tests/protractor.conf.js.
It configures our browserName. In browserName we specify the browser that will be used to launch the tests. It can be set to phantomjs, firefox or chrome.
You can find more information about this in the protractor referenceConf.js documentation.
All spec files should be placed in
/tests/integration/specs and all page
object files should be in
/tests/integration/pages. So, the file organisation
structure is:
tests/
└─ integration/
   ├─ specs/
   │  ├─ spec.name.js
   │  └─ spec.another.name.js
   └─ pages/
      ├─ page.name.js
      └─ page.another.name.js
The specs that will be launched are defined in the
gulpfile.js. They can be
specified using patterns:
return gulp.src([PROJECT_PATH.tests + '/integration/specs/*.js'])
By default all specs inside
/tests/integration/specs folder will be launched.
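For illustration, a minimal spec file might look like the following (the URL and the expected title are placeholders for your project):

// tests/integration/specs/spec.example.js
describe('landing page', function () {
    it('shows the expected title', function () {
        browser.get('/');
        expect(browser.getTitle()).toContain('Example Project');
    });
});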
Coverage
Integration coverage is measured by the number of critical path or regression test cases that were automated. Keep in mind that the success of your project does not depend on the tests or the percentage of your code coverage, but it will improve maintenance and give you and other contributors more confidence in the quality of the product you produce. We should aim for the highest possible coverage and quality.
(American Embassy, Saigon, South Vietnam, April 30, 1975)
THE SYMPATHIZER is a unique first novel by Viet Thanh Nguyen. The story is told from the perspective of a narrator who allows the reader to delve into the mind of a Vietnamese person experiencing the end of the Vietnam War in the spring of 1975, and the aftermath of the fighting focusing on a possible counter-revolution, and how Hanoi is integrating the south into its political agenda. The narrator highlights the duality that is present throughout the novel. The protagonist’s own lineage is a case study in ethnic diversity as he himself is considered a half-caste or bastard in Vietnamese society. He is the illegitimate son of a teenage Vietnamese mother, and a French Catholic priest. The narrator loves his mother and hates his father, and throughout the novel these feelings are portrayed through a number of poignant vignettes. The book itself is very important because there are few novels about the war that provide a vehicle for the Vietnamese to speak about their experiences and feelings. Nguyen’s effort fills that gap in an emotionally charged novel that alternates between the light and the dark aspects of war.
The narrator’s character fits the duality theme in the sense that it is divided by at least two component parts. First, he is obsessed with guilt as he tries to navigate the demands of being a spy for the north and living in the United States. He is educated in an American university and after the war he is assigned by his handlers to shadow “the general,” a former commander in the South Vietnamese secret police who has escaped Saigon, and is set up by the CIA in Southern California to organize the retaking of his country. The narrator, a Captain and interrogator in the secret police, is living a much better lifestyle than his compatriots who did not escape the North Vietnamese and Viet Cong when Saigon fell. He suffers from tremendous “guilt, dread, and anxiety” concerning his worthiness as compared to his countrymen. Further guilt is evidenced as he repeatedly flashes back to his role in the assassination of the “crapulent major,” who was suspected of spying for the north, and the murder of Sonny, a Vietnamese who began his own newspaper in Southern California that was seen as a threat by the general. In the second component part of the narrator’s personality, we witness his movement away from his “sympathizer mode” as he carries on as a revolutionary consumed with his role as a police interrogator, following instructions from Man and his Aunt in Paris, both of whom are his handlers, to provide information as to events and political patterns that are being shaped in the United States. Throughout the novel, the narrator’s role confusion is evident as he regrets many of his actions committed during and after the war in the name of revolution. His anguish evolves to the point that he begins to doubt his beliefs and tries to make amends to those he hurt.
(South Vietnamese struggle to board ship in Da Nang to escape North Vietnamese forces, April. 1975)
The texture of the book is evident from the outset as Nguyen describes the horrific scenes that took place in Saigon as the city was about to fall. The description puts the reader outside the American embassy and Saigon airfields as frightened Vietnamese who worked for, and cooperated with, the United States sought to escape before North Vietnamese troops took the city. The narrator returns to his childhood when he, Man, and Bon, three friends, become blood brothers for life. As the novel unfolds we follow the relationship between the three, which is rather complex since Man becomes a Commissar for the north, Bon is a soldier in the South Vietnamese army, and the narrator suffers from the duality of being a spy for the north and a police interrogator for the south.
Many important themes are developed in the novel. The conflict between east and west, or the occidental and the orient, is deeply explored in the dialogue between the characters. The moral dilemma of what is right and wrong in our daily actions hovers over each page, as does the question of how a person tries to cope with their own divided heart. The author’s sarcasm is at times humorous, but also very disturbing as the narrator tries to understand the history of his country and the demands it makes upon him. The history of the war is explored in the context of certain important decisions by the United States, the Hanoi government, and the remnants of the Saigon regime. Nguyen’s descriptions are intense and very pointed, i.e., as the narrator explores who invented the concept of the “Eurasian,” he states “that claim belongs to the English in India who found it impossible not to nibble on dark chocolate. Like pith-helmeted Anglos, the American Expeditionary Forces in the Pacific could not resist the temptations of the locals. They, too, fabricated a portmanteau word to describe my kind, the Amerasian.” (19-20)
The author creates a number of interesting and complex characters that carry the storyline nicely. The right wing Congressman from Orange County, California who wants to fund and train the South Vietnamese counter-revolution, the Hollywood producer who is making his own movie version combining the Green Berets and Apocalypse Now, the northern commandant who tries to purify the revolution through the reeducation of those who have gone astray, and many others. The narrator’s plight is very important as he tries to integrate his memories of his country in a heartfelt manner throughout the novel. Whether he discusses Vietnamese geography, culture, or his family and friends, he seems adrift when in America, and then adrift again, when he returns to Vietnam.
The book is a triumph as a first novel, but at times it can be very dark. I suppose that is acceptable based on the historical background of the war and the story it tells. It is a unique approach to trying to understand a war that ended over forty years ago, but that had been fought since the late 19th century when the French first imposed their colonial regime. The history of the war and the scenes that are presented seem authentic and should satisfy those interested in the literature of the war and how people tried to cope and survive the trauma it caused.
One thought on “THE SYMPATHIZER by Viet Thanh Nguyen”
I have checked your blog and i have found some duplicate content, that’s why you don’t rank high in google’s search results, but there is a
tool that can help you to create 100% unique articles, search
for; boorfe’s tips unlimited content | https://docs-books.com/2015/05/30/the-sympathizer-by-viet-thanh-nguyen/ | 2019-02-15T21:36:31 | CC-MAIN-2019-09 | 1550247479159.2 | [] | docs-books.com |
What
Verifies the signature on a JWT received from clients or other systems. This policy also extracts the claims into context variables so that subsequent policies or conditions can examine those values to make authorization or routing decisions.>
Verify a JWT signed with the RS256 algorithm
This example policy verifies a JWT that was signed with the RS256 algorithm. For signing, a private key must be provided, and to verify, you need to provide the corresponding public key.
See the Element reference for details on the requirements and options for each element in this sample policy.
<VerifyJWT name="JWT-Verify-RS256"> <Algorithm>RS256</Algorithm> <Source>json|RS256</Algorithm>
Specifies the encryption algorithm to sign the token. RS256 employs a public/secret key pair, while HS256 employs a shared secret. See also About signature encryption algorithms.
<Audience>
<Audience>audience-here</Audience>.
<PublicKey/JWKS>
<PublicKey> <JWKS ref="public.jwks"/> </PublicKey> or <PublicKey> <JWKS>jwks-value-here</JWKS> </PublicKey>
Specifies a value in JWKS format (RFC 7517) containing a set of public keys.WT. Use the ref attribute to pass the key in a flow variable, or specify the PEM-encoded key directly. Use this only with the VerifyJWT policy, when the algorithm is an RSA variant.
> | https://docs.apigee.com/api-platform/reference/policies/verify-jwt-policy | 2019-02-15T21:11:46 | CC-MAIN-2019-09 | 1550247479159.2 | [array(['https://docs.apigee.com/api-platform/images/icon_policy_security.jpg',
None], dtype=object) ] | docs.apigee.com |
Getting Started Overview
RadWindow is a part of the Telerik® UI for ASP.NET AJAX suite. It is a container that can display content from the same page (when used as controls container) or it can display a content page, different from the parent one. In the second case, the control uses an IFRAME and behaves like one.
This getting started article will walk you through creating a web page that shows how to:
- Start a RadWindow when the web page first loads.
- Use a RadWindowManager to manage multiple windows.
- Launch a RadWindow when the user clicks on a control.
- Use skins to alter the appearance of a window.
- Create modal and non-modal windows.
- Specify the behavior and position of a window.
For more information on how to work with and control RadWindow, please check the Client-Side/Server-Side Programming sections in this documentation.
Creating a Window on Startup
To create a RadWindow, drag a RadWindow control from the Visual Studio Toolbox onto an existing form.
In the Behavior section of the Properties menu, set the Title property to "Telerik Web Site".
Set the VisibleOnPageLoad property to True.
This property is used to make the example simpler. The Opening from the server help article explains how to open a RadWindow with server code in more detail, as well as the implications of the VisibleOnPageLoad property and when it should not be used.
In the Navigation section, set the NavigateUrl property to
Press F5 to run the application.
You should see the window pop up immediately. Experiment with the window by moving it on the form, using the "pin" button, maximizing and restoring it, and finally closing the window.
Using RadWindowManager with Multiple Windows
Although you can use RadWindow controls directly, if you are working with multiple RadWindow controls it is a good idea to use a RadWindowManager control. It allows you to create RadWindow instances dynamically - with JavaScript alone, without as much as a postback. Also, it can be used at a central place to set the common properties for all dialogs it opens, thus minimizing the configuration steps you need to perform.
You'll now modify the previous example to delete the RadWindow you just created and use a RadWindowsManager control to host two RadWindow controls.
Delete the RadWindow control from your form. You will now use a RadWindowManager to create the RadWindow controls instead.
Drag a RadWindowManager from the Toolbox onto your form.
Expand the Misc section of the Properties window, find the Windows property and click on the ellipsis button to display the RadWindow Collection Editor.
In the RadWindow Collection Editor, click the Add button to create a new RadWindow control.Use the Properties pane to set its properties to match those of the window you created before:
- Set the Title property to "Telerik Web Site".
- Set the NavigateUrl property to
- Do not set the VisibleOnPageLoad property this time.
Click the Add button again to create another RadWindow control. Use the Properties pane to set the following properties:
- Set the ID property to "rwDialog".
- Set the Modal property to True.
Click Ok to exit the RadWindow Collection Editor for now.
Creating a Form for the Dialog
Before continuing with the multiple window example, you need to create a dialog form for the second RadWindow.
In the Solution Explorer, right-click on the project and choose Add New Item to display the Add New Item dialog box .
Add a new Web Form, giving it the name "MyDialog.aspx".
In the body of the Web Form, enter the literal text "My dialog content here...".
Move to the Source view and change the title for this form to "My Dialog". The markup should look similar to this:
ASP.NET
<head runat="server"> <title>My Dialog</title> </head> <body> <form id="form1" runat="server"> <div> My dialog content here...</div> </form> </body>
Launching Windows from Another Control
Return to your default form and add a Button control from the Standard section of the toolbox. Set its properties as follows:
- Set the ID property to "btnTelerik".
- Set the Text property to "Show Telerik site".
Copy the Button control to create a second Button. Set its ID property to "btnDialog" and its Text property to "Show My Dialog".
Select the RadWindowManager, and use the Windows property to bring up the RadWindow Collection editor again.
- Select the first RadWindow in the collection and set its OpenerElementId property to "btnTelerik".
The OpenerElementId property requires the ClientID of the HTML element that will open the RadWindow when clicked.
- Select the second RadWindow in the collection (rwDialog), and set its OpenerElementID property to "btnDialog". Set its NavigateUrl property to "MyDialog.aspx".
There are other ways to open a dialog and they are explained in the Opening Windows help article.
- Click Ok to exit the dialog.
Right-click the RadWindowManager control and select "Show Smart Tag". Use the Smart Tag to set the Skin to "Vista".
Press F5 to run the application. Click both buttons to bring up both windows. They both get the "Vista" look from the RadWindowManager skin. Note the differences in behavior. The "Telerik Web Site" window gets its title from the RadWindow control, while "My Dialog" gets its title from the HTML markup of MyDialog.aspx. The "Telerik Web Site" window is not modal, while "My Dialog" is modal.
Exit the application.
Altering the appearance and behavior of individual windows
Select the RadWindowManager, and use the Windows property to bring up the RadWindow Collection editor again.
Select the first RadWindow control in the list (the "Show Telerik site" window).
- Set its Skin property to "Default 2006".
- Set its Top property to 30 and its Left property to 0.
- Set its OffsetElementId property to "btnTelerik".
- Set its VisibleStatusBar property to False.
Select the second RadWindow control in the list (the "Show My Dialog" window).
- Set its Behaviors property to "Close, Move".
- Set both its Height and Width properties to 200.
Click Ok to exit the Collection editor.
Run the application.
- Click the "Show Telerik site" label. The window appears 30 pixels below the label you clicked. (If you had not set an OffsetElementId, its position would be relative to the upper left corner of the web page instead of to the label.) Note that the window title bar has changed its appearance to reflect the new skin and that there is no status bar.
- Click the "Show My Dialog" label. Note that this window still reflects the skin you set in the RadWindowManager. The title bar has lost all controls except the close button. Note that you can move the window, but not resize it from the 200 by 200 size you set in the designer.
| https://docs.telerik.com/devtools/aspnet-ajax/controls/window/getting-started/overview.html | 2019-02-15T21:24:51 | CC-MAIN-2019-09 | 1550247479159.2 | [array(['images/radwindow1.png', None], dtype=object)
array(['images/window-twowindows.png', None], dtype=object)
array(['images/window-positionetc.png', None], dtype=object)] | docs.telerik.com |
Quick Overview
In these articles you will learn the basics of Ucommerce. Learn what is the catalog foundation, the marketing foundation, pipelines, search and much more. Get a brief and quick overview of Ucommerce by watching one or more of the videos in the playlist below, which goes through all the features of Ucommerce. For more detailed description of the individual bits visit one of the underlying sections. | https://docs.ucommerce.net/ucommerce/v7.16/quick-overview/index.html | 2019-02-15T21:04:27 | CC-MAIN-2019-09 | 1550247479159.2 | [] | docs.ucommerce.net |
The C# UI makes it possible to quickly create a simple UI without using external tools such as Scaleform. Currently the C# UI is able to:
- Show text and images.
- Receive user-input.
- Render directly to the screen, or render to a texture.
- Organise the UI with layout groups.
Introduction
The UI in C# runs basically on two classes. The first class is the
UIElement-class, which inherits from
SceneObject and adds support for a
RectTransform. The
RectTransform defines information about the location, orientation and size of each
UIElement. The second class is the
UIComponent-class, which defines the behavior of the
UIElement. Each
UIElement can have multiple
UIComponents.
Every UI starts with a
Canvas. The
Canvas is responsible for drawing the UI to the screen or to a render texture and delegating events to its child
UIElements. Every
UIElement has to be a child of a
Canvas. A scene can have multiple
Canvas instances at the same time. | https://docs.cryengine.com/pages/viewpage.action?pageId=27593748 | 2020-08-03T23:37:10 | CC-MAIN-2020-34 | 1596439735836.89 | [] | docs.cryengine.com |
About keys, secrets, and certificates
Azure Key Vault enables Microsoft Azure applications and users to store and use several types of secret/key data:
- Cryptographic keys: Supports multiple key types and algorithms, and enables the use of Hardware Security Modules (HSM) for high value keys. For more information, see About keys.
- Secrets: Provides secure storage of secrets, such as passwords and database connection strings. For more information, see About secrets.
- Certificates: Supports certificates, which are built on top of keys and secrets and add an automated renewal feature. For more information, see About certificates.
- Azure Storage: Can manage keys of an Azure Storage account for you. Internally, Key Vault can list (sync) keys with an Azure Storage Account, and regenerate (rotate) the keys periodically. For more information, see Manage storage account keys with Key Vault.
For more general information about Key Vault, see About Azure Key Vault.
Data types
Refer to the JOSE specifications for relevant data types for keys, encryption, and signing.
- algorithm - a supported algorithm for a key operation, for example, RSA1_5
- ciphertext-value - cipher text octets, encoded using Base64URL
- digest-value - the output of a hash algorithm, encoded using Base64URL
- key-type - one of the supported key types, for example RSA (Rivest-Shamir-Adleman).
- plaintext-value - plaintext octets, encoded using Base64URL
- signature-value - output of a signature algorithm, encoded using Base64URL
- base64URL - a Base64URL [RFC4648] encoded binary value
- boolean - either true or false
- Identity - an identity from Azure Active Directory (AAD).
- IntDate - a JSON decimal value representing the number of seconds from 1970-01-01T0:0:0Z UTC until the specified UTC date/time. See RFC3339 for details regarding date/times, in general and UTC in particular.
Objects, identifiers, and versioning
Objects stored in Key Vault are versioned whenever a new instance of an object is created. Each version is assigned a unique identifier and URL. When an object is first created, it's given a unique version identifier and marked as the current version of the object. Creation of a new instance with the same object name gives the new object a unique version identifier, causing it to become the current version.
Objects in Key Vault can be addressed by specifing a version or by omitting version for operations on current version of the object. For example, given a Key with the name
MasterKey, performing operations without specifing a version causes the system to use the latest available version. Performing operations with the version-specific identifier causes the system to use that specific version of the object.
Objects are uniquely identified within Key Vault using a URL. No two objects in the system have the same URL, regardless of geo-location. The complete URL to an object is called the Object Identifier. The URL consists of a prefix that identifies the Key Vault, object type, user provided Object Name, and an Object Version. The Object Name is case-insensitive and immutable. Identifiers that don't include the Object Version are referred to as Base Identifiers.
For more information, see Authentication, requests, and responses
An object identifier has the following general format:
https://{keyvault-name}.vault.azure.net/{object-type}/{object-name}/{object-version}
Where: | https://docs.microsoft.com/en-us/azure/key-vault/general/about-keys-secrets-certificates | 2020-08-04T01:00:36 | CC-MAIN-2020-34 | 1596439735836.89 | [] | docs.microsoft.com |
GAP
GAP.
Detailed Description
GAP.
The commands and events in this class are related to the Generic Access Profile (GAP) in Bluetooth.
Enumeration Type Documentation
◆ gap_address_type_t
Device Address Types.
◆ gap_phy_type_t
PHY Types in gap Class. will generate a new private resolvable address and use it in advertising data packets and scanning requests. whitelisting. The setting will be effective the next time that scanning is enabled. To add devices to the whitelist, either bond with the device or add it manually with sl_bt_sm_add_to_whitelist.
- Parameters
-
- Returns
- SL_STATUS_OK if successful. Error code otherwise. | https://docs.silabs.com/bluetooth/latest/group-sl-bt-gap | 2020-08-04T00:04:01 | CC-MAIN-2020-34 | 1596439735836.89 | [] | docs.silabs.com |
Monitoring Group Policy Changes with Windows Auditing
I spent some time a while back analyzing logs, figuring out what you can do with group policy auditing on Windows Server 2003. I did not test Windows 2000; I suspect that much of this applies but YMMV.
GP editing does leave an auditable trail of directory accesses and file accesses. Here is how to enable auditing for Group Policy, and how to interpret the results.
- To get the audit trail from AD, you must do the following:
- Using AD Users and Computers, create an auditing ACE in the SACL as follows:
- Object to set SACL on: domain (ex: contoso.com)
- Principal: everyone
- Type: success
- Accesses: Create groupPolicyContainer Object, Delete groupPolicyContainer Object
- Scope: This container and all sub-containers and objects
- Using AD Users and Computers, create an auditing ACE in a SACL as follows:
- Object to set SACL on: domain (ex: contoso.com)
- Principal: everyone
- Type: success
- Accesses: all "write" type accesses including deletes, but no "read" type accesses
- Scope: groupPolicyContainer objects
- Enable Directory Service Access Success auditing in the Default Domain Controllers Policy.
- To get the audit trail from the file system, you must do the following:
- Using Explorer, navigate to: \\<domainname>\sysvol\<domainfqdn>
(ex: \\contoso\sysvol\contoso.com)
- Set the following SACL on the \policies directory in that location
- Directory to set SACL on: \\<domainname>\sysvol\<domainfqdn>\policies
- Principal: everyone
- Type: success
- Accesses: all "Write" accessess, but no "Read" accesses.
Note: Do not audit Write Attributes or Write Extended Attributes
- Scope: This container and all sub-containers and objects
- Enable Object Access Success auditing in the Default Domain Controllers Policy.
Here are the Audit records that are generated if you do this. The fields to pay special attention to are underlined. My comments are in red.
- DS Access audit records.
Whenever GPEdit.msc is opened when targeted at an existing group policy object,
an audit record similar to the following will be generated. I have added notes in red,
these don't appear in the log.
Type: Audit Success
Event ID: 566
Time: 4/3/2005 7:39:13 PM
Source: Security
Computer: ACSDEMO-COLL
Category: Directory Service Access
User: CONTOSO\Administrator
Description:
Object Operation:
Object Server: DS
Operation Type: Object Access
Object Type: %{f30e3bc2-9ff0-11d1-b603-0000f80367c1} (this GUID means that the object is of type groupPolicyObject)
Object Name: CN={31B2F340-016D-11D2-945F-00C04FB984F9},CN=Policies,CN=System,DC=contoso,DC=com
(policy objects are always have GUIDs for common names- use GPMC to find the friendly name)
(the Default Domain policy and Default Domain Controllers policy have well-known GUIDs; all others are random)
Handle ID: -
Primary User Name: ACSDEMO-COLL$ (primary user is always the machine, since DS runs as localsystem)
Primary Domain: CONTOSO
Primary Logon ID: (0x0,0x3E7)
Client User Name: Administrator (client user is the user who made the change, being impersonated by DS)
Client Domain: CONTOSO
Client Logon ID: (0x0,0x2F6C8)
Accesses: Write Property
Properties: Write Property
%{771727b1-31b8-4cdf-ae62-4fe39fadf89e}
%{bf967a76-0de6-11d0-a285-00aa003049e2}
%{f30e3bc2-9ff0-11d1-b603-0000f80367c1}
Additional Info:
Additional Info2:
Access Mask: 0x20
- Variations on DS Access Events
- For creation of group policy objects, event 566 is logged for the parent object, indicating the "Create Child" access, and one or more 566 events are generated for the new group policy container object as the objects' properties are initialized.
- For deletion of group policy objects, event 566 is logged for the policy object, indicating the "Delete" access.
- File Access Audit Records
- When group policy objects are edited, there will be telltale traces in the file system audit trail on sysvol. The typical event will look like:
Type: Audit Success
Event ID: 560
Time: 4/3/2005 7:39:14 PM
Source: Security
Computer: ACSDEMO-COLL
Category: Object Access
User: CONTOSO\Administrator
Description:
Object Open:
Object Server: Security
Object Type: File
Object Name: C:\WINDOWS\SYSVOL\domain\Policies\{31B2F340-016D-11D2-945F-00C04FB984F9}\GPT.INI
What file name you see here depends on which settings were edited. Note that for security policy GPEdit works on a temp file and then writes the original:
C:\WINDOWS\SYSVOL\domain\Policies\{31B2F340-016D-11D2-945F-00C04FB984F9}\MACHINE\Microsoft\Windows NT\SecEdit\GptTmpl.tmp
C:\WINDOWS\SYSVOL\domain\Policies\{31B2F340-016D-11D2-945F-00C04FB984F9}\MACHINE\Microsoft\Windows NT\SecEdit\GptTmpl.inf
I've described most of the file names you're likely to see, further down in this post.
Handle ID: 2600
Operation ID: {0,1741006}
Process ID: 4
Image File Name:
Primary User Name: ACSDEMO-COLL$
Primary Domain: CONTOSO
Primary Logon ID: (0x0,0x3E7)
Client User Name: Administrator
Client Domain: CONTOSO
Client Logon ID: (0x0,0x1A8F5C)
We cannot audit exactly which setting changed. I have bugged the group policy team any number of times about this but I think due to resource issues this won't improve much in the forseeable future. The bottom line is that GPEDIT.MSC edits the policy file directly; there's no intervening trusted service to instrument for audit. In a future release of Windows I hope to fix this.
However, you can narrow changes down to settings groups [security vs. non-security] depending on the file that was touched on sysvol. If security policy is touched, then GptTmpl.inf will change. If the list of adms used to construct the policy changes, admfiles.ini will change. If the registry-based settings outside security policy change, then registry.pol will change.
Here's my brief key of which directory\file names refer to which settings group. Given the directory structure for a single policy (\\domain\sysvol\domain.fqdn\policies\{policyguid}\):
- In the root of the policy directory is a GPT.INI file containing the version number of the associated GPO. Every time GPEdit is used to edit this object, GPT.INI can be expected to change.
- \Machine - This directory includes a Registry.pol file that contains the registry settings to be applied to computers. When a computer boots up and establishes its secure channel to a domain controller, application advertisement files (.aas files) used by the Windows installer.
- \Microsoft\Windows NT\Secedit - Contains the Gpttmpl.inf file, which includes the default security configuration settings for a Windows Server 2003 domain controller.
- \Adm - Contains all of the .adm files for the GPO. The adm files are the files which control the Group Policy editor user interface.
- \User - This directory.
Here is some additional information on the structure of group policy:
I can't solve all your group policy monitoring woes, I just wanted to document what you'll see in the logs. There are at least three products on the market that can monitor which specific settings changed that you can buy if you need more functionality than what I've described here.
Best regards,
Eric | https://docs.microsoft.com/en-us/archive/blogs/ericfitz/monitoring-group-policy-changes-with-windows-auditing | 2020-08-03T22:58:14 | CC-MAIN-2020-34 | 1596439735836.89 | [] | docs.microsoft.com |
An IHtmlContentBuilder implementation using an in memory list.
Extension methods for IHtmlContentBuilder.
An IHtmlContent implementation of composite string formatting
(see) which HTML encodes
formatted arguments.
An IHtmlContent implementation that wraps an HTML encoded String.
HTML content which can be written to a TextWriter.
A builder for HTML content.
Defines a contract for IHtmlContent instances made up of several components which
can be copied into an IHtmlContentBuilder.
Thank you. | https://docs.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.html?view=aspnetcore-3.1 | 2020-08-04T00:37:32 | CC-MAIN-2020-34 | 1596439735836.89 | [] | docs.microsoft.com |
Plugin quality checklist¶
If you want to write a high-quality pretix plugin, this is a list of things you should check before you publish it. This is also a list of things that we check, if we consider installing an externally developed plugin on our hosted infrastructure.
A. Meta¶
The plugin is clearly licensed under an appropriate license.
The plugin has an unambiguous name, description, and author metadata.
The plugin has a clear versioning scheme and the latest version of the plugin is kept compatible to the latest stable version of pretix.
The plugin is properly packaged using standard Python packaging tools.
The plugin correctly declares its external dependencies.
A contact address is provided in case of security issues.
B. Isolation¶
If any signal receivers use the dispatch_uid feature, the UIDs are prefixed by the plugin’s name and do not clash with other plugins.
If any templates or static files are shipped, they are located in subdirectories with the name of the plugin and do not clash with other plugins or core files.
Any keys stored to the settings store are prefixed with the plugin’s name and do not clash with other plugins or core.
Any keys stored to the user session are prefixed with the plugin’s name and do not clash with other plugins or core.
Any registered URLs are unlikely to clash with other plugins or future core URLs.
C. Security¶
All important actions are logged to the shared log storage and a signal receiver is registered to provide a human-readable representation of the log entry.
All views require appropriate permissions and use the
event_urlsmechanism if appropriate. Read more
Any session data for customers is stored in the cart session system if appropriate.
If the plugin is a payment provider:
-
No credit card numbers may be stored within pretix.
-
A notification/webhook system is implemented to notify pretix of any refunds.
-
If such a webhook system is implemented, contents of incoming webhooks are either verified using a cryptographic signature or are not being trusted and all data is fetched from an API instead.
D. Privacy¶
No personal data is stored that is not required for the plugin’s functionality.
For any personal data that is saved to the database, an appropriate data shredder is provided that offers the data for download and then removes it from the database (including log entries).
E. Internationalization¶
All user-facing strings in templates, Python code, and templates are wrapped in gettext calls.
No languages, time zones, date formats, or time formats are hardcoded.
Installing the plugin automatically compiles
.pofiles to
.mofiles. This is fulfilled automatically if you use the
setup.pyfile form our plugin cookiecutter.
F. Functionality¶
If the plugin adds any database models or relationships from the settings storage to database models, it registers a receiver to the
pretix.base.signals.event_copy_dataor
pretix.base.signals.item_copy_datasignals.
If the plugin is a payment provider:
A webhook-like system is implemented if payment confirmations are not sent instantly.
Refunds are implemented, if possible.
In case of overpayment or external refunds, a “required action” is created to notify the event organizer.
If the plugin adds steps to the checkout process, it has been tested in combination with the pretix widget.
G. Code quality¶
isort and flake8 are used to ensure consistent code styling.
Unit tests are provided for important pieces of business logic.
Functional tests are provided for important interface parts.
Tests are provided to check that permission checks are working.
Continuous Integration is set up to check that tests are passing and styling is consistent. | https://docs.pretix.eu/en/latest/development/api/quality.html | 2020-08-03T22:49:58 | CC-MAIN-2020-34 | 1596439735836.89 | [] | docs.pretix.eu |
Dynamic Order Properties: Adding Custom Information to Baskets, Orders, and Order Lines
When working with an e-commerce transaction system in an online store typically the information stored on the orders and order lines will come from the products themselves in the catalog system.
However, in some cases you might want to add custom information such as a personalized message, an indication whether gift wrapping is required, serial numbers, measurements, or something else entirely.
Enter Dynamic Order Properties
For this specific scenario Ucommerce supports dynamic order properties, which is a way to add information to basket, orders, and individual order lines.
You can add as many as you want and you don’t have to stick with the same number of properties or even the same names across orders.
Using Dynamic Order Properties
Order properties:
ITransactionLibrary transactionLibrary = ObjectFactory.Instance.Resolve<Ucommerce.Api.ITransactionLibrary>(); transactionLibrary.SetOrderProperty("hello", "world");
OrderLine properties:
Ucommerce.EntitiesV2.PurchaseOrder basket = transactionLibrary.GetBasket(false); basket.OrderLines.First()["hello"] = "world"; basket.Save();
Reading properties:
string myValue = basket["key"]; string hello = basket.OrderLines.First()["world"];
Admin
Dynamic order properties will be displayed in Ucommerce Admin as part of the order overview.
Summary. | https://docs.ucommerce.net/ucommerce/v9.2/getting-started/transaction-foundation/dynamic-order-properties.html | 2020-08-03T23:48:22 | CC-MAIN-2020-34 | 1596439735836.89 | [array(['images/dynamicorderprop.png', 'Image'], dtype=object)] | docs.ucommerce.net |
Data provider integration
In these articles, you will find information regarding the Data provider integration.
What is it?
It is a deep level integration, basically making the catalog part of the Ucommerce data available in Sitecore Master database as Items.
Notice: There are two levels of Sitecore "databases". * The logical level where you have "Web", "Master" and "Core". * The physical level where you have MSSQL server databases called typically "Sitecore.Web", "Sitecore.Master" and "Sitecore.Core".
Ucommerce items are available on the logical level. Not on the physical level.
In Sitecore's content tree, there are three nodes, that serves as roots for Ucommerce data:
- /sitecore/Ucommerce
- /sitecore/system/Ucommerce
- /sitecore/templates/User Defined/Ucommerce definitions
All the data under these three nodes, are dynamically/runtime converted from Ucommerce data to Sitecore Items.
All the data is 100% stored in the Ucommerce database. No data is stored in the physical Sitecore databases.
When Sitecore loads data from it's physical "Sitecore.Master" database, the Ucommerce data is merged into the data returned to the Sitecore system.
So the rest of the Sitecore system, cannot tell the difference of Items from Sitecore's physical databases and data from the Ucommerce database. All Items gets returned as equals in the "Master" logical database.. | https://docs.ucommerce.net/ucommerce/v9.2/sitecore/Data-provider-integration.html | 2020-08-03T23:39:29 | CC-MAIN-2020-34 | 1596439735836.89 | [] | docs.ucommerce.net |
Installing UiPath Test Manager for Jira
- Open your Jira instance in your browser.
- Log in as an administrator.
- Click Manage Apps from the administration menu.
- Under Find new Apps search for UiPath.
- Select UiPath Test Manager for Jira.
- Click Install. The latest version of UiPath Test Manager for Jira is installed.
Preparing a Test Manager Project for Integration with Jira
Follow the steps below to connect a Jira project with a project from Test Manager:
- Log in as an admin user.
- Click the app symbol in the upper left corner to open the application menu.
- In the Administration section, click the Project Settings option. The Projects page is displayed.
- Select the testing project you want to connect to a Jira project. The details for that project are displayed on a separate page.
- Click New Connection > Jira. The Edit connection window is displayed.
- Fill out the form by providing the following data:
- Name - A name for this connection
- An optional description
- Server URL - The URL of the server which exposes the Jira REST API. This is usually identical to the URL you navigate to when using Jira through your browser.
- Web URL - Optional. Just in case the Jira REST API is not hosted under the same URL like the web UI, enter the URL of the Jira REST API here.
- Jira Credentials - Used to authenticate to the Jira API. All created objects will have this identity as the creator assigned.
- Defect Type - Specify the Jira object-type to be used for defects (usually Bug). Select None to disable defect integration for this connection.
- Project Key - Enter the project key from Jira which refers to the project you want to integrate.
- Click Save to finish. You are returned to the project details page.
- Expand the panel which holds the connection details.
- Copy the API Key to the Clipboard for later use.
Configuring your Jira Project
- Open Jira and navigate to the project you want to integrate.
- Switch to the project settings.
- Scroll down and select UiPath Test Manager which opens the Test Manager configuration page.
- Select all Jira issue types which should be synchronized as Requirements to Test Manager.
- Paste the API Key you copied at step 9 from the previous procedure.
- Enter the Server URL from Test Manager.
- Click Save.
Updated 7 days ago | https://docs.uipath.com/test-suite/docs/connecting-a-jira-project | 2020-08-03T23:41:16 | CC-MAIN-2020-34 | 1596439735836.89 | [array(['https://files.readme.io/d1261c6-edit_connection.png',
'edit_connection.png'], dtype=object)
array(['https://files.readme.io/d1261c6-edit_connection.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/9314c47-TMH_Config_Jira.png',
'TMH Config Jira.png'], dtype=object)
array(['https://files.readme.io/9314c47-TMH_Config_Jira.png',
'Click to close...'], dtype=object) ] | docs.uipath.com |
Memory saving
PhpSpreadsheet uses an average of about 1k per cell in your worksheets, so large workbooks can quickly use up available memory. Cell caching provides a mechanism that allows PhpSpreadsheet to maintain the cell objects in a smaller size of memory, or off-memory (eg: on disk, in APCu, memcache or redis). This allows you to reduce the memory usage for large workbooks, although at a cost of speed to access cell data.
By default, PhpSpreadsheet holds all cell objects in memory, but you can specify alternatives by providing your own PSR-16 implementation. PhpSpreadsheet keys are automatically namespaced, and cleaned up after use, so a single cache instance may be shared across several usage of PhpSpreadsheet or even with other cache usages.
To enable cell caching, you must provide your own implementation of cache like so:
$cache = new MyCustomPsr16Implementation(); \PhpOffice\PhpSpreadsheet\Settings::setCache($cache);
A separate cache is maintained for each individual worksheet, and is automatically created when the worksheet is instantiated based on the settings that you have configured. You cannot change the configuration settings once you have started to read a workbook, or have created your first worksheet.
Beware of TTL
As opposed to common cache concept, PhpSpreadsheet data cannot be re-generated from scratch. If some data is stored and later is not retrievable, PhpSpreadsheet will throw an exception.
That means that the data stored in cache must not be deleted by a third-party or via TTL mechanism.
So be sure that TTL is either de-activated or long enough to cover the entire usage of PhpSpreadsheet.
Common use cases
PhpSpreadsheet does not ship with alternative cache implementation. It is up to you to select the most appropriate implementation for your environment. You can either implement PSR-16 from scratch, or use pre-existing libraries.
One such library is PHP Cache which provides a wide range of alternatives. Refers to their documentation for details, but here are a few suggestions that should get you started.
APCu
Require the packages into your project:
composer require cache/simple-cache-bridge cache/apcu-adapter
Configure PhpSpreadsheet with something like:
$pool = new \Cache\Adapter\Apcu\ApcuCachePool(); $simpleCache = new \Cache\Bridge\SimpleCache\SimpleCacheBridge($pool); \PhpOffice\PhpSpreadsheet\Settings::setCache($simpleCache);
Redis
Require the packages into your project:
composer require cache/simple-cache-bridge cache/redis-adapter
Configure PhpSpreadsheet with something like:
$client = new \Redis(); $client->connect('127.0.0.1', 6379); $pool = new \Cache\Adapter\Redis\RedisCachePool($client); $simpleCache = new \Cache\Bridge\SimpleCache\SimpleCacheBridge($pool); \PhpOffice\PhpSpreadsheet\Settings::setCache($simpleCache);
Memcache
Require the packages into your project:
composer require cache/simple-cache-bridge cache/memcache-adapter
Configure PhpSpreadsheet with something like:
$client = new \Memcache(); $client->connect('localhost', 11211); $pool = new \Cache\Adapter\Memcache\MemcacheCachePool($client); $simpleCache = new \Cache\Bridge\SimpleCache\SimpleCacheBridge($pool); \PhpOffice\PhpSpreadsheet\Settings::setCache($simpleCache); | https://phpspreadsheet.readthedocs.io/en/latest/topics/memory_saving/ | 2020-08-03T23:56:35 | CC-MAIN-2020-34 | 1596439735836.89 | [] | phpspreadsheet.readthedocs.io |
Uninstalling the Process Designer core application and integration to BMC IT Service Management or BMC Service Request Management
By running the PDICT uninstaller, you delete forms and fields containing data. If you need this data, export it first.
Note
You can also manually perform the uninstall process. Contact BMC support for more information on how to perform the uninstall steps manually.
To uninstall the Process Designer Integration to IT Service Management or SRM and the Core Process Designer Application:
- Launch the Process Designer Installation and Configuration Tool (PDICT).
- After logging on, select the Uninstall option.
Click Finish to begin the uninstall process.
Note
If you manually installed any customizations or the Process Designer application, some objects might be installed in Best Practice Mode, and some in Base Development Mode. Run the uninstaller twice, first in one mode, and then the other. You can skip any errors that occur.
Was this page helpful? Yes No Submitting... Thank you | https://docs.bmc.com/docs/brid90/uninstalling-the-process-designer-core-application-and-integration-to-bmc-it-service-management-or-bmc-service-request-management-562335671.html | 2020-08-04T00:04:24 | CC-MAIN-2020-34 | 1596439735836.89 | [] | docs.bmc.com |
This content pack contains saved searches, data patterns, collection profiles, and dashboards.
The following table describes the components included in this content pack.
Displays events from the trace log files available in the work directory.
Included in dashboard: Yes
Contains data collector templates that can be used for collecting data from the following work directory log files.
The data collector templates collect data from the following location:
${SAP SID}\DVEBMGS${Instance Number}\work
In the preceding location path, {SAP SID} and {Instance Number} are variable macro values.These can be defined as follows:
These are macros contained in the collection profile. You can provide appropriate values for the macros. By doing this, the resulting data collectors will automatically collect data from the particular directory locations. | https://docs.bmc.com/docs/display/itdacp/SAP | 2020-08-03T23:48:20 | CC-MAIN-2020-34 | 1596439735836.89 | [] | docs.bmc.com |
Information: You can find the integrator.io release notes for May 2020 here.
What’s new
Support for Return Merchandise Authorization (RMA)
The integration app now offers an option to create, edit, and sync Return Merchandise Authorization (RMA) from Zendesk. This feature is available in the standard edition of the integration app.
You can now see the following flows in your integration app:
- Zendesk RMA to NetSuite Return Authorization: Sync new and updated RMAs associated with the sales orders from Zendesk to NetSuite in near real-time. Select or search for a sales order to create a return. This flow is available in the standard edition of the integration app.
- NetSuite Return Authorization to Zendesk Organization / User: Sync RMAs from NetSuite to Zendesk in near real-time.
- NetSuite Sales Order to Zendesk Organization / User: Sync sales orders from NetSuite to Zendesk as per the batch schedule.
You can now view the status of an RMA which might be in progress, failed, or completed. You can remove a failed RMA.
Note: You can’t create or edit a return if it is not linked to a sales order.
What’s enhanced
All enhancements are available in the starter and standard editions of the integration app. To avail the enhancements, configure the new flows “NetSuite Sales Order to Zendesk Organization / User,” and “NetSuite Return Authorization to Zendesk Organization / User.”
Ability to search sales orders
A new search bar is introduced in the integration app that allows you to search the orders using one search key field. The OrderNumber field in Zendesk is mapped by default to NetSuite Document ID. You can modify the mapping accordingly.
Example: If you enter a search key as 2345, the integration app searches the sales order with number 2345.
Increased the number of sales orders and RMAs
You can view up to 30 sales orders and RMAs each against an organization or a user. The limit can be increased to a maximum limit of 40 using the flow mapping settings.
Configure custom fields in sales order and RMA headers
You can now sync and view a maximum of three custom fields in a sales order header fields. Add the fields to NetSuite Saved Search and change the mappings in the “NetSuite Sales Order to Zendesk Organization / User” flow. In addition, you may utilize a maximum of three custom fields in RMA header fields by adding the fields to App Settings in Zendesk and changing mappings in the “NetSuite Return Authorization to Zendesk Organization / User” flow.
Retiring existing flows for NetSuite Sales Orders to Zendesk
The existing flows to sync NetSuite sales orders are going to retire in November 2020. The flows are not available in any after the integration app v11.0 is released. You are recommended to enable the “NetSuite Sales Order to Zendesk Organization / User” flow to avoid any loss in functionality. Retired flows are as follows.
- NetSuite Customer Sales Order Details To Zendesk Organization (*To be retired)
- NetSuite Customer Sales Order Details To Zendesk User (*To be retired)
All the above enhancements are not available with the above to be retired flows.
Upgrade your integration app
We've upgraded the existing infrastructure. Ensure that your integration app is updated to the following infrastructure versions.
- Integration App v1.11 last week of May. If you are using a NetSuite Sandbox account, please upgrade NetSuite Bundles.
What’s fixed
The integration app installation wizard is updated to create distributed records in sequential order to avoid installation errors.
Known issues
There are no known issues in this release.
Please sign in to leave a comment. | https://docs.celigo.com/hc/en-us/articles/360043391512-Zendesk-NetSuite-release-notes-v1-11-0-May-2020 | 2020-08-03T23:53:23 | CC-MAIN-2020-34 | 1596439735836.89 | [] | docs.celigo.com |
Azure App Service plan overview
In App Service, an app runs in an App Service plan. An App Service plan defines a set of compute resources for a web app to run. These compute resources are analogous to the server farm in conventional web hosting. One or more apps can be configured to run on the same computing resources (or in the same App Service plan).
When you create an App Service plan in a certain region (for example, West Europe), a set of compute resources is created for that plan in that region. Whatever apps you put into this App Service plan run on these compute resources as defined by your App Service plan. Each App Service plan defines:
- Region (West US, East US, etc.)
- Number of VM instances
- Size of VM instances (Small, Medium, Large)
- Pricing tier (Free, Shared, Basic, Standard, Premium, PremiumV2, Isolated).2 tier for App Service.
How does my app run and scale?
In the Free and Shared tiers, an app receives CPU minutes on a shared VM instance and cannot scale out. In other tiers, an app runs and scales as follows.
When you create an app in App Service, it is put into an App Service plan. When the app runs, it runs on all the VM instances configured in the App Service plan. If multiple apps are in the same App Service plan, they all share the same VM instances. If you have multiple deployment slots for an app, all deployment slots also run on the same VM instances. If you enable diagnostic logs, perform backups, or run WebJobs, they also use CPU cycles and memory on these VM instances.
In this way, the App Service plan is the scale unit of the App Service apps. If the plan is configured to run five VM instances, then all apps in the plan run on all five instances. If the plan is configured for autoscaling, then all apps in the plan are scaled out together based on the autoscale settings.
For information on scaling out an app, see Scale instance count manually or automatically.
How much does my App Service plan cost?
This section describes how App Service apps are billed. For detailed, region-specific pricing information, see App Service Pricing.
Except for Free tier, an App Service plan carries.
You don't get charged for using the App Service features that are available to you (configuring custom domains, TLS/SSL certificates, deployment slots, backups, etc.). The exceptions are:
- App Service Domains - you pay when you purchase one in Azure and when you renew it each year.
- App Service Certificates - you pay when you purchase one in Azure and when you renew it each year.
- IP-based TLS connections - There's an hourly charge for each IP-based TLS connection, but some Standard tier or above gives you one IP-based TLS connection for free. SNI-based TLS connections are free.
Note
If you integrate App Service with another Azure service, you may need to consider charges from these other services. For example, if you use Azure Traffic Manager to scale your app geographically, Azure Traffic Manager also charges you based on your usage. To estimate your cross-services cost in Azure, see Pricing calculator..
What if my app needs more capabilities or features?
Your App Service plan can be scaled up and down at any time. It is as simple as changing the pricing tier of the plan. You can choose a lower pricing tier at first and scale up later when you need more App Service features.
For example, you can start testing your web app in a Free App Service plan and pay nothing. When you want to add your custom DNS name to the web app, just scale your plan up to Shared tier. Later, when you want to create a TLS binding, scale your plan up to Basic tier. When you want to have staging environments, scale up to Standard tier. When you need more cores, memory, or storage, scale up to a bigger VM size in the same tier.
The same works in the reverse. When you feel you no longer need the capabilities or features of a higher tier, you can scale down to a lower tier, which saves you money.
For information on scaling up the App Service plan, see Scale up an app in Azure.
If your app is in the same App Service plan with other apps, you may want to improve the app's performance by isolating the compute resources. You can do it by moving the app into a separate App Service plan. For more information, see Move an app to another App Service plan.
Should I put an app in a new plan or an existing plan?
Since you pay for the computing resources your App Service plan allocates (see How much does my App Service plan cost?), you can potentially save money by putting multiple apps into one App Service plan. You can continue to add apps to an existing plan as long as the plan has enough resources to handle the load. However, keep in mind that apps in the same App Service plan all share the same compute resources. To determine whether the new app has the necessary resources, you need to understand the capacity of the existing App Service plan, and the expected load for the new app. Overloading an App Service plan can potentially cause downtime for your new and existing apps.
Isolate your app into a new App Service plan when:
- The app is resource-intensive.
- You want to scale the app independently from the other apps in the existing plan.
- The app needs resource in a different geographical region.
This way you can allocate a new set of resources for your app and gain greater control of your apps. | https://docs.microsoft.com/da-dk/azure/app-service/overview-hosting-plans | 2020-08-04T01:28:41 | CC-MAIN-2020-34 | 1596439735836.89 | [] | docs.microsoft.com |
Navigating XAML Elements with the Document Outline Window
Microsoft Silverlight will reach end of support after October 2021. Learn more.
The Document Outline window is a useful tool to view a XAML document in a hierarchical fashion. You can use the Document Outline window to preview or select XAML elements.
.png)
Viewing Your Document in the Document Outline Window
The following are the ways that you can open the Document Outline window in a Silverlight project.
On the View menu, point to Other Windows, and then select Document Outline.
Use the keyboard shortcut CTRL+ALT+T.
In Design view of the Silverlight Designer, right-click and select Document Outline on the shortcut menu.
In the lower-left corner of the Silverlight Silverlight Designer.
See Also | https://docs.microsoft.com/en-us/previous-versions/windows/silverlight/dotnet-windows-silverlight/ff602283%28v%3Dvs.95%29 | 2020-08-04T01:19:22 | CC-MAIN-2020-34 | 1596439735836.89 | [array(['images/ff356887.sl_designerdocoutline(en-us,vs.95',
'Document Outline Window Document Outline Window'], dtype=object)] | docs.microsoft.com |
Currently trials last 14 days.
Our trials are fully featured. So if you enter your card information in the billing section of the Cloud Console your trial will automatically convert to a paid plan at the end of the trial period.
You can cancel your Rocket.Chat workplace directly within the Cloud Console.
Send an email to [email protected] with the address of your workspace. Note: The request will take some time to fullfill.
Please, note that in case you cancel your workspace in the middle of billing period it will be accessible and in the “Cancelling” status till the end of your billing period.
If, for example, you are charged on the 5th of every month and decided to cancel the subscription on the 20th of December or later - your workspace will be operational till the 5th of January (this way we want to give customers the opportunity to use what they paid for). After this, it will stop running and will switch to “Cancelled”.
If you want to end your subscription, please note that it can be done only by the workspace administrator in your Cloud Console (cloud.rocket.chat) : navigate to Workspaces -> click on the three dots at the end of the correspondent workspace line -> select Cancel. This will stop your subscription and hibernate your server (your server will still exist in case you want to get back to Rocket.Chat later).
If you need a database dump or if you want to permanently delete your workspace and all the data associated with it - submit a ticket here on our Helpdesk or drop an email to [email protected] with the respective request.
Please, note that this can only be done by our Cloud engineers. If you want to create an additional workspace - submit a ticket here on our Helpdesk or drop us a letter at [email protected] - include workspace address you would like and the plan. NOTE: Additional workspaces are billed based on their individual usage. So if you have 5 users on one and 10 on the other you will get billed for them both separately.
If you need to grant the ownership of your workspace to another person or to change the primary email of your workspace - submit a ticket here on our Helpdesk or drop us an email to [email protected] with the respective request.
NOTE: ticket should be submitted from the admin email address (the email address the workspace is registered under) and should contain the email address the ownership should be granted to/the email address it (admin one) should be changed to.
Please, note that we charge our customers afterwards, not in advance. This means that on a particular day of each month you will be charged for the previous month of use.
If, for example, you started your trial on December 1st, it expired on December 15th and your subscription was automatically continued (you added payment method before trial expired) - on January 15th you will receive an invoice for the previous month (December 15th - January 15th).
If your trial expired and you didn’t manage to add your payment method to continue subscription, navigate to Payment methods in your Cloud Console (cloud.rocket.chat) -> click on Add payment method (top right corner) to add your card (credit/debit card is the only payment method we accept at the moment).
Region is defined upon creation. Please, note that customers can not migrate their instances between regions on their own. This process involves manual work required by Rocket.Chat Cloud team. If you need to switch region - submit a ticket or drop an email to [email protected].
Please, note that you can not delete the card that is the only one (default one) linked to your workspace as well as you can not delete card that was charged last - in both cases you will see the error message “Can't delete last payment option”.
If you need to change the card - add it as a new payment method and make it the default one (after that you will be able to delete all other cards).
If you want to remove the card information before canceling your subscription, please note that your payment data can only be deleted along with all the other data associated with your workspace. In order to request that, submit a ticket here on our Helpdesk or drop an email to [email protected].
Credit/debit card is the only payment method we accept at the moment.
For companies paying up-front for a specific period we do provide invoices to pay by wire transfer.
Adding more instances to your Cloud account can only be done by our engineers. Reach out to us at [email protected] and specify the following data for the new workspace you want to add:
workspace name
SaaS plan and billing period (monthly or annual payment)
number of seats
region of the deployment (US or EU)
Cloud account email can be changed at cloud.rocket.chat on the Profile page. If you have difficulties changing the email of the account owner contact us at [email protected]. The request must be sent from the original account owner email.
To request a custom domain, set up a CNAME DNS record for the domain name you want to have pointing to "cdns.use1.cloud.rocket.chat" (for US region) and to "cdns.euc1.cloud.rocket.chat" (for EU region). Afterwards, send us an email to [email protected] so we could make respective changes to your workspace.
Please note that custom domain is available in Silver and Gold plans only on SaaS offering. | https://docs.rocket.chat/rocket.chat-saas/faq | 2020-08-04T00:08:25 | CC-MAIN-2020-34 | 1596439735836.89 | [] | docs.rocket.chat |
Vroozi links: API guide, Authentication
1. Set up a Vroozi connection
Start establishing a connection to Vrooz Vrooz Vroozi account information
At this point, you’re presented with a series of options for providing Vroozi authentication.
Account type (required): Select one of the following, depending on the account you’re connecting to.
- Sandbox
- Production
API key (required): Enter the API key for your Vroozi app. Multiple layers of protection are in place, including AES 256 encryption, to keep your connection’s API key and Access token safe.
Access token (required): Enter the API token for your Vroozi app.
Log in to your Vroozi developer account at go.vroozi.com/ or sandbox-go.vroozi.com/, and enter your username and password.
Once you’ve logged in, click Credentials under the API integration menu.
Click Add new application to generate a Vroozi application and copy its connection information. Name the new application and click Save.
After you’ve added the application, Vroozi displays the Access token – just once. Refreshing it later will generate a new token.
Finally, copy the app’s API key for use in the connection.
3. Edit advanced Vroozi settings
Before continuing, you have the opportunity to provide additional configuration information, if needed, for the Vroozi connection.
4. Test and save
Once you have configured the Vroozi. | https://docs.celigo.com/hc/en-us/articles/360041787332-Set-up-a-connection-to-Vroozi | 2020-08-04T00:26:28 | CC-MAIN-2020-34 | 1596439735836.89 | [array(['/hc/article_attachments/360054503812/vroozi.png', None],
dtype=object)
array(['/hc/article_attachments/360054631351/vroozi-name.png', None],
dtype=object)
array(['/hc/article_attachments/360054503892/vroozi-adv.png', None],
dtype=object)
array(['/hc/article_attachments/360054504012/vroozi-admin-login.png',
None], dtype=object)
array(['/hc/article_attachments/360054631591/vroozi-admin-creds.png',
None], dtype=object)
array(['/hc/article_attachments/360054504352/vroozi-admin-newapp.png',
None], dtype=object)
array(['/hc/article_attachments/360054504452/vroozi-admin-token.png',
None], dtype=object)
array(['/hc/article_attachments/360054535892/vroozi-admin-app-id.png',
None], dtype=object)
array(['/hc/article_attachments/360056320832/paypal-adv.png', None],
dtype=object)
array(['/hc/article_attachments/360054632111/paypal-confirm.png', None],
dtype=object) ] | docs.celigo.com |
DockManager.AddPanel(DockingStyle) Method
Creates a new dock panel and docks it to the form (user control) using the specified dock style.
Namespace: DevExpress.XtraBars.Docking
Assembly: DevExpress.XtraBars.v20.1.dll
Declaration
public DockPanel AddPanel( DockingStyle dock )
Public Function AddPanel( dock As DockingStyle ) As DockPanel
Parameters
Returns
Remarks
The panel created by this method is docked to the DockManager's container control (DockManager.Form). The DockManager.Form property must refer to a valid object (not null), otherwise an exception will occur when this method is called.
The dock parameter specifies how the panel is docked. It can be set to the following values.
- DockingStyle.Top, DockingStyle.Left, DockingStyle.Bottom or DockingStyle.Right - A panel will be docked to the target form's corresponding edge.
- DockingStyle.Float - A panel will be floating.
- DockingStyle.Fill - A panel will be docked to the target form's center. Ensure the DockingOptions.AllowDockToCenter property is enabled.
The order in which the AddPanel method is called to create dock panels is important, since it affects the order of the panels within the container control. The first panel docked to the left, for instance, will occupy the left edge of the container control. The second panel docked to the top will occupy the top edge of the container control which is not occupied by the first panel, etc.
If a new panel is then docked to the left edge, it will be docked as follows:
Thus new panels are added to the panel's collection so that they occupy the corresponding edge of the container control's empty region. A panel's DockPanel.Index property specifies the position of the panel amongst the other panels residing on the same parent control. To dock a panel to a specific position within the panels' collection, use the DockPanel.DockTo overload which takes the dock and index parameters. For more information on the order of panels within the parent see the DockPanel.Index topic.
Visible panels that have been created and docked by the AddPanel method can be obtained via the DockManager.RootPanels collection. This collection does not include hidden panels, panels whose auto-hide functionality is enabled and panels which are docked to other panels.
Examples
The following code shows how to add a dock manager to a form and create a panel.
using DevExpress.XtraBars.Docking; // ... // Create a dock manager DockManager dm = new DockManager(); // Specify the form to which the dock panels will be added dm.Form = this; // Create a new panel and dock it to the left edge of the form DockPanel dp1 = dm.AddPanel(DockingStyle.Left); dp1.Text = "Panel 1"; | https://docs.devexpress.com/WindowsForms/DevExpress.XtraBars.Docking.DockManager.AddPanel(DevExpress.XtraBars.Docking.DockingStyle) | 2020-08-04T00:59:30 | CC-MAIN-2020-34 | 1596439735836.89 | [array(['/WindowsForms/images/dockpanel_index_2panels3906.png',
'DockPanel_Index_2Panels'], dtype=object)
array(['/WindowsForms/images/dockmanager_addpanel_3panels3911.png',
'DockManager_AddPanel_3Panels'], dtype=object) ] | docs.devexpress.com |
DK11 for .NET | TatukGIS.NDK.TGIS_LayerVectorSqlAbstract.ConnectionPoolId | Constructors | Fields | Methods | Properties | Events
Connection pool id. Used to group shared connections per viewer or create new connection when a layer is not attached to the viewer. User can switch to another shared connection by closing current connection, assigning existing pool id and again opening the connection (only if a layer is not attached to the viewer). A layer attached to a viewer always gets the viewer pool id.
Available also on: Delphi | Java | ActiveX.
// C# public String ConnectionPoolId { get {} set {} }
' VisualBasic Public Property ConnectionPoolId As String Get End Get Set(ByVal value As String) End Set End Property
// Oxygene public property ConnectionPoolId : String read read; | https://docs.tatukgis.com/DK11/api:dk11:net:tatukgis.ndk.tgis_layervectorsqlabstract.connectionpoolid | 2020-08-03T23:39:11 | CC-MAIN-2020-34 | 1596439735836.89 | [] | docs.tatukgis.com |
Transport API¶
Transports are used for direct communication with the Riemann server. They are usually used inside a Client, and are used to send and receive protocol buffer objects.
- class riemann_client.transport.BlankTransport¶
Bases: riemann_client.transport.Transport
A transport that collects messages in a list, and has no connection
Used by --transport none, which is useful for testing commands without contacting a Riemann server. This is also used by the automated tests in riemann_client/tests/test_riemann_command.py.
- exception riemann_client.transport.RiemannError¶
Bases: exceptions.Exception
Raised when the Riemann server returns an error message
- class riemann_client.transport.SocketTransport(host='localhost', port=5555)¶
Bases: riemann_client.transport.Transport
Provides common methods for Transports that use a sockets
- class riemann_client.transport.TCPTransport(host='localhost', port=5555, timeout=None)¶
Bases: riemann_client.transport.SocketTransport
Communicates with Riemann over TCP
- class riemann_client.transport.TLSTransport(host='localhost', port=5555, timeout=None, ca_certs=None)¶
Bases: riemann_client.transport.TCPTransport
Communicates with Riemann over TCP + TLS
Options are the same as TCPTransport unless noted
- connect()¶
Connects using TLSTransport.connect() and wraps with TLS
- class riemann_client.transport.Transport¶
Bases: object
Abstract transport definition
Subclasses must implement the connect(), disconnect() and send() methods.
Can be used as a context manager, which will call connect() on entry and disconnect() on exit.
- class riemann_client.transport.UDPTransport(host='localhost', port=5555)¶
Bases: riemann_client.transport.SocketTransport | https://riemann-client.readthedocs.io/en/latest/riemann_client.transport.html | 2020-08-04T00:04:24 | CC-MAIN-2020-34 | 1596439735836.89 | [] | riemann-client.readthedocs.io |
AppointmentConflictEventArgs Class
Provides data for the SchedulerControl.AllowAppointmentConflicts event.
Namespace: DevExpress.XtraScheduler
Assembly: DevExpress.XtraScheduler.v20.1.Core.dll
Declaration
public class AppointmentConflictEventArgs : AppointmentEventArgs
Public Class AppointmentConflictEventArgs Inherits AppointmentEventArgs
Remarks
The SchedulerControl.AllowAppointmentConflicts event occurs when the scheduler finds appointments that are in conflict and the SchedulerOptionsCustomization.AllowAppointmentConflicts property is set to Custom. The AppointmentConflictEventArgs class introduces the AppointmentConflictEventArgs.Conflicts property which returns the collection of appointments that are considered to be in conflict with the current appointment, and the AppointmentConflictEventArgs.Interval property that specifies the time interval of the appointment. The processed appointment is identified by the AppointmentEventArgs.Appointment property.
An instance of the AppointmentConflictEventArgs class with appropriate settings is automatically created and passed to the corresponding event's handler.
Examples
This example demonstrates how to manually determine whether there are appointment conflicts or not in some particular situations. If you're not satisfied with the automatic conflicts determination of the XtraScheduler (when appointments are considered to conflict if they have the same resource and their time intervals intersect), you can set the SchedulerOptionsCustomization.AllowAppointmentConflicts property to Custom and handle the SchedulerControl.AllowAppointmentConflicts event to perform your own conflict determination.
In the following example it's assumed that an appointment is a lecture conducted by a teacher in a classroom, and several groups of students may be present at the same lecture at the same time. Two appointments are consider to be in conflict in the following situations.
- Two different lecturers are scheduled to conduct a lecture at the same time in the same room.
- The same lecturer is scheduled to conduct different lectures at the same time in different rooms.
- The same group is scheduled at the same time in different rooms.
using DevExpress.XtraScheduler; // ... // Appointment = Lecture // Resource = Room // appointment.CustomFields["Teacher"] = Teacher // appointment.CustomFields["Group"] = Group of Students // Start of the AllowAppointmentConflicts event handler. // ================================================== private void schedulerControl1_AllowAppointmentConflicts(object sender, AppointmentConflictEventArgs e) { Appointment currentLecture = e.Appointment; object roomId = currentLecture.ResourceId; string teacher = GetTeacher(currentLecture); string group = GetGroup(currentLecture); DateTime start = currentLecture.Start; TimeSpan duration = currentLecture.Duration; // e.Conflicts contains all the lectures held at the same time. // All of them the SchedulerControl considers to be conflicting. AppointmentBaseCollection lecturesInTheSameTime = e.Conflicts; AppointmentBaseCollection conflictedLectures = new AppointmentBaseCollection(); AppointmentBaseCollection lecturesInDifferentRoomWithDifferentTeacher = new AppointmentBaseCollection(); ArrayList groupsInDifferentRoomWithDifferentTeacher = new ArrayList(); ArrayList groups = new ArrayList(); int count = lecturesInTheSameTime.Count; for (int i = 0; i < count; i++) { Appointment lecture = lecturesInTheSameTime[i]; // Check if the lecture is in the same room. if (Object.Equals(lecture.ResourceId, roomId)) { // Check if the lecture is by the same teacher. if (String.Compare(teacher, GetTeacher(lecture), true) == 0) { if (lecture.Start != start || lecture.Duration != duration) { // Conflict! Lecture with a bad time frame. conflictedLectures.Add(lecture); } else { // No conflict! groups.Add(GetGroup(lecture)); } } // Lecture by a different teacher. else { // Conflict! Lecture by a different teacher in the same room. conflictedLectures.Add(lecture); } } // Lecture in a different room. else { // Lecture by the same teacher. if (String.Compare(teacher, GetTeacher(lecture), true) == 0) { // Conflict! Lecture of the same teacher in the different room. conflictedLectures.Add(lecture); } // Lecture by a different teacher. else { // No conflict! Lecture by a different teacher in a different room. groupsInDifferentRoomWithDifferentTeacher.Add(GetGroup(lecture)); lecturesInDifferentRoomWithDifferentTeacher.Add(lecture); } } } // Search for the groups which should be in different rooms at the same time. count = groups.Count; for (int i = 0; i < count; i++) { int conflictIndex = groupsInDifferentRoomWithDifferentTeacher.IndexOf(groups[i]); if (conflictIndex >= 0) conflictedLectures.Add(lecturesInDifferentRoomWithDifferentTeacher[conflictIndex]); } e.Conflicts.Clear(); // If the conflictedLectures will contain no lectures after this event has occured, // then e.Conflicts will be empty and this will indicate that there are no conflicts. e.Conflicts.AddRange(conflictedLectures); } // End of the AllowAppointmentConflicts event handler. // ================================================== // Additional functions. // ==================== // Determines the teacher for the specified lecture. string GetTeacher(Appointment lecture) { object teacher = lecture.CustomFields["Teacher"]; if (teacher == null) return String.Empty; return teacher.ToString(); } // Determines the group for the specified lecture. 
string GetGroup(Appointment lecture) { object group = lecture.CustomFields["Group"]; if (group == null) return String.Empty; return group.ToString(); } | https://docs.devexpress.com/CoreLibraries/DevExpress.XtraScheduler.AppointmentConflictEventArgs | 2020-08-04T01:20:30 | CC-MAIN-2020-34 | 1596439735836.89 | [] | docs.devexpress.com |
Using an Event Callback to Manage Buffered Playback
To use an event callback, use the CreateEvent function to retrieve the handle of an event. In a call to the midiOutOpen function, specify CALLBACK_EVENT for the dwFlags parameter. After using the midiOutPrepareHeader function but before sending MIDI events to the device, create a nonsignaled event by calling the ResetEvent function, specifying the event handle retrieved by CreateEvent. Then, inside a loop that checks whether the MHDR_DONE bit is set in the dwFlags member of the MIDIHDR structure, use the WaitForSingleObject function, specifying the event handle and a time-out value of INFINITE as parameters.
An event callback is set by anything that might cause a function callback.
Because event callbacks do not receive specific close, done, or open notifications, an application may need to check the status of the process it is waiting for after the event occurs. It is possible that a number of tasks could be completed by the time WaitForSingleObject returns. | https://docs.microsoft.com/en-us/windows/win32/multimedia/using-an-callback-to-manage-buffered-playback | 2020-08-04T00:39:28 | CC-MAIN-2020-34 | 1596439735836.89 | [] | docs.microsoft.com |
This is still a draft and doesn't cover all subjects about Slack Compatibility yet
Would you like to have your app listed in Rocket.Chat's Marketplace but don't want to rewrite all the backend for your Slack listing?
Look no further!
This "compatibility layer" will help you make your Rocket.Chat App talk to your backend in no time :)
Initialize your Rocket.Chat App with bindings that make it compatible with your Slack App implementation.
First of all, you're gonna need to scaffold out a Rocket.Chat App directory, so make sure you follow our Getting Started guide (it's really quick).
We're gonna build upon the example in our Getting Started guide with the LiftOff app. After you've created the new app with our CLI, you should have the following folder structure:
liftoff/| .vscode/| node_modules/| .editorconfig| .gitignore| LiftoffApp.ts| app.json| icon.png| package-lock.json| package.json| tsconfig.json| tslint.json
Now, in your app's directory, you can install the Slack Compatibility Layer:
$ npm install RocketChat/slack-compatibility-for-apps
This will install the package directly from our repository on GitHub. We will publish it to NPM when it gets a bit more feature complete and battle tested.
That's almost it! Rocket.Chat Apps cannot include npm packages yet, so this package will copy itself to a
vendor folder in your app so that you can use it. Your folder structure should be like this now:
liftoff/| .vscode/| node_modules/| vendor/| | slack-compatible-layer/| | | src/| | | vendor/| | | SlackCompatibleApp.ts| .editorconfig| .gitignore| LiftoffApp.ts| app.json| icon.png| package-lock.json| package.json| tsconfig.json| tslint.json
Now you just have to extend the
SlackCompatibleAppclass instead of the default
App class from the Apps-Engine - this will make your Rocket.Chat App understand "Slack language". The only thing left to do is config the main class of your app with the features you have:
import {IAppAccessors,ILogger,} from '@rocket.chat/apps-engine/definition/accessors';// import { App } from '@rocket.chat/apps-engine/definition/App';import { IAppInfo } from '@rocket.chat/apps-engine/definition/metadata';import { SlackCompatibleApp as App } from './vendor/slack-compatible-layer/SlackCompatibleApp';export class LiftOffApp extends App {public config = {interactiveEndpoint: '',slashCommands: [{command: 'liftoff',requestURL: '',shortDescription: 'Tells the user if it is time to liftoff'}],}constructor(info: IAppInfo, logger: ILogger, accessors: IAppAccessors) {super(info, logger, accessors);}}
That's it! | https://docs.rocket.chat/apps-development/slack-compatibility | 2020-08-04T00:20:18 | CC-MAIN-2020-34 | 1596439735836.89 | [] | docs.rocket.chat |
Note:
The fix(es) herein are only applicable for initial deployments of Insights. Customers with existing Insights deployments who experience this issue are encouraged to contact UiPath support.
- When running a Repair / Modify installation of the Insights installer, the
InsightsAdminToolconfiguration would become corrupted and unable to decrypt the existing database connection strings.
Updated 26 days ago | https://docs.uipath.com/releasenotes/docs/insights-2019-10-6 | 2020-08-03T23:54:25 | CC-MAIN-2020-34 | 1596439735836.89 | [] | docs.uipath.com |
This page shows how to enable and configure encryption of secret data at rest.== - name: key2 secret: dGhpcyBpcyBwYXNzd29yZA== - secretbox: keys: - name: key1 secret: YWJjZGVmZ2hpamtsbW5vcHFyc3R1dnd4eXoxMjM0NT.
Note:
The alpha version of the encryption feature prior to 1.13 required to be configured with
kind: EncryptionConfig and
apiVersion: v1... This was a stepping stone in development to the
kms provider, introduced in 1.10, and beta since 1.12. provider. | https://v1-15.docs.kubernetes.io/docs/tasks/administer-cluster/encrypt-data/ | 2020-08-04T00:34:03 | CC-MAIN-2020-34 | 1596439735836.89 | [] | v1-15.docs.kubernetes.io |
Batch Modifications Overview
- 10 minutes to read
Batch modifications are designed to speed up a grid control's performance by eliminating superfluous updates (visual, re-sorting, selection updates, etc.). The main objective is to update the View only once - after all the necessary changes have been made.
Preventing Excessive Visual Updates
Every time you change a property or call a method that affects a grid's visual appearance, it is updated to reflect its new state. In some cases, changing a property only affects the corresponding display element, not the entire View, and so only this element is repainted. In most cases however, modifying a property or calling a method affects an entire View, and therefore the View is updated. To update a view, the BaseView.LayoutChanged method is called.
If you perform a sequence of modifications that cause View updates, you may notice that performance suffers, especially when these operations are time-consuming. For instance, deleting hundreds of rows in a View via the ColumnView.DeleteRow method may take a while, because the View is updated after every deletion. However, you can speed up such an operation by preventing unnecessary updates between individual operations. The grid control provides several methods to support such batch modifications.
Before applying a series of changes to a view, you can call the BaseView.BeginUpdate method. This locks the View and prevents subsequent visual updates. After you've performed all the necessary operations on a view, call the BaseView.EndUpdate method. This immediately updates the View to reflect all recent changes, as well as re-enables it for future updates.
If changes need to be applied to several Views at the same time, you could use the BaseView.BeginUpdate and BaseView.EndUpdate methods for each of the Views. However, an easier solution is to use the GridControl.BeginUpdate method instead, which locks all open Views in the grid control. Similarly, to unlock all the Views at once call the GridControl.EndUpdate method.
The BeginUpdate and EndUpdate methods use an internal counter to implement the appropriate functionality. The counter has an initial value of 0. Each call to the BeginUpdate method increments this counter by one. Each call to the EndUpdate method decrements this counter by one and if its new value is zero, the View is updated. Note that each call to BeginUpdate must be paired with a call to EndUpdate. If you call BeginUpdate, but forget to call EndUpdate afterwards or EndUpdate isn't called because an exception occurred, the View will no longer be refreshed. To ensure that EndUpdate is always called even if an exception occurs, use the try...finally statement.
These points are applied to all the batch modification methods described in this section.
The following example shows how to use the BaseView.BeginUpdate and BaseView.EndUpdate methods to prevent superfluous updates from occurring when deleting records that meet a specific condition.
using DevExpress.XtraGrid.Views.Grid; using DevExpress.XtraGrid.Columns; //... GridView currentView = gridView1; GridColumn compColumn = gridView1.Columns["moduleid"]; int compValue = 3; int rowCount = currentView.DataRowCount; int rowHandle; currentView.BeginUpdate(); try { for (rowHandle = rowCount - 1; rowHandle >= 0; rowHandle--) { if ((int)currentView.GetRowCellValue(rowHandle, compColumn) == compValue) currentView.DeleteRow(rowHandle); } } finally { currentView.EndUpdate(); }
Preventing Excessive Internal Data Updates
The BeginUpdate and EndUpdate methods prevent only visual updates. When you change a property or call a method within a BeginUpdate and EndUpdate pair, the corresponding actions take effect immediately. But, the results are displayed only after EndUpdate has been called.
Some operations you may perform force the grid control to reload, re-sort or regroup data. Each of these operations causes an internal data update, and when such operations are performed in a sequence, multiple data updates occur. Excessive data updates cannot be avoided by using the BeginUpdate and EndUpdate methods. Instead, the BaseView.BeginDataUpdate and BaseView.EndDataUpdate methods must be used. These methods improve a control's performance when a sequence of any of the following operations is performed:
- made changes BeginDataUpdate and EndDataUpdate methods.
NOTE
Do not call the BeginDataUpdate and EndDataUpdate methods in master-detail mode if any detail is currently open, as this can cause some painting artifacts.
The ColumnView.BeginSort and ColumnView.EndSort methods are equivalent to the BeginDataUpdate and EndDataUpdate methods.
Example(); }
Preventing Excessive Selection Updates
Views enable you and end-users to select multiple records (rows in Grid Views, cards in Card Views). Refer to the Multiple Row and Cell Selection and End-User Capabilities: Selecting Rows/Cards documents for information on the methods, shortcuts and mouse operations available for working with selections.
The ColumnView.SelectionChanged event occurs every time the selection is changed. For instance, you can write code for an event that fills a list box with the values of the selected records. Every time you add a record to or delete a record from the selection, this event is raised, so your list box is maintained automatically.
Suppose you need to perform several successive selection operations via code (for instance, clear the selection and then call the ColumnView.SelectRow or the ColumnView.SelectRange method). The event will be raised and the View will be updated several times in these cases, depending upon the number of method calls made. To avoid such superfluous updates, use the BaseView.BeginSelection, and BaseView.EndSelection methods. The methods lock and unlock selection updates respectively.
After the BaseView.BeginSelection method is called, the ColumnView.SelectionChanged event does not occur, and the View is not updated when the selection is changed via code. BaseView.EndSelection enables selection updates, fires the ColumnView.SelectionChanged event and updates the View to reflect all the recent selection changes.
So, you can use these methods to prevent the display from flickering when selecting multiple rows via code and when performing time-consuming operations or operations that require screen updates via the ColumnView.SelectionChanged event.
Using the BaseView.BeginSelection and BaseView.EndSelection methods is similar to using the BeginUpdate and EndUpdate methods. You must always ensure that each call to BaseView.BeginSelection is followed by a corresponding call to the BaseView.EndSelection method. For this purpose, we advise you to use a try...finally statement. Nesting method calls are also supported.
The BaseView.BeginSelection method only locks selection related updates and does not lock any other updates. So if you need to prevent visual updates, you still need to use the BaseView.BeginUpdate method.
The following code demonstrates how to prevent selection updates when selecting rows that meet a specific condition.
using DevExpress.XtraGrid.Views.Grid; using DevExpress.XtraGrid.Columns; //... GridView currentView = advBandedGridView1; GridColumn compColumn = currentView.Columns["Discontinued"]; currentView.OptionsSelection.MultiSelect = true; currentView.ExpandAllGroups(); bool compValue = true; currentView.BeginSelection(); try { currentView.ClearSelection(); int rowHandle; for (rowHandle = 0; rowHandle < currentView.DataRowCount; rowHandle ++) if ((bool)currentView.GetRowCellValue(rowHandle, compColumn) == compValue) currentView.SelectRow(rowHandle); } finally { currentView.EndSelection(); }
In some cases, you can call the BaseView.CancelSelection method instead of the BaseView.EndSelection method. This method enables selection updates, but it does not fire the ColumnView.SelectionChanged event, and therefore does not repaint the View. For instance, if you call BaseView.BeginSelection, but don't change the selection, you can use BaseView.CancelSelection rather then BaseView.EndSelection to avoid the unnecessary final update.
Preventing Excessive Collection Updates
The grid control provides methods to prevent excessive updates when manipulating summary items and style conditions that are stored in corresponding collections.
Summary items are supported by the GridSummaryItemCollection class. The GridSummaryItemCollection.BeginUpdate and GridSummaryItemCollection.EndUpdate methods enable you to perform batch modifications. Use these methods in a similar way to the other BeginUpdate and EndUpdate methods. Just enclose the code that modifies the summary collection (adds or deletes individual summaries, changes summary settings), within the GridSummaryItemCollection.BeginUpdate and GridSummaryItemCollection.EndUpdate methods, and this will prevent superfluous summary re-calculations and View updates. Summaries will be re-calculated only once - after the GridSummaryItemCollection.EndUpdate method has been called.
The following code demonstrates how to prevent needless summary recalculations when adding two total summaries.
using DevExpress.XtraGrid; using DevExpress.XtraGrid.Views.Grid; //... GridView currentView = advBandedGridView1; GridSummaryItemCollection coll = currentView.Columns[0].SummaryItem.Collection; coll.BeginUpdate(); try { GridSummaryItem sumItem = currentView.Columns["Price"].SummaryItem; sumItem.SummaryType = DevExpress.Data.SummaryItemType.Max; sumItem.FieldName = "Price"; sumItem.DisplayFormat = "Max: {0:c0}"; sumItem = currentView.Columns["Model"].SummaryItem; sumItem.SummaryType = DevExpress.Data.SummaryItemType.Count; sumItem.FieldName = "Model"; sumItem.DisplayFormat = "Records: {0}"; } finally { coll.EndUpdate(); }
Note that we use a try...finally block to ensure that the GridSummaryItemCollection.EndUpdate method is always called after the GridSummaryItemCollection.BeginUpdate call.
The GridSummaryItemCollection class provides the GridSummaryItemCollection.CancelUpdate method, which you can use in some cases instead of the GridSummaryItemCollection.EndUpdate method. This method enables summary collection updates, but does not force an immediate summary recalculation and does not update the View.
For more information on summaries, refer to the Summaries and Summaries documents.
Style conditions are represented by the StyleFormatConditionCollection class. This class also provides the FormatConditionCollectionBase.BeginUpdate and FormatConditionCollectionBase.EndUpdate methods, which you can use to enclose the code that changes the collection, and so prevent unnecessary View updates. Use these methods in a similar way to the other batch modification methods. | https://docs.devexpress.com/WindowsForms/773/controls-and-libraries/data-grid/batch-modifications/batch-modifications-overview | 2020-08-04T01:07:27 | CC-MAIN-2020-34 | 1596439735836.89 | [] | docs.devexpress.com |
Flush with care
Posted by: Sue Loh
I would like to add some comments to extend what was said on a recent post on the Windows Mobile blog. It says that developers can use RegFlushKey to make sure their registry data persists, but that legacy applications will be okay because we've built some persistence into the OS. I actually try to go further and discourage application developers from calling RegFlushKey!
Just like a file system cache, the registry persistence is NOT, repeat NOT supposed to be the responsibility of the application developer. They should be able to modify the registry however they want without sparing a thought for persistence. If applications get into the habit of calling RegFlushKey, they are only contributing to performance problems.
Instead it is up to us (the Windows CE and Windows Mobile teams) as well as up to the OEM to solve the persistence problem. We've put registry flushing into a few important places in the OS, and wrote an optional flush thread that OEMs can choose to enable. If you are an OEM, then okay, call RegFlushKey in a way that makes sense for your device. Or enable our registry flush thread. But if you're writing an application, use it with care so as not to negatively impact performance. | https://docs.microsoft.com/en-us/archive/blogs/ce_base/flush-with-care | 2020-08-04T00:33:52 | CC-MAIN-2020-34 | 1596439735836.89 | [] | docs.microsoft.com |
An app is simply a sequence of events.
The best apps define an elegant sequence of events that happen without the user even noticing. A great sign in flow for example can happen in seconds but the logic and sequence behind them is a work of art that has been iterated on many times (often using Thunkable's Live Testing app).
This is where Thunkable Blocks come in.
Thunkable Blocks are the building blocks of a great experience for your app users. Every component has its own set of blocks to start or trigger an event and set and change properties.
They can be connected to a commonly used set of blocks that range from opening screens, setting up logic, reformatting data or simplifying code.
Below are some of the most commonly used blocks and where you might find them: | https://docs.thunkable.com/blocks | 2020-08-03T23:09:55 | CC-MAIN-2020-34 | 1596439735836.89 | [] | docs.thunkable.com |
Allow your users to view your favorite PDFs -- legal contracts, art posters or maybe even a PhD dissertation -- all from the convenience of your app
Simply upload your PDFs into the File property and voila!
There is a limit of 50 MB per app so be careful or size down your larger PDFs if it may exceed this limit
Once you upload the PDF, you will be able to view it in your app and pinch to zoom in. | https://docs.thunkable.com/pdf-reader | 2020-08-03T23:52:07 | CC-MAIN-2020-34 | 1596439735836.89 | [] | docs.thunkable.com |
In our day of mobile technologies, there is no reason not to stay connected with your coworkers on the go. It’s essential for teams to be able to collaborate, and sometimes a simple phone call or email doesn’t do enough. Conference calls are a great solution, but the traditional voice only phone conferences can get […]
| By conbopAdmin | In General, News
Increase Sponsor revenues with mobile Conference Apps
Conference managers have a lot on their plates – from planning the entire event, recruiting attendees, to coming up with ideas for revenue streams & sponsorships. In the midst of the rush that is event planning, we often revert back to what we know works – what we did last year. USBs, lunches, hospitality events […] | https://docs.xponow.com/category/news/ | 2020-08-03T23:36:22 | CC-MAIN-2020-34 | 1596439735836.89 | [] | docs.xponow.com |
Payload Type docker containers are located at
Apfell/Payload_Types/each with their own folder. The currently running ones can be checked with the
sudo ./status_check.sh script. Check A note about containers for more information about them.
Containers allow Apfell to have each Payload Type establish its own operating environment for payload creation without causing conflicting or unnecessary requirements on the host system.
Payload Type containers only come into play for a few special scenarios:
Payload Creation
Module Load
Command Transforms
Outside of these scenarios, the payload type containers aren't used. This is why payload types can be declared External Types without causing much issue for Apfell overall.
For more information on editing or creating new containers for payload types, see Payload Type Development. | https://docs.apfell.net/payload-types/containers | 2020-08-03T23:53:37 | CC-MAIN-2020-34 | 1596439735836.89 | [] | docs.apfell.net |
Note: We’ve renamed our SmartConnectors to Integration Apps..
Please sign in to leave a comment. | https://docs.celigo.com/hc/en-us/articles/228342527 | 2020-08-04T00:23:32 | CC-MAIN-2020-34 | 1596439735836.89 | [] | docs.celigo.com |
Friday Five - April 12, 2013
1. Generate a Windows Phone 8 Local Database DataContext from an Existing Database
By SQL Server MVP Erik Jensen
2. Creating an Offline Web Platform Installer for Service Bus 1.0
By Visual Studio ALM MVP Ryan Crommwell
3. Fetching Windows Azure Mobile Services Data in XAML based Windows Store Application
By Integration MVP Dhananjay Kumar
4. Getting started with Git and TFS
By Visual Studio ALM MVP Esteban Garcia
5. Design-time Data for Windows Store Apps with C#
By Silverlight MVP Jeremy Likness | https://docs.microsoft.com/en-us/archive/blogs/mvpawardprogram/friday-five-april-12-2013 | 2020-08-04T00:30:26 | CC-MAIN-2020-34 | 1596439735836.89 | [] | docs.microsoft.com |
Warning:
This extension is no longer functional after updating Windows 10 to version 1903. We recommend reinstalling the extension after performing this specific Windows 10 update.
This extension helps you create browser automations in Edge (Legacy). It can be installed from Studio, the Command Prompt, or from the Microsoft Store.
Before you develop automation projects in Edge, it's important to know that Microsoft released the New Edge Browser. As such, current automation projects in Edge (Legacy) are not compatible with the New Edge Browser.
Install the Extension for Edge
Requires Windows 10 versions 1803 and above.
Note:
Your machine needs to be part of a domain to be able to install the Edge extension.
From UiPath Studio
- Access the Tools page from the Studio Backstage View. The extensions you can install become visible.
- Click the Edge button. A confirmation dialog box is displayed. The extension is now installed..
- extension to install, then click the Launch button. The Edge browser is opened and a pop-up is displayed.
- Click the Turn it on button to activate the UiPath Extension for Edge. The extension is now installed and a new UiPath icon appears in the top right.
Troubleshoot Extension for Edge
Note:
The UiPath Extension for Edge can only be installed on Windows 10 versions 1803 and above.
Interactive Selection Fails After Installing the Extension
When Edge is opened, corresponding background processes are also created and remain active even after the browser is closed. After the extension is installed and the browser is closed, the Edge process is still running in the background and does not get updated with the new information to be able to generate selectors in Edge.
Before you create your first automation projects for Edge, you must close the browser and terminate the corresponding process from Task Manager after you install the extension.
Starting a Job From Orchestrator Fails
On Windows logon, the Edge browser automatically starts as a background process. The extensions page is also loaded in the background, but is closed after several seconds. However, the extensions page is not reloaded when Edge is booted up, making extensions unusable.
There are two methods to deal with this situation:
- Restart the Edge browser.
- Set a default browser other than Edge.
The "htmlWindowName" Attribute is Not Validated
Selectors which contain the
htmlWindowName attribute can not be validated. This is caused by a Windows known issue. Please note that this issue no longer occurs in Windows v1909 and greater.
Selectors Are Not Generated For Local Web Pages
If your process uses local web pages (files stored on the local machine) in Edge, selectors are not generated for any element on those pages. This is caused by a Windows known issue.
Imprecise Actions Performed by Type Activities
In particular cases, the Type Into, Type Secure Text, and Send Hotkey activities erroneously interact with their target elements. To prevent this, you need to enable the ClickBeforeTyping property.
As a general rule for browser automations, it is also recommended to enable the SimulateClick property for activities which perform click operations.
Updated 2 months ago | https://docs.uipath.com/installation-and-upgrade/docs/studio-extension-for-edge | 2020-08-04T00:28:55 | CC-MAIN-2020-34 | 1596439735836.89 | [array(['https://files.readme.io/d17437d-Edge_extension_icon.png',
'Edge extension icon.png'], dtype=object)
array(['https://files.readme.io/d17437d-Edge_extension_icon.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/6c09404-Edge_Browser_Popup.png',
'Edge Browser Popup.png'], dtype=object)
array(['https://files.readme.io/6c09404-Edge_Browser_Popup.png',
'Click to close...'], dtype=object) ] | docs.uipath.com |
Import files into Eagle
If you need to add files to Eagle, such as images and folders, please refer to the following ways:
Import Files
You can use the "Import" function to import files into Eagle.
- Drag and drop files into Eagle
- Use "Copy & Paste"
- Use "Eagle Package"
The "Eagle Package" is a file type only for Eagle users, so you can import images to Eagle users by using Eagle Package. Click for details.
Import an Organized File
If you import an organized folder, Eagle will automatically create a classification as your file's organization and the subfolders will be kept, so, you don't need to category the folder again
| https://docs-en.eagle.cool/article/521-import-images-into-eagle | 2020-08-03T22:54:57 | CC-MAIN-2020-34 | 1596439735836.89 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5cc6cd5d04286301e753d2f7/images/5de77ef42c7d3a7e9ae4b49f/file-sggP4T8Mtg.jpg',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5cc6cd5d04286301e753d2f7/images/5de77efc04286364bc927796/file-BD3io2Ycw4.jpg',
None], dtype=object) ] | docs-en.eagle.cool |
ToolTipController.ImageList Property
Gets or sets the source of the images that can be displayed within tooltips.
Namespace: DevExpress.Utils
Assembly: DevExpress.Utils.v20.1.dll
Declaration
[DefaultValue(null)] [DXCategory("Appearance")] public virtual object ImageList { get; set; }
<DefaultValue(Nothing)> <DXCategory("Appearance")> Public Overridable Property ImageList As Object
Property Value
Remarks
The ImageList property accepts the following image collections:
- ImageCollection - Supports image transparency.
- SharedImageCollection - Supports image transparency. Allows you to share images between controls within multiple forms.
- SvgImageCollection - Stores vector icons that can scale without losing their quality on high resolution devices.
- ImageList.
See the ToolTipController.ImageIndex topic for information on providing images for tooltips.
See Also
Feedback | https://docs.devexpress.com/WindowsForms/DevExpress.Utils.ToolTipController.ImageList | 2020-08-04T01:13:46 | CC-MAIN-2020-34 | 1596439735836.89 | [] | docs.devexpress.com |
Table of Contents
Out of the box Slackware PXE Server
Slackware has added a PXE server to its installer since the 13.37 release. It is intended to provide an easy method for network installations of Slackware, provided you have one spare computer with a network card (not a wireless card!!!) This article describes the procedure for a network installation using the built-in PXE server, using the Slackware 13.37 installation media as an example (but it will work for later versions of Slackware just as well):
Requirements
A Slackware DVD or bootable USB stick, containing a complete set of Slackware package directories. A net-boot “mini-ISO” or a bootable Slackware CDROM are not sufficient because they do not contain all Slackware packages. The PXE server in the installer is not able to use an external package source - all packages have to be present on the boot media..
Configuring the PXE server”:
-
- After the network interface has been configured, you will see a number of dialogs that let you determine whether the installer should start a DHCP server or not. If your network already runs a DHCP server, then it should not be disrupted by a “rogue” DHCP server! You will have an angry network administrator at your desk in no time.
Instead, pxesetup is smart enough that it only provides the required netboot functionality by acting as a proxy DHCP server:
-
- The setup program tries to make an educated guess about the range of IP addresses to be used if it is going to start a DHCP server. A dialog will present the proposed configuration. There are two configurable items in that dialog: the lower and upper values for the IP address range that will be used by the built-in DHCP server.
The IP addresses in this range will be available for the PXE clients that request a network boot configuration from the PXE server. Please check this address range, and if you think you have a computer in your network that uses an IP address in this range, you must change the values for the upper and/or lower values and resolve the conflict.
This range of IP addresses must not be used by any computer on your LAN !
- If you are satisfied with the values, select “OK” to continue to the next section.
- SOURCE:
The
SOURCEsection uses the exact same dialog screens as you know from the Slackware installer. The only correct selection is “
Use a Slackware DVD” (There is one exception which I will explain in more detail all the way down, and that is when you used the “
usbimg2disk.sh” script to create a complete Slackware installer on a bootable USB stick):
- The pxesetup program will find the Slackware DVD or CD and that’s it!
More information is not required and the PXE server will be started automatically. Another service is started as well at that moment: a HTTP server which will serve up Slackware packages to the clients that use our PXE server.
On-screen you will see the log file of the “
dnsmasq” program which provides most of the netboot functionality. The first screenshot is the case where your network provides a DHCP server, while the second screenshot shows the situation where the Slackware PXE server has started its own internal DHCP server:
- You can press the “EXIT” at any time, which will kill the PXE services (DHCP, TFTP and HTTP). You can then restart these services from the main menu again, by selecting the
ACTIVATEentry.
PXE server works, what about PXE clients
There is no fun with a PXE server if you do not have PXE clients that use it to boot from so that you can install Slackware on them! Make sure that the computer that you want to install Slackware on is connected to the network with a cable, and power it up. In the BIOS (or using whatever method is available for that machine) select “LAN boot” and watch what happens when the computer boots. You will see a prompt that says:
Press [F8] for a boot menu…
Actually pressing the F8 key gives you two choices: continue with netbooting, or fallback to boot-up from the local hard disk. Or if you don’t do anything at all (takes 2 seconds only) your network card will start looking for a PXE server and the communication starts. This can be witnessed on the PXE server’s screen:
What happens next should all look pretty familiar: the Slackware welcome screen will appear and you can either press ENTER for the default kernel or make your own choice of parameters. The noteworthy part is where you get to select the package
SOURCE. There is only one working option, and that is “
Install from FTP/HTTP server“. After selecting this option, your computer’s network card will be configured using DHCP, and then you will notice that the questions for “
URL of the ftp or http server where the Slackware sources are stored” and “
What is the Slackware source directory?” have default values already filled-in! You should accept these values, since they are supplied by the PXE server!
The remaining steps should be familiar if you have ever tried installing from a HTTP server before.
Using a USB based installer instead of the CD/DVD | http://docs.slackware.com/slackware:pxe_install | 2018-04-19T11:39:46 | CC-MAIN-2018-17 | 1524125936914.5 | [array(['https://docs.slackware.com/lib/plugins/bookcreator/images/add.png',
None], dtype=object)
array(['https://docs.slackware.com/lib/plugins/bookcreator/images/del.png',
None], dtype=object)
array(['/_media/slackware:pxe:pxeserver03.png', None], dtype=object)
array(['/_media/slackware:pxe:pxeserver19.png', None], dtype=object)] | docs.slackware.com |
Resetting Your Lost or Forgotten Passwords or Access Keys
If you lose or forget your passwords or access keys, you cannot retrieve them from IAM. Instead, you can reset them using the following methods:
AWS account root user password – If you forget your root user password, you can reset the password from the AWS Management Console. For details, see Resetting a Lost or Forgotten Root User Password later in this topic.
AWS account access keys – If you forget your account access keys, you can create new access keys without disabling the existing access keys. If you are not using the existing keys, you can delete those. For details, see Creating Access Keys for the Root User and Deleting Access Keys from the Root User.
IAM user password – If you are an IAM user and you forget your password, you must ask your administrator to reset your password. To learn how an administrator can manage your password, see Managing Passwords for IAM Users.
IAM user access keys – If you are an IAM user and you forget your access keys, you will need new access keys. If you have permission to create your own access keys, you can find instructions for creating a new one at Creating, Modifying, and Viewing Access Keys (Console). If you do not have the required permissions, you must ask your administrator to create new access keys. If you are still using your old keys, ask your administrator not to delete the old keys. To learn how an administrator can manage your access keys, see Managing Access Keys for IAM Users.
You should follow the AWS best practice of periodically changing your password and AWS access keys. In AWS, you change access keys by rotating them. This means that you create a new one, configure your applications to use the new key, and then delete the old one. You are allowed to have two access key pairs active at the same time for just this reason. For more information, see Rotating Access Keys.
Resetting a Lost or Forgotten Root User Password
When you first created your AWS account, you provided an email address and password. These are your AWS account root user credentials. If you forget your root user password, you can reset the password from the AWS Management Console.
To reset your root user password:
Use your AWS account email address to begin signing in to the AWS Management Console as the root user.
Note
If you are signed in to the AWS Management Console with IAM user credentials, then you must sign out before you can reset the root user password. If you see the account-specific IAM user sign-in page, choose Sign-in using root account credentials near the bottom of the page. If necessary, provide your account email address to access the Root user sign in page.
Choose Forgot your password?.
Provide the email address that you used to create the account. Then provide the CAPTCHA text and choose Continue.
Check the email that is associated with your AWS account for a message from Amazon Web Services. The email will come from an address ending in
@amazon.comor
@aws.amazon.com. Follow the directions in the email. If you don't see the email in your account, check your spam folder. If you no longer have access to the email, see I need to access an old account. | https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys_retrieve.html | 2018-04-19T11:53:46 | CC-MAIN-2018-17 | 1524125936914.5 | [] | docs.aws.amazon.com |
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region.
To get started, request an instance of the AWSClientFactory via this class's static Instance
member. Use the factory instance to create clients for all the Web Services needed by
the application. | https://docs.aws.amazon.com/sdkfornet/latest/apidocs/items/NNET45.html | 2018-04-19T12:05:38 | CC-MAIN-2018-17 | 1524125936914.5 | [] | docs.aws.amazon.com |
Generating Actions¶
Generating actions can be done using Django signals.
A special
action signal is provided for creating the actions.
from django.db.models.signals import post_save from actstream import action from myapp.models import MyModel # MyModel has been registered with actstream.registry.register def my_handler(sender, instance, created, **kwargs): action.send(instance, verb='was saved') post_save.connect(my_handler, sender=MyModel)
There are several ways to generate actions in your code. You can do it through custom forms or by overriding predefined model methods, such as Model.save(). More on this last option can be found here: <>.
The logic is to simply import the action signal and send it with your actor, verb, target, and any other important arguments.
from actstream import action from myapp.models import Group, Comment # User, Group & Comment have been registered with # actstream.registry.register action.send(request.user, verb='reached level 10') ... group = Group.objects.get(name='MyGroup') action.send(request.user, verb='joined', target=group) ... comment = Comment.create(text=comment_text) action.send(request.user, verb='created comment', action_object=comment, target=group)
Actions are stored in a single table in the database using Django’s ContentType framework and GenericForeignKeys to create associations with different models in your project.
Actions are generated in a manner independent of how you wish to query them so they can be queried later to generate different streams based on all possible associations. | http://django-activity-stream.readthedocs.io/en/latest/actions.html | 2018-04-19T11:14:47 | CC-MAIN-2018-17 | 1524125936914.5 | [] | django-activity-stream.readthedocs.io |
Search Carousel¶
When you click on an item in the Search Panel, Brightspot opens the item in the Content Edit Page—and lists all other found items in a search carousel.
The search carousel is useful when you need to make updates to similar or related items; clicking an item in the search carousel opens the item in the content edit form. There is no need to continuously return to the search results and open the next item requiring an update.
(Brightspot administrators can disable the search carousel. For details, see Settings for the Global Site.)
See also: | http://docs.brightspot.com/cms/editorial-guide/search/search-carousel.html | 2018-04-19T11:31:40 | CC-MAIN-2018-17 | 1524125936914.5 | [] | docs.brightspot.com |
txacme: A Twisted implementation of the ACME protocol¶
ACME is Automatic Certificate Management Environment, a protocol that allows clients and certificate authorities to automate verification and certificate issuance. The ACME protocol is used by the free Let’s Encrypt Certificate Authority.
txacme is an implementation of the protocol for Twisted, the
event-driven networking engine for Python.
txacme is still under heavy development, and currently only an
implementation of the client side of the protocol is planned; if you are
interested in implementing or have need of the server side, please get in
touch!
txacme’s documentation lives at Read the Docs, the code on GitHub.
It’s rigorously tested on Python 2.7, 3.4+, and PyPy. | http://txacme.readthedocs.io/en/stable/ | 2018-04-19T11:14:49 | CC-MAIN-2018-17 | 1524125936914.5 | [] | txacme.readthedocs.io |
Using Elastic Beanstalk with Amazon S3
Amazon S3 provides highly durable, fault-tolerant data storage. Behind the scenes, Amazon S3 stores objects redundantly on multiple devices across multiple facilities in a region.
Elastic Beanstalk creates an Amazon S3 bucket named
elasticbeanstalk-
region-
account-id for
each region in which you create environments. Elastic Beanstalk uses this bucket to store application
versions, logs, and other supporting files.
Elastic Beanstalk applies a bucket policy to buckets it creates to allow environments to write to the bucket and prevent accidental deletion. If you need to delete a bucket that Elastic Beanstalk created, first delete the bucket policy from the Permissions section of the bucket properties in the Amazon S3 Management Console.
To delete an Elastic Beanstalk storage bucket (console)
Open the Amazon S3 Management Console
Select the Elastic Beanstalk storage bucket.
Choose Properties.
Choose Permissions.
Choose Edit Bucket Policy.
Choose Delete.
Choose OK.
Choose Actions and then choose Delete Bucket.
Type the name of the bucket and then choose Delete. | http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/AWSHowTo.S3.html | 2017-04-23T15:53:34 | CC-MAIN-2017-17 | 1492917118713.1 | [] | docs.aws.amazon.com |
obspy.signal.konnoohmachismoothing.konno_ohmachi_smoothing_window¶
- konno_ohmachi_smoothing_window(frequencies, center_frequency, bandwidth=40.0, normalize=False)[source]¶
Returns the Konno & Ohmachi Smoothing window for every frequency in frequencies.
Returns the smoothing window around the center frequency with one value per input frequency defined as follows (see [Konno1998]):
[sin(b * log_10(f/f_c)) / (b * log_10(f/f_c)]^4 b = bandwidth f = frequency f_c = center frequency
The bandwidth of the smoothing function is constant on a logarithmic scale. A small value will lead to a strong smoothing, while a large value of will lead to a low smoothing of the Fourier spectra. The default (and generally used) value for the bandwidth is 40. (From the Geopsy documentation)
All parameters need to be positive. This is not checked due to performance reasons and therefore any negative parameters might have unexpected results. | http://docs.obspy.org/packages/autogen/obspy.signal.konnoohmachismoothing.konno_ohmachi_smoothing_window.html | 2018-03-17T06:10:03 | CC-MAIN-2018-13 | 1521257644701.7 | [] | docs.obspy.org |
App Service on Linux Documentation
5-Minute Quickstarts
Learn how to deploy continuously:
Step-by-Step Tutorials
Learn how to deploy, manage, and monitor secure web application on App Service on Linux
- Create an application using .NET Core with Azure SQL DB or Node.js with MongoDB
- Map an existing custom domain to your application
- Bind an existing SSL certificate to your application
- Add a CDN to your application
Samples
Find scripts to manage common tasks. | https://docs.microsoft.com/en-us/azure/app-service/containers/ | 2018-03-17T06:37:08 | CC-MAIN-2018-13 | 1521257644701.7 | [] | docs.microsoft.com |
EnableKey
Sets the state of a customer master key (CMK) to enabled, thereby permitting its use for cryptographic operations. You cannot perform this operation on a CMK in a different AWS account..EnableKey X-Amz-Date: 20161107T221800Z Content-Type: application/x-amz-json-1.1 Authorization: AWS4-HMAC-SHA256\ Credential=AKIAI44QH8DHBEXAMPLE/20161107/us-east-2/kms/aws4_request,\ SignedHeaders=content-type;host;x-amz-date;x-amz-target,\ Signature=74d02e36580c1759255dfef66f1e51f3542e469de8c7c8fa5fb21c042e518295 {"KeyId": "1234abcd-12ab-34cd-56ef-1234567890ab"}
Example Response
HTTP/1.1 200 OK Server: Server Date: Mon, 07 Nov 2016 22:18:00 GMT Content-Type: application/x-amz-json-1.1 Content-Length: 0 Connection: keep-alive x-amzn-RequestId: 0b588162-a538-11e6-b4ed-059c103e7a90
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following: | https://docs.aws.amazon.com/kms/latest/APIReference/API_EnableKey.html | 2018-03-17T06:55:31 | CC-MAIN-2018-13 | 1521257644701.7 | [] | docs.aws.amazon.com |
_OPERATION is generated if
glStencilMaskSeparate is executed between the execution of glBegin and the corresponding execution of glEnd.
glGet with argument
GL_STENCIL_WRITEMASK,
GL_STENCIL_BACK_WRITEMASK, or
GL_STENCIL_BITS
glColorMask, glDepthMask, glIndexMask, glStencilFunc, glStencilFuncSeparate, glStencilMask, glStencilOp, glStencilOpSeparate
Copyright © 2006 Khronos Group. This material may be distributed subject to the terms and conditions set forth in the Open Publication License, v 1.0, 8 June 1999.. | http://docs.gl/gl2/glStencilMaskSeparate | 2018-03-17T06:07:30 | CC-MAIN-2018-13 | 1521257644701.7 | [] | docs.gl |
Add Routes
From working the previous steps, we have our open source friends, the ability to have a resource-list house party, and a working demo. It's time to build our components to handle user interaction and send requests to our Hapi Plugin (for getting more friends to our party). We will use the React-Router and create separate routes to serve different content to our views.
Now we can integrate your published
<your-awesome-component>as a node module and build out the app. Make sure you are inside of Your Awesome App and follow the steps below:
$ npm i your-awesome-published-npm-module --save
You define your React routes in the file
src/client/routes.jsx. The routes definition is from react-router.
Example
Navigate to
<your-awesome-app>/src/client/routes.jsx. Copy, paste, and save the code below into this file. Change from the literal
YourAwesomeComponent and
your-awesome-node-module to your actual component name:
import React from "react"; import { Route, IndexRoute } from "react-router"; import { Home } from "./components/home"; import { YourAwesomeComponent } from "your-awesome-published-npm-module"; export const routes = ( <Route path="/" component={Home}> <IndexRoute component={YourAwesomeComponent}/> <Route path="/invite" component={YourAwesomeComponent}/> </Route> ); | https://docs.electrode.io/chapter1/intermediate/react-routes/add-routes.html | 2018-03-17T06:12:34 | CC-MAIN-2018-13 | 1521257644701.7 | [] | docs.electrode.io |
Remove-SPEnterprise
Search Query Demoted
Syntax
Remove-SPEnterpriseSearchQueryDemoted [-Identity] <DemotedPipeBind> -Owner <SearchObjectOwner> [-AssignmentCollection <SPAssignmentCollection>] [-Confirm] [-SearchApplication <SearchServiceApplicationPipeBind>] [-WhatIf] [<CommonParameters>]
Description
The
Remove-SPEnterpriseSearchQueryDemoted cmdlet adjusts query rank by deleting a demoted site rule from the demoted site collection.
Query demoted sites are de-emphasized in relevance.
For permissions and the most current information about Windows PowerShell for SharePoint Products, see the online documentation at ().
Examples
------------------EXAMPLE------------------
C:\PS>$demotedRule = Get-SPEnterpriseSearchQueryDemoted -Identity -SearchApplication MySSA $demotedRule | Remove-SPEnterpriseSearchQueryDemoted
This example obtains a reference to a site demotion rule for the URL and removes it.
Required Parameters
Specifies the demoted site rule to delete.
The type must be a valid GUID, in the form 12345678-90ab-cdef-1234-567890bcdefgh; a valid URL, in the form; or an instance of a valid Demoted object.
Specifies the search object owner that defines the scope at which the corresponding Demoted object demoted | https://docs.microsoft.com/en-us/powershell/module/sharepoint-server/Remove-SPEnterpriseSearchQueryDemoted?view=sharepoint-ps | 2018-03-17T06:58:57 | CC-MAIN-2018-13 | 1521257644701.7 | [] | docs.microsoft.com |
The easiest way to connect to Erle-Brain 2 is using zeroconf (which already comes preinstalled). Just connect the Ethernet cable and power up Erle-Brain 2. Then:
ping erle-brain-2.local
After a few seconds you'll see it has ping to the brain. So just ssh into it:
ssh [email protected]
the password is
holaerle.
Type the following in Erle-Brain 2 to get root access:
sudo su | http://docs.erlerobotics.com/brains/discontinued/erle-brain-2/getting_started/zeroconf | 2018-03-17T06:23:35 | CC-MAIN-2018-13 | 1521257644701.7 | [] | docs.erlerobotics.com |
Layers Update 1.2.14
Enhancements
- Added the Kanit Google font to the font list.
- Limited Layers Messenger to administrators for sites, to avoid editors & subscribers from getting a Messenger popup.
- Added device width to viewport meta.
Changes & Fixes
- WooCommerce system status should now stop reporting false version warnings for Layers.
- Removed all Custom CSS being output from the_content(); in Layers Pages – this is a technical change that should not stop custom css from working!
- Contact Widget no longer produces a Google Maps API error
- Full width footer no longer touches sides, padding left-right has been added
- Image-Bottom icon in Image Layout options of widget controls has been restored to its previous glory
- CSS Class hiding customizer panel titles.
Update Layers View on Github | http://docs.layerswp.com/layers-update-1-2-14/ | 2018-03-17T06:22:27 | CC-MAIN-2018-13 | 1521257644701.7 | [] | docs.layerswp.com |
Senate
Record of Committee Proceedings
Committee on Elections and Local Government
Senate Bill 466
Relating to: authorizing certain libraries to notify collection agencies and law enforcement agencies of delinquent accounts.
By Senators Harsdorf, Gudex, Kapenga, Moulton, Stroebel, Wanggaard, Ringhand and Bewley; cosponsored by Representatives VanderMeer, Allen, Bernier, E. Brooks, Czaja, Edming, Jacque, Krug, Kulp, Loudenbeck, Murphy, Mursau, Nerison, A. Ott, Petryk, Ripp, Rohrkaste, Swearingen, Tittl and Kahl.
December 18, 2015 Referred to Committee on Elections and Local Government
January 14, 2016 Public Hearing Held
Present: (5) Senator LeMahieu; Senators Kapenga, Wanggaard, Risser and Miller.
Absent: (0) None.
Excused: (0) None.
Appearances For
· Representative Nancy VanderMeer - 70th Assembly District
· Senator Shiela Harsdorf - 10th Senate District
· Shannon Schultz - WI Library Association/ Portage Public Library
· Kathy Klager - Pauline Haass Public Library/ WLA
· Heather Johnson - River Falls Public Library/ WLA
· Connie Meyer - Waukesha County Bridges Library System
Appearances Against
· None.
Appearances for Information Only
· None.
Registrations For
· Nick Dimassis - Beloit Public Library/ WLA
· Steven Conway - W.L.A
· Plumer Lovelace - WLA
· Peg Checkai - Watertown PUblic Library/ WLA
Registrations Against
· None.
Registrations for Information Only
· None.
January 26, 2016 Executive Session Held
Present: (5) Senator LeMahieu; Senators Kapenga, Wanggaard, Risser and Miller.
Absent: (0) None.
Excused: (0) None.
Moved by Senator Wanggaard, seconded by Senator Miller that Senate Bill 466 be recommended for passage.
Ayes: (5) Senator LeMahieu; Senators Kapenga, Wanggaard, Risser and Miller.
Noes: (0) None.
PASSAGE RECOMMENDED, Ayes 5, Noes 0
______________________________
Luke Petrovich
Committee Clerk | http://docs.legis.wisconsin.gov/2015/related/records/senate/elections_and_local_government/1224860 | 2018-03-17T06:30:04 | CC-MAIN-2018-13 | 1521257644701.7 | [] | docs.legis.wisconsin.gov |
.
Klarnas checkout solution
Our complete checkout is a dynamic and simple solution that identifies the customer, requires only top of mind information and offers all popular payment methods in the market. This smoooth user experience results in increased average order value, conversion rates and amount of return customers on all devices.
Are you a Klarna-merchant looking for the new Klarna brand-assets? You checkout is automatically updated. For assets in other consumer touch points. Click here. | https://docs.klarna.com/en/se/kco-v2 | 2018-03-17T06:06:09 | CC-MAIN-2018-13 | 1521257644701.7 | [] | docs.klarna.com |
Scan time limit
You can set a limit for how long Wordfence scans will run on your site. Some options combined with a large number of files can make scans take a long time, especially on slower servers. Leaving this option blank will allow Wordfence to use the default limit. If your site reaches this limit during a scan, you will see a message in the scan results like:
Scan terminated with error: The scan time limit of 10800 seconds has been exceeded and the scan will be terminated. This limit can be customized on the options page.
If this happens, then the scan stops and reports the issues it has found so far, but the remainder of the scan will not be able to run unless you make some changes to scan options or the site's files.
Resolving the issue
You can adjust some options to help scans complete more quickly, look for reasons that might cause the long scans, or increase the time limit, as described below.
Scan images, binary, and other files as if they were executable
This option can be disabled if you have many non-PHP files being scanned. This option is off by default, but you may have enabled it on your site.
Exclude files from scan that match these wildcard patterns
You can add files, directories, or patterns to the "Exclude files from scan that match these wildcard patterns" box on the options page, to prevent them from being scanned. This can be useful if you keep large files within your site's folders, such as backups.
Additionally, we recommend saving backups somewhere other than in your site's own folders. In addition to saving time in scans, if the the host had a major problem with the server and the whole site was lost, it is best to have your backups stored somewhere else.
Scan files outside your WordPress installation
If you have "Scan files outside your WordPress installation" enabled, you can disable it to scan fewer files. If you have additional non-WordPress applications installed, or additional sites in subdirectories of the main site (such as on some shared hosting plans), they will not be scanned if this option is disabled. If the additional sites also run Wordfence, their scans will still run normally.
Error logs
Check the error logs generated by your site. It's possible that a conflict with another plugin, a database issue, or settings on the server may be interfering with the scans, causing them to take longer than they should.
Time limit that a scan can run in seconds
You can set this option to a longer duration, if you want scans to run for a longer time. Many hosts have limits on resource usage, especially on shared hosting plans, so it is generally best to reduce usage rather than increasing the limit.
See also: Time limit that a scan can run in seconds (Wordfence options) | https://docs.wordfence.com/index.php?title=Scan_time_limit&oldid=751 | 2018-03-17T06:08:50 | CC-MAIN-2018-13 | 1521257644701.7 | [] | docs.wordfence.com |
Integration with HP Application Lifecycle Management (ALM)
TestPlant enables you to incorporate eggPlant Functional into the HP ALM tool with the eggIntegration for HP ALM. This eggIntegration is a web application that uploads your tests and results to the HP ALM server. You can run eggPlant Functional tests from either eggPlant Functional or HP ALM, and the eggIntegration automatically uploads those tests and results to HP ALM. This integration gives you the benefits of being able to store and view your tests and results in one convenient location—HP ALM.
Note: You can also run eggPlant Functional scripts in HP ALM without the eggIntegration for HP ALM. However, the test results are not automatically uploaded to HP ALM. See Other HP ALM Integration Methods for information about using HP ALM to run scripts.
Below is a video demonstration of eggPlant Functional integrated with HP ALM, followed by a diagram of how eggIntegration for HP ALM works. For more information about the integration, see Integrating eggPlant Functional with HP ALM using eggIntegration for HP ALM, which also provides links to installation and configuration information for eggIntegration for HP ALM.
Video of eggPlant Functional Integrated with HP ALM
Below is a video demonstration of eggPlant Functional integrated with HP ALM.
eggIntegration for HP ALM Workflow
The diagram below shows how eggIntegration for HP ALM works.
- With eggIntegration for HP ALM, testers can run tests in either of the following ways:
- By logging in to HP ALM and starting a test run. In this case, HP ALM starts eggPlant Functional via the command line.
- By running eggPlant Functional and starting a test run.
Note: For information about HP's Open Test Architecture (OTA), refer to the HP documentation.
This topic was last updated on January 04, 2016, at 02:00:21 PM. | http://docs.testplant.com/more/eggIntegration/int-hp-alm-integration.htm | 2018-03-17T06:13:11 | CC-MAIN-2018-13 | 1521257644701.7 | [array(['../../Resources/Images/int-hp-alm-diagram-2_0.png',
'eggIntegration for HP ALM operational diagram'], dtype=object)] | docs.testplant.com |
Disable or Enable a Job disable a SQL Server Agent job in SQL Server 2017 by using SQL Server Management Studio or Transact-SQL. When you disable a job, it is not deleted and can be enabled again when necessary.
Before You Begin
Security
For detailed information, see Implement SQL Server Agent Security.
Using SQL Server Management Studio.
Using Transact-SQL
To disable or enable a job
In Object Explorer, connect to an instance of Database Engine.
On the Standard bar, click New Query.
Copy and paste the following example into the query window and click Execute.
-- changes the name, description, and disables status of the job NightlyBackups. USE msdb ; GO EXEC dbo.sp_update_job @job_name = N'NightlyBackups', @new_name = N'NightlyBackups -- Disabled', @description = N'Nightly backups disabled during server migration.', @enabled = 0 ; GO
For more information, see sp_update_job (Transact-SQL).
Feedback | https://docs.microsoft.com/en-us/sql/ssms/agent/disable-or-enable-a-job?redirectedfrom=MSDN&view=sql-server-2017 | 2019-09-15T10:27:20 | CC-MAIN-2019-39 | 1568514571027.62 | [] | docs.microsoft.com |
uuid, auto-generated, read-only
Internal id of procedure.
string, auto-generated, read-only
The auction identifier to refer to in “paper” documentation.
OpenContracting Description:
It is included to make the flattened data structure more convenient.
date, auto-generated, read-only
The date of the procedure creation/undoing.
The entity whom the procedure has been created by.
string, read-only
Originates from lot.id
The identifier of a lot, which is to be privatized, within the Registry.
string, read-only
Ukrainian by default (required) - Ukrainian title
title_en (English) - English title
title_ru (Russian) - Russian title
Oprionally can be mentioned in English/Russian.
title_en
title_ru
Oprionally can be mentioned in English/Russian.
Originates from lot.title.
The name of the auction, displayed in listings.
string, read-only
OpenContracting Description:
A description of the goods, services to be provided.
Ukrainian by default - Ukrainian decription
decription_en (English) - English decription
decription_ru (Russian) - Russian decription
OpenContracting Description:
A description of the goods, services to be provided.
decription_en
decription_ru
Originates from lot.description.
integer, read-only
Originates from auction.tenderAttempts.
The number which represents what time procedure with a current lot takes place.
integer, auto-generated, read-only
Number of submitted bids for the process to become successful. The default value is 1.
Purchase method. The only value is “open”.
Originates from auction.procurementMethodType.
Type of the procedure within the auction announcement. The given value is sellout.english.
Originates from auction.procurementMethodDetails.
Parameter that accelerates auction periods. Set quick, accelerator=1440 as text value for procurementMethodDetails for the time frames to be reduced in 1440 times.
The given value is electronicAuction.
Originates from auction.submissionMethodDetails.
Parameter that works only with mode = “test” and speeds up auction start date.
ProcuringEntity (Organizer), read-only
Originates from lot.lotCustodian.
Organization conducting the auction.
OpenContracting Description:
The entity managing the procurement, which may be different from the buyer who is paying / using the items being procured.
Auction Parameters, read-only
Originates from auction.auctionParameters.
The parameters that indicates the major specifications of the procedure.
Contract Terms, read-only
Originates from lot.items.
The parameters that indicates the major specifications of the contract.
Value, read-only
Originates from auction.value.
Total available budget of the 1st auction. Bids lower than value will be rejected.
value
OpenContracting Description:
The total estimated value of the procurement.
Originates from auction.minimalStep.
Auction step (increment).
Guarantee, read-only
Originates from auction.guarantee.
The assumption of responsibility for payment of performance of some obligation if the liable party fails to perform to expectations.
Originates from auction.registrationFee.
The sum of money required to enroll on an official register.
Bank Account, read-only
Originates from auction.bankAccount.
Details which uniquely identify a bank account, and are used when making or receiving a payment.
Array of Item objects, read-only
List that contains single item being sold.
OpenContracting Description:
The goods and services to be purchased, broken into line items wherever possible. Items should not be duplicated, but a quantity of 2 specified instead.
Array of Documents objects, optional
OpenContracting Description:
All documents and attachments related to the auction.
Date, auto-generated
OpenContracting Description:
Date when the auction was last modified
Array of Question objects, optional
Questions to procuringEntity and answers to them.
Array of Bid objects, optional (required for the process to be succsessful)
A list of all bids placed in the auction with information about participants, their proposals and other qualification documentation.
OpenContracting Description:
A list of all the companies who entered submissions for the auction.
Array of Award objects
All qualifications (disqualifications and awards).
The given value is highestCost.
Array of Contract objects
OpenContracting Description:
Information on contracts signed as part of a process
Array of Cancellation objects, optional
Contains 1 object with active status in case of cancelled Auction.
The Cancellation object describes the reason of auction cancellation and contains accompanying
documents if there are any.
url, auto-generated, read-only
A web address where auction is accessible for view.
string, required
Period, auto-generated, read-only when Auction is conducted. startDate originates from auction.auctionPeriod.startDate.
Awarding process period.
OpenContracting Description:
The date or period on which an award is anticipated to be made.
string, optional
The additional parameter with a value test.
Type of the auction.
string, multilingual, optional
Additional information that has to be noted from the Organizer point.
Name of the bank.
Array of Classification, required
Major data on the account details of the state entity selling a lot, to facilitate payments at the end of the process.
Most frequently used are:
Type of the contract. The only value is yoke. | http://sellout-english.api-docs.ea2.openprocurement.io/en/latest/standard/auction.html | 2019-09-15T09:56:37 | CC-MAIN-2019-39 | 1568514571027.62 | [] | sellout-english.api-docs.ea2.openprocurement.io |
Try it now and let us know what you think. Switch to the new look >>
You can return to the original look by selecting English in the language selector above.
Create a Standard Amazon Machine Image Using Sysprep
The Microsoft System Preparation (Sysprep) tool simplifies the process of duplicating a customized installation of Windows. We recommend that you use Sysprep to create a standardized Amazon Machine Image (AMI). You can then create new Amazon EC2 instances for Windows from this standardized image.
We also recommend that you run Sysprep with EC2Launch (Windows Server 2016 and later) or the EC2Config service (prior to Windows Server 2016).
Important
Don't use Sysprep to create an instance backup. Sysprep removes system-specific information; removing this information might have unintended consequences for an instance backup.
Contents
Before You Begin
Before performing Sysprep, we recommend that you remove all local user accounts and all account profiles other than a single administrator account under which Sysprep will be executed. If you perform Sysprep with additional accounts and profiles, unexpected behavior could result, including loss of profile data or failure to complete Sysprep.
Learn more about Sysprep on Microsoft TechNet.
Learn which server roles are supported for Sysprep.
The procedures on this page apply to E2Config. With Windows Server 2016 and later, see Using Sysprep with EC2Launch.
Using Sysprep with the EC2Config Service
Learn the details of the different Sysprep execution phases and the tasks performed by the EC2Config service as the image is prepared.
Sysprep Phases
Sysprep runs through the following phases:
Generalize: The tool removes image-specific information and configurations. For example, Sysprep removes the security identifier (SID), the computer name, the event logs, and specific drivers, to name a few. After this phase is completed, the operating system (OS) is ready to create an AMI.
Note
When you run Sysprep with the EC2Config service, the system prevents drivers from being removed because the PersistAllDeviceInstalls setting is set to true by default.
Specialize: Plug and Play scans the computer and installs drivers for any detected devices. The tool generates OS requirements like the computer name and SID. Optionally, you can execute commands in this phase.
Out-of-Box Experience (OOBE): The system runs an abbreviated version of Windows Setup and asks the user to enter information such as a system language, the time zone, and a registered organization. When you run Sysprep with EC2Config, the answer file automates this phase.
Sysprep Actions
Sysprep and the EC2Config service perform the following actions when preparing an image.
When you choose Shutdown with Sysprep in the EC2 Service Properties dialog box, the system runs the ec2config.exe –sysprep command.
The EC2Config service reads the content of the
BundleConfig.xmlfile. This file is located in the following directory, by default:
C:\Program Files\Amazon\Ec2ConfigService\Settings.
The BundleConfig.xml file includes the following settings. You can change these settings:
AutoSysprep: Indicates whether to use Sysprep automatically. You do not need to change this value if you are running Sysprep from the EC2 Service Properties dialog box. The default value is No.
SetRDPCertificate: Sets a self-signed certificate for the Remote Desktop server. This enables you to securely use the Remote Desktop Protocol (RDP) to connect to the instance. Change the value to Yes if new instances should use a certificate. This setting is not used with Windows Server 2008 or Windows Server 2012 instances because these operating systems can generate their own certificates. The default value is No.
SetPasswordAfterSysprep: Sets a random password on a newly launched instance, encrypts it with the user launch key, and outputs the encrypted password to the console. Change the value to No if new instances should not be set to a random encrypted password. The default value is Yes.
PreSysprepRunCmd: The location of the command to run. The command is located in the following directory, by default: C:\Program Files\Amazon\Ec2ConfigService\Scripts\BeforeSysprep.cmd
The system executes
BeforeSysprep.cmd. This command creates a registry key as follows:
reg add "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server" /v fDenyTSConnections /t REG_DWORD /d 1 /f
The registry key disables RDP connections until they are re-enabled. Disabling RDP connections is a necessary security measure because, during the first boot session after Sysprep has run, there is a short period of time where RDP allows connections and the Administrator password is blank.
The EC2Config service calls Sysprep by running the following command:
sysprep.exe /unattend: "C:\Program Files\Amazon\Ec2ConfigService\sysprep2008.xml" /oobe /generalize /shutdown
Generalize Phase
The tool removes image-specific information and configurations such as the computer name and the SID. If the instance is a member of a domain, it is removed from the domain. The
sysprep2008.xmlanswer file includes the following settings which affect this phase:
PersistAllDeviceInstalls: This setting prevents Windows Setup from removing and reconfiguring devices, which speeds up the image preparation process because Amazon AMIs require certain drivers to run and re-detection of those drivers would take time.
DoNotCleanUpNonPresentDevices: This setting retains Plug and Play information for devices that are not currently present.
Sysprep shuts down the OS as it prepares to create the AMI. The system either launches a new instance or starts the original instance.
Specialize Phase
The system generates OS specific requirements such as a computer name and a SID. The system also performs the following actions based on configurations that you specify in the sysprep2008.xml answer file.
CopyProfile: Sysprep can be configured to delete all user profiles, including the built-in Administrator profile. This setting retains the built-in Administrator account so that any customizations you made to that account are carried over to the new image. The default value is True.
CopyProfile replaces the default profile with the existing local administrator profile. All accounts logged into after running Sysprep will receive a copy of that profile and its contents at first login.
If you don’t have specific user-profile customizations that you want to carry over to the new image then change this setting to False. Sysprep will remove all user profiles; this saves time and disk space.
TimeZone: The time zone is set to Coordinate Universal Time (UTC) by default.
Synchronous command with order 1: The system executes the following command that enables the administrator account and specifies the password requirement.
net user Administrator /ACTIVE:YES /LOGONPASSWORDCHG:NO /EXPIRES:NEVER /PASSWORDREQ:YES
Synchronous command with order 2: The system scrambles the administrator password. This security measure is designed to prevent the instance from being accessible after Sysprep completes if you did not enable the ec2setpassword setting.
C:\Program Files\Amazon\Ec2ConfigService\ScramblePassword.exe" -u Administrator
Synchronous command with order 3: The system executes the following command:
C:\Program Files\Amazon\Ec2ConfigService\Scripts\SysprepSpecializePhase.cmd
This command adds the following registry key, which re-enables RDP:
reg add "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server" /v fDenyTSConnections /t REG_DWORD /d 0 /f
OOBE Phase
Using the EC2Config service answer file, the system specifies the following configurations:
<InputLocale>en-US</InputLocale>
<SystemLocale>en-US</SystemLocale>
<UILanguage>en-US</UILanguage>
<UserLocale>en-US</UserLocale>
<HideEULAPage>true</HideEULAPage>
<HideWirelessSetupInOOBE>true</HideWirelessSetupInOOBE>
<NetworkLocation>Other</NetworkLocation>
<ProtectYourPC>3</ProtectYourPC>
<BluetoothTaskbarIconEnabled>false</BluetoothTaskbarIconEnabled>
<TimeZone>UTC</TimeZone>
<RegisteredOrganization>Amazon.com</RegisteredOrganization>
<RegisteredOwner>Amazon</RegisteredOwner>
Note
During the generalize and specialize phases the EC2Config service monitors the status of of the OS. If EC2Config detects that the OS is in a Sysprep phase, then it publishes the following message the system log:
EC2ConfigMonitorState: 0 Windows is being configured. SysprepState=IMAGE_STATE_UNDEPLOYABLE
After the OOBE phase completes, the system executes the SetupComplete.cmd from the following location: C:\Windows\Setup\Scripts\SetupComplete.cmd. In Amazon public AMIs before April 2015 this file was empty and executed nothing on the image. In public AMIs dated after April 2015, the file includes the following value: call "C:\Program Files\Amazon\Ec2ConfigService\Scripts\PostSysprep.cmd".
The system executes the PostSysprep.cmd, which performs the following operations:
Sets the local Administrator password to not expire. If the password expired, Administrators might not be able to log on.
Sets the MSSQLServer machine name (if installed) so that the name will be in sync with the AMI.
Post Sysprep
After Sysprep completes, the EC2Config services sends the following message to the console output:
Windows sysprep configuration complete. Message: Sysprep Start Message: Sysprep End
EC2Config then performs the following actions:
Reads the content of the config.xml file and lists all enabled plug-ins.
Executes all “Before Windows is ready” plug-ins at the same time.
Ec2SetPassword
Ec2SetComputerName
Ec2InitializeDrives
Ec2EventLog
Ec2ConfigureRDP
Ec2OutputRDPCert
Ec2SetDriveLetter
Ec2WindowsActivate
Ec2DynamicBootVolumeSize
After it is finished, sends a “Windows is ready” message to the instance system logs.
Runs all “After Windows is ready” plug-ins at the same time.
AWS CloudWatch logs
UserData
AWS Systems Manager (Systems Manager)
For more information about Windows plug-ins, see Configuring a Windows Instance Using the EC2Config Service.
Run Sysprep with the EC2Config Service
Use the following procedure to create a standardized AMI using Sysprep and the EC2Config service.
In the Amazon EC2 console locate or create an AMI that you want to duplicate.
Launch and connect to your Windows instance.
Customize it.
Specify configuration settings in the EC2Config service answer file:
C:\Program Files\Amazon\Ec2ConfigService\sysprep2008.xml
From the Windows Start menu, choose All Programs, and then choose EC2ConfigService Settings.
Choose the Image tab in the Ec2 Service Properties dialog box. For more information about the options and settings in the Ec2 Service Properties dialog box, see Ec2 Service Properties.
Select an option for the Administrator password, and then select Shutdown with Sysprep or Shutdown without Sysprep. EC2Config edits the settings files based on the password option that you selected.
Random: EC2Config generates a password, encrypts it with user's key, and displays the encrypted password to the console. We disable this setting after the first launch so that this password persists if the instance is rebooted or stopped and started.
Specify: The password is stored in the Sysprep answer file in unencrypted form (clear text). When Sysprep runs next, it sets the Administrator password. If you shut down now, the password is set immediately. When the service starts again, the Administrator password is removed. It's important to remember this password, as you can't retrieve it later.
Keep Existing: The existing password for the Administrator account doesn't change when Sysprep is run or EC2Config is restarted. It's important to remember this password, as you can't retrieve it later.
Choose OK.
When you are asked to confirm that you want to run Sysprep and shut down the
instance, click Yes. You'll notice that EC2Config runs
Sysprep. Next, you are logged off the instance, and the instance is shut down.
If you check the Instances page in the Amazon EC2 console, the
instance state changes from
running to
stopping, and
then finally to
stopped. At this point, it's safe to create an AMI
from this instance.
You can manually invoke the Sysprep tool from the command line using the following command:
"%programfiles%\amazon\ec2configservice\"ec2config.exe -sysprep""
Note
The double quotation marks in the command are not required if your CMD shell is already in the C:\Program Files\Amazon\EC2ConfigService\ directory.
However, you must be very careful that the XML file options specified in the
Ec2ConfigService\Settings folder are correct;
otherwise, you might not be able to connect to the instance. For more
information about the settings files, see EC2Config Settings Files. For an example of configuring and
then running Sysprep from the command line, see
Ec2ConfigService\Scripts\InstallUpdates.ps1.
Troubleshooting Sysprep
If you experience problems or receive error messages during image preparations, review the following logs:
%WINDIR%\Panther\Unattendgc
%WINDIR%\System32\Sysprep\Panther
"C:\Program Files\Amazon\Ec2ConfigService\Logs\Ec2ConfigLog.txt"
If you receive an error message during image preparation with Sysprep, the OS might not be reachable. To review the log files, you must stop the instance, attach its root volume to another healthy instance as a secondary volume, and then review the logs mentioned earlier on the secondary volume.
If you locate errors in the Unattendgc log file, use the Microsoft Error Lookup Tool to get more details about the error. The following issue reported in the Unattendgc log file is typically the result of one or more corrupted user profiles on the instance:
Error [Shell Unattend] _FindLatestProfile failed (0x80070003) [gle=0x00000003] Error [Shell Unattend] CopyProfile failed (0x80070003) [gle=0x00000003]
There are two options for resolving this issue:
Option 1: Use Regedit on the instance to search for the following key. Verify that there are no profile registry keys for a deleted user:
[HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\ProfileList\
Option 2: Edit the EC2Config answer file (
C:\Program
Files\Amazon\Ec2ConfigService\sysprep2008.xml) and change
<CopyProfile>true</CopyProfile> to
<CopyProfile>false</CopyProfile>. Run Sysprep again. Note that this
configuration change will delete the built-in administrator user profile after
Sysprep
completes. | https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/ami-create-standard.html | 2019-09-15T10:22:46 | CC-MAIN-2019-39 | 1568514571027.62 | [] | docs.aws.amazon.com |
CloudLinux OS kernel
Hybrid Kernels
CloudLinux 6 Hybrid kernel
CloudLinux 6 Hybrid Kernel is the CloudLinux 7 (3.10.0) kernel compiled for CloudLinux 6 OS. The 3.10 kernel features a set of performance and scalability improvements related to IO, networking and memory management, available in CloudLinux 7 OS. It also features an improved CPU scheduler for better overall system throughput and latency.
Please find information on the main features of 3.10 kernel branch on the links:
CloudLinux 7 Hybrid kernel
CloudLinux 7 Hybrid Kernel is essentially an EL8-based (4.18) kernel compiled for CloudLinux OS 7.
You can find more information on 4.18 kernel branch using this link:
How to migrate from the normal to hybrid channel (CL6h):
Note
The system must be registered in CLN.
Update rhn-client-tools from production
Run normal-to-hybrid script.
Reboot after script execution is completed.
yum update rhn-client-tools
normal-to-hybrid
reboot
How to migrate from the normal to hybrid channel (CL7h):
Note
The system must be registered in CLN.
Update rhn-client-tools rhn-check rhn-setup from testing repository
Run normal-to-hybrid script.
Reboot after script execution is completed.
yum update rhn-client-tools rhn-check rhn-setup --enablerepo=cloudlinux-updates-testing
normal-to-hybrid
reboot
How to migrate from hybrid to the normal channel (for both CL6h and CL7h):
Note
The system should be registered in CLN.
Run hybrid-to-normal script.
Reboot after script execution is completed.
hybrid-to-normal
reboot
Known limitations and issues of CloudLinux 6 Hybrid kernel:
We do not remove the Hybrid kernel after migration from Hybrid to the normal channel, but we remove the linux-firmware package which is needed to boot the Hybrid kernel. This is because CloudLinux 6 does not allow removing the package of the currently running kernel. A proper removal procedure will be implemented, but for now, we should warn users not to boot the Hybrid kernel if they have migrated to the normal channel.
Kernel module signatures aren't checked for now, as the 3.10 kernel uses x509 certificates to generate keys and CL6 cannot detect signatures created in such a way. A solution will be implemented.
Known limitations and issues of CloudLinux 7 Hybrid kernel
Features that are absent in the current kernel build:
- Per LVE traffic accounting
Limitations of the current kernel build:
- Native OOM killer is used
- Native СPU boost is used
- The /etc/sysctl.conf parameter proc_can_see_other_uid is the same as in CloudLinux 7. See documentation.
Note that Symlink Owner Match Protection is enabled by default in the CL7 Hybrid kernel. To disable it, use the sysctl utility:
sysctl -w fs.enforce_symlinksifowner=0
Find more details on symlink owner match protection.
SecureLinks
CloudLinux provides comprehensive protection against symbolic link attacks popular in shared hosting environments.

The protection requires multiple kernel options to be enabled.
Symlink owner match protection
fs.enforce_symlinksifowner
To protect against a symlink attack where an attacker tricks the Apache web server into reading another user's PHP config files or other sensitive files, enable:
fs.enforce_symlinksifowner=1
Setting this option will deny any process running under gid fs.symlinkown_gid the ability to follow a symlink if the owner of the link doesn't match the owner of the target file.
Defaults:
fs.enforce_symlinksifowner = 1
fs.symlinkown_gid = 48
When fs.enforce_symlinksifowner is set to 1, processes with GID 48 will not be able to follow symlinks if they are owned by user1 but point to a file owned by user2.
Please note that fs.enforce_symlinksifowner = 2 is deprecated and can cause issues with system operation.
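To see what is currently in effect on a server, both values can be read back with sysctl (a quick check only, not a configuration step):

# Show the current symlink protection settings
sysctl fs.enforce_symlinksifowner fs.symlinkown_gid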
fs.symlinkown_gid
On standard RPM Apache installation, Apache is usually running under GID 48. On cPanel servers, Apache is running under user nobody, GID 99.
To change the GID of processes that cannot follow symlinks, edit the file /etc/sysctl.conf and add the line:
fs.symlinkown_gid = XX
$ sysctl -p
To disable the symlink owner match protection feature, set fs.enforce_symlinksifowner = 0 in /etc/sysctl.conf, and execute
$ sysctl -p
WARNING
/proc/sys/fs/global_root_enable [CloudLinux 7 kernel only] [applicable for kernels 3.10.0-427.36.1.lve1.4.42+]
The /proc/sys/fs/global_root_enable flag enables following symlinks with root ownership. If global_root_enable=0, then Symlink Owner Match Protection does not verify symlinks owned by root.
For example, in the path /proc/self/fd, self is a symlink which leads to a process directory. The symlink owner is root. When global_root_enable=0, Symlink Owner Match Protection excludes this element from the verification. When global_root_enable=1, the verification will be performed, which could block access to fd and degrade web site performance.
It is recommended to set /proc/sys/fs/global_root_enable=0 by default. If needed, set /proc/sys/fs/global_root_enable=1 to increase the level of protection.
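As a sketch, the flag can be toggled at runtime by writing to the proc file directly; the sysctl name below is assumed from the standard /proc/sys naming convention:

# Keep the recommended default (do not verify root-owned symlinks)
echo 0 > /proc/sys/fs/global_root_enable
# Or, assuming the standard sysctl mapping of that proc path:
sysctl -w fs.global_root_enable=0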
Note
Starting from lve-utils 3.0-21.2, fs.symlinkown_gid parameter values for httpd service user and fs.proc_super_gid for nagios service user is written to /etc/sysctl.d/90-cloudlinux.conf.
Link traversal protection

Many web applications, as well as some FTP servers, don't include proper change rooting.
This allows an attacker to create symlink or hardlink to a sensitive file like /etc/passwd and then use WebDAV , filemanager, or webmail to read the content of that file.
Starting with CL6 kernel 2.6.32-604.16.2.lve1.3.45, you can prevent such attacks by preventing users from creating symlinks and hardlinks to files that they don't own.

This is done by setting the following kernel options to 1:
fs.protected_symlinks_create = 1
fs.protected_hardlinks_create = 1
WARNING
We do not recommend to use protected_symlinks option for cPanel users as it might break some of the cPanel functionality.
Note
Link Traversal Protection is disabled by default for the new CloudLinux OS installations/convertations.
fs.protected_symlinks_create = 0
fs.protected_hardlinks_create = 0
fs.protected_symlinks_allow_gid = id_of_group_linksafe
fs.protected_hardlinks_allow_gid = id_of_group_linksafe
To manually adjust the settings, edit: /etc/sysctl.d/cloudlinux-linksafe.conf and execute:
sysctl -p /etc/sysctl.d/cloudlinux-linksafe.conf
sysctl --system
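The *_allow_gid settings reference a group whose members are exempt from the protection. A hypothetical way to create such a group and look up its GID (the group name is only an example):

groupadd linksafe
getent group linksafe    # note the GID and use it for the *_allow_gid settings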
Note
Starting from lvemanager 4.0-25.5, if there is no /etc/sysctl.d/cloudlinux-linksafe.conf config file, selectorctl for PHP with --setup-without-cagefs and --revert-to-cagefs keys writes fs.protected_symlinks_create and fs.protected_hardlinks_create parameters to /etc/sysctl.d/90-cloudlinux.conf.
File change API
General
General description
One of the main problems on a shared hosting system for file backup operations is to figure out which files have changed. Using INOTIFY on a 1T drive with a large number of small files and directories guarantees slow startup times, and a lot of context switching between kernel and userspace - generating additional load. On the other hand scanning disk for newly modified files is very IO intensive, and can kill the performance of the fastest disks.
CloudLinux approach
CloudLinux File Change API is a kernel level technology with the user space interface that buffers lists of modified files in the kernel and then off-loads that list to user space daemon.
After that, any software (with enough permissions) can get a list of files that have been modified in the last 24 hours.
The software is very simple to use and produces the list of modified files. As such, we expect file backup software, including the integrated cPanel backup system, to integrate with this API soon.
Usage and integration
Userland utilities
/usr/bin/cloudlinux-backup-helper is a utility for getting the list of changed files.
It is supposed to be run by a super user only.
Command line parameters:
-t | --timestamp   retrieve file names for files modified after specified timestamp
-u | --uid         retrieve file names for particular UID only
Output format
protocol version (1 right now), timestamp (in seconds) - up to which time data was collected
UID:absolute path to file changed
UID:absolute path to file changed
…
Note
The timestamp in output is needed so you can clearly identify from which timestamp to get list of changed files next.
Examples:
[root@localhost ~]# cloudlinux-backup-helper -t 1495533489 -u <UID>
1,1495533925
1001:/home/user2/public_html/output.txt
1001:/home/user2/public_html/info.php

[root@localhost ~]# cloudlinux-backup-helper -t 1495533489
1,1495533925
1000:/home/user1/.bashrc
1001:/home/user2/public_html/output.txt
1001:/home/user2/public_html/info.php
1003:/home/user3/logs/data.log
/usr/bin/cloudlinux-backup-helper-uid is a SUID wrapper for the cloudlinux-backup-helper utility that enables an end user to get the list of files changed. It accepts timestamp argument only and retrieves data of the user who is running it only.
Examples:
[user@localhost ~]$ cloudlinux-backup-helper-uid
1,1495530576
1000:/home/user/.bash_history

[user@localhost ~]$ cloudlinux-backup-helper-uid -t 1495547922
1,1495548343
1000:/home/user/file1.txt
1000:/home/user/file2.txt
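As an illustration of how a backup job could consume this output, the sketch below archives everything reported since the previous run. The state file location, archive path and naming are assumptions for the example, not part of the CloudLinux tooling:

#!/bin/bash
# Sketch: archive files changed since the last recorded timestamp (run as root,
# assumes /backup already exists)
STATE=/var/lib/mybackup/last_timestamp
LAST=$(cat "$STATE" 2>/dev/null || echo 0)
OUT=$(/usr/bin/cloudlinux-backup-helper -t "$LAST") || exit 1
# First line is "<protocol>,<timestamp>"; keep the timestamp for the next run
NEW_TS=$(echo "$OUT" | head -n1 | cut -d, -f2)
# Remaining lines are "UID:/absolute/path" - strip the UID and archive the files
echo "$OUT" | tail -n +2 | cut -d: -f2- | tar --files-from=- --ignore-failed-read -czf "/backup/changed-$NEW_TS.tar.gz"
mkdir -p "$(dirname "$STATE")" && echo "$NEW_TS" > "$STATE"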
Installation and configuration
cloudlinux-fchange-0.1-5
Requirements
CloudLinux OS 6 (requires Hybrid kernel) or 7
Kernel Version: 3.10.0-427.36.1.lve1.4.47
Installation and configuration
To install cloudlinux-fchange system run:
CloudLinux 7:
yum install cloudlinux-fchange --enablerepo=cloudlinux-updates-testing
CloudLinux 6 Hybrid:
yum install cloudlinux-fchange --enablerepo=cloudlinux-hybrid-testing
Database containing list of modified files is located at /var/lve/cloudlinux-fchange.db by default.
Starting and stopping
After successful installation the event collecting daemon starts automatically, providing all kernel-exposed data are in place.
To start the daemon:

CloudLinux 7:
systemctl start cloudlinux-file-change-collector
CloudLinux 6 Hybrid:
service cloudlinux-file-change-collector start
To stop the daemon:

CloudLinux 7:

systemctl stop cloudlinux-file-change-collector
CloudLinux 6 Hybrid:
service cloudlinux-file-change-collector stop
To uninstall cloudlinux-fchange run:
yum remove cloudlinux-fchange
Configuration details
Configuration resides in /etc/sysconfig/cloudlinux-fchange. The following is the default configuration (see comments):
# sqlite database file path. If commented out a default value is used
#database_path=/var/lve/cloudlinux-fchange.db

# If uncommented paths starting with 'include' one are processed only
# Pay attention this parameter is a regular string, not a regex
# To include more than one item just specify several lines to include:
# include=/one
# include=/two

# If uncommented exclude paths which contain 'exclude'
# Pay attention this parameter is a regular string, not a regex
# To exclude more than one item just specify several lines to exclude:
# exclude=var
# exclude=tmp

# Daemon polling interval in seconds
polling_interval=5

# Time to keep entries in days. Does not clean if commented out or zero
time_to_keep=1

# User read-only mode minimal UID
# If file change collector stopped, all users with UID >= user_ro_mode_min_uid
# are restricted to write to their home directory. This prevents to miss
# a file change event.
# Value of -1 (default) allows to disable the feature
user_ro_mode_min_uid=-1

# Minimal UID of events to be processed.
# Events of users with UID less then specified are not handled.
# By default 500 (non-system users for redhat-based systems)
#minimal_event_uid=500

# SQLite shared lock prevents setting more restrictive locks. That is a
# process cannot write to a database table when a concurrent process reads
# from the table. As saving data to database is considered far more important
# than getting them (data could be reread a second later after all), database
# writer could try to terminate concurrent reading processes. Just set
# terminate rivals to 'yes' to turn this ability on.
# terminate_rivals=no

# Events to be handled. Currently the following types of events are processed:
# 1. file creation
# 2. file deletion
# 3. directory creation
# 4. directory deletion
# 5. file content/metadata modification
# 6. file/directory attributes/ownership modification
# 7. hardlink creation
# 8. symlink creation
# 9. file/directory moving/renaming
# By default all events are processed. Keep in mind that events for a filepath
# are cached, i.e if a file was deleted and then a file with the same absolute
# name is created, only the deletion event is triggerred. Changing file
# modification timestamp with command 'touch' will trigger modification event
# as if a file content is modified.
# Currently supported options are:
# file_created, file_modified, file_deleted, dir_created, dir_deleted,
# owner_changed, attrib_changed, moved, hardlink_created, symlink_created, all
# Options that don't have 'file' or 'dir' prefix, applied to both files and
# directories. To set more than one options, separate them with commas,
# e.g. event_types=file_created,file_deleted,file_modified. Unknown options are
# ignored.
#
# event_types=all
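For instance, a trimmed-down configuration that only tracks file creation, modification and deletion under /home could look like the sketch below (values are illustrative; the collector presumably needs a restart afterwards for the changes to take effect):

include=/home
event_types=file_created,file_modified,file_deleted
polling_interval=5
time_to_keep=1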
Note
Please keep in mind that the current implementation implies that one process is writing to a database and another is reading from it. As reading sets a shared lock on a database table, the writing process cannot write to the table until the lock is released. That's why passing a timestamp to cloudlinux-backup-helper matters: this way the number of records to be returned is substantially decreased, lowering the processing time and filtering out old records. Likewise, pay attention to narrowing the scope of events being recorded. Chances are that changing attributes, ownership, directory creation/deletion, and symlink events are not relevant and there's no need to keep them.
Low-level access
Note
Using this options is dangerous, and might cause problems with CloudLinux File Change API.
The kernel exposes the functionality through a procfs folder containing the following files.
enable - enable/disable the functionality. Write 1 to this file to enable, 0 to disable. If disabled, no events are coming to events file.
events - the modified files log itself. Events in the format <EVENT_ID>:<EVENT_TYPE_ID>:<USER_ID>:<FILE_PATH> are constantly appended to the end of the file if datacycle is enabled. File events are never duplicated: if we have a file modification event, we would not get a file deletion event if the file has been later deleted. This events buffer has limited capacity; therefore, from time to time, the events log requires flushing.
flush - a file for clearing events log. For flushing, the last event_id from the events file is written to this file. Right after this, events log is truncated to that event_id .
user_ro_mode - forbids users with UIDs equal to or bigger than the value set in this file from writing to their home directories. At boot, the file has -1. When a positive value is written, say 500, the system starts effectively preventing users from modifying their home dirs (on a write attempt a user gets a 'read-only filesystem' error). This feature is designed to prevent users from updating their home dirs when events are not handled.
entries_in_buffer - a counter of log entries in the events file.
min_event_uid - this file has minimal UID of events to be handled. Events from users with smaller UID are not handled. By default 500 (non-system users in redhat-based systems).
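Putting these files together, a low-level consumer would roughly read the events file and then acknowledge what it has processed by writing the last event id to flush. The commands below are only a sketch and assume the current directory is the kernel folder that exposes these files:

# Read pending events: <EVENT_ID>:<EVENT_TYPE_ID>:<USER_ID>:<FILE_PATH>
cat events
# Acknowledge everything processed so far by passing the last event id to flush
tail -n 1 events | cut -d: -f1 > flush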
Tuned-profiles-cloudlinux
The tuned-profiles-cloudlinux package brings a range of under-the-hood kernel tunings to address high LA and iowait issues that were detected earlier on particular user deployments. The package also encloses OOM adjustments to prioritize the elimination of overrun PHP, lsphp, and Phusion Passenger worker processes over other processes (e.g. ssh, a cron job).
There are three profiles provided by CloudLinux:
# tuned-adm list | grep cloudlinux
- cloudlinux-default - Default CloudLinux tuned profile
- cloudlinux-dummy - Empty CloudLinux tuned profile
- cloudlinux-vz - Empty CloudLinux tuned profile
cloudlinux-dummy and cloudlinux-vz are used for internal needs or when Virtuozzo/OpenVZ detected and actually do nothing.
cloudlinux-default is one to be used, it actually does the following:
- Switches CPU power consumption mode to the maximum. CPU operates at maximum performance at the maximum clock rate:
governor=performance
energy_perf_bias=performance
Note
If standard software CPU governors are used.
- Applies the following kernel options:
vm.force_scan_thresh=100 - Improves kernel memory clean-up in case of big number of running LVE.
UBC parameters set the limits for the containers:
ubc.dirty_ratio=100 - Defines maximum RAM percentage for dirty memory pages.
dirty_background_ratio=75 - Defines RAM percentage when to allow writing dirty pages on the disk.
- [CloudLinux 7 only] Detects used disk types and changes elevator to 'deadline' for HDD and to 'noop' for SSD in /sys/block/[blockname]/queue/scheduler .
Note
The script uses /sys/block/[blockname]/queue/rotational flag, some RAID controllers can not set it properly. For example, SSD used for RAID but rotational is set to 1 by RAID driver. As a workaround add the following to /etc/rc.d/rc.local to make it applied on boot:
echo "noop" > /sys/block/[blockname]/queue/scheduler
echo "0" > /sys/block/[blockname]/queue/rotational
Where [blockname] is used device name, like sda/sdb .
And make it executable:
chmod +x /etc/rc.d/rc.local
[CloudLinux 7 only] The profile sets the I/O scheduler. For normal disks the Deadline scheduler is set to improve IO performance and decrease IO latency; for SSDs - noop. When configuring the scheduler, the I/O queue is changed and set to the value 1024, which improves overall I/O subsystem performance by caching IO requests in memory.
Disables transparent HugePage .
Provides adjustment group file for OOM-Killer to kill overrun php, lsphp and Phusion Passenger workers first.
To install:
yum install tuned-profiles-cloudlinux
To start using a profile:
tuned-adm profile cloudlinux-default
To stop using a profile:
tuned-adm off
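To confirm which profile is currently in effect:

tuned-adm active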
Kernel config variables
Starting from lvemanager 4.0-25.5 , lve-utils 3.0-21.2 , and cagefs-6.1-26 , CloudLinux OS utilities can read/write kernel config variables from a custom config /etc/sysctl.d/90-cloudlinux.conf (earlier, the parameters were read/written only from sysctl.conf ).
CloudLinux OS utilities get parameter by using sysctl system utility. So for now, even if a config variable is not set in the sysctl.conf and in the /etc/sysctl.d config files, this variable will be read by sysctl utility directly from /proc/sys .
If some kernel variable was set in /etc/sysctl.d/90-cloudlinux.conf, run sysctl --system to apply it.
Starting from lve-utils-3.0-23.7 fs.proc_super_gid and fs.symlinkown_gid will be migrated (one time) from /etc/sysctl.conf into /etc/sysctl.d/90-cloudlinux.conf .
For lve-utils versions from 3.0-21.2 to 3.0-23.7 the migration was performed the same way, but during every package install/update. Variables setting guidelines are the same as for CageFS (see above).
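For example, to set a variable in the CloudLinux drop-in config and apply it (the value is illustrative):

echo "fs.proc_super_gid = 600" >> /etc/sysctl.d/90-cloudlinux.conf
sysctl --system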
As requested by some of our customers, we've implemented a new kernel setting to hide /proc/net/{tcp,udp,unix} files for additional security/isolation.

You can hide them by running the sysctl -w kernel.proc_disable_net=1 command; by default, the setting is 0 (nothing hidden).
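To make the setting persistent across reboots, it can also be added to a sysctl config file, for example the CloudLinux drop-in mentioned above (a suggestion, not a requirement):

sysctl -w kernel.proc_disable_net=1
echo "kernel.proc_disable_net = 1" >> /etc/sysctl.d/90-cloudlinux.conf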
Virtualized /proc filesystem
You can prevent user from seeing processes of other users (via ps/top command) as well as special files in /proc file system by setting fs.proc_can_see_other_uid sysctl.
To do that, edit /etc/sysctl.conf
fs.proc_can_see_other_uid=0
fs.proc_super_gid=600
# sysctl -p
If fs.proc_can_see_other_uid is set to 0, users will not be able to see special files. If it is set to 1, users will see other process IDs in the /proc filesystem.
fs.proc_super_gid=XX
The fs.proc_super_gid option sets the group ID which will see system files in /proc; add any users to that group so they will see all files in /proc. This is usually needed by monitoring users like nagios or zabbix, and the cldetect utility can configure a few of the most commonly used monitoring software packages automatically.
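A hypothetical example of wiring this up for a monitoring user (the group name is illustrative; the GID must match fs.proc_super_gid):

groupadd -g 600 procsuper       # GID must match fs.proc_super_gid
usermod -aG procsuper nagios    # nagios will now see the full /proc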
Virtualized /proc filesystem will only display following files (as well as directories for PIDs for the user) to unprivileged users:
/proc/cpuinfo
/proc/version
/proc/stat
/proc/uptime
/proc/loadavg
/proc/filesystems
/proc/stat
/proc/cmdline
/proc/meminfo
/proc/mounts
/proc/tcp
/proc/tcp6
/proc/udp
/proc/udp6
/proc/assocs
/proc/raw
/proc/raw6
/proc/unix
/proc/dev
Note
Starting from lve-utils 3.0-21.2, fs.proc_super_gid parameter in da_add_admin utility is written to /etc/sysctl.d/90-cloudlinux.conf.
Remounting procfs with "hidepid" option
In lve-utils-2.1-3.2 and later, /proc can be remounted with the hidepid=2 option to enable additional protection for procfs. This remount is performed in the lve_namespaces service.
This option is in sync with the fs.proc_can_see_other_uid kernel parameter described above.
When /etc/sysctl.conf does not contain the fs.proc_can_see_other_uid setting, the protection is off (procfs is remounted with the hidepid=0 option). In this case the fs.proc_super_gid setting is ignored. Users are able to see full /proc, including processes of other users on a server. This is the default behavior.
If /etc/sysctl.conf contains the fs.proc_can_see_other_uid=1 setting, then /proc will be remounted with the hidepid=0 option (disable hidepid protection for all users).
If /etc/sysctl.conf contains the fs.proc_can_see_other_uid=0 setting, then /proc will be remounted with the hidepid=2 option (enable hidepid protection for all users).
If /etc/sysctl.conf contains the fs.proc_can_see_other_uid=0 and fs.proc_super_gid=$GID settings, then /proc will be remounted with the hidepid=2,gid=$GID options (enable hidepid for all users except users in the group with gid $GID).
To apply /etc/sysctl.conf changes, you should execute

service lve_namespaces restart

or run the script directly:

/usr/share/cloudlinux/remount_proc.py
So, the admin can prevent users from seeing processes of other users via the fs.proc_can_see_other_uid and fs.proc_super_gid settings in /etc/sysctl.conf, like earlier.
Also, you can override this by specifying desired options for /proc in /etc/fstab.
To disable hidepid, add to /etc/fstab the following:

proc /proc proc defaults,hidepid=0,gid=0 0 0
To enable hidepid protection (with the clsupergid group exempt), add:

proc /proc proc defaults,hidepid=2,gid=clsupergid 0 0
Then execute mount -o remount /proc to apply /etc/fstab changes.
Nevertheless, we recommend managing procfs mount options via /etc/sysctl.conf as described above for backward compatibility.
Note
There is a known issue on CloudLinux 6 systems. User cannot see full /proc inside CageFS even when this user is in “super” group, that should see full /proc. This issue does not affect users with CageFS disabled. CloudLinux 7 is not affected.
Note
Starting from lve-utils 3.0-21.2, lve_namespaces service can read parameters from the /etc/sysctl.d/90-cloudlinux.conf.
Note
Even if fs.proc_can_see_other_uid and fs.proc_super_gid parameters are not set in config files but specified in /proc/sys, then when restarting lve_namespaces service the parameters from /proc/sys will be used. So, /proc will be remounted according to these parameters.
Ptrace blockPtrace block
Starting with kernel 3.10.0-427.18.s2.lve1.4.21 ( CloudLinux 7) and 2.6.32-673.26.1.lve1.4.17 ( CloudLinux 6) we re-implemented ptrace block to protect against ptrace family of vulnerabilities. It prevents end user from using any ptrace related functionality, including such commands as strace, lsof or gdb .
By default, CloudLinux doesn't prevent ptrace functionality.
Defaults:
kernel.user_ptrace = 1 kernel.user_ptrace_self = 1
The option kernel.user_ptrace disables PTRACE_ATTACH functionality, option kernel.user_ptrace_self disables PTRACE_TRACEME .
To disable all ptrace functionality change both sysctl options to 0, add this section to /etc/sysctl.conf :
## CL. Disable ptrace for users kernel.user_ptrace = 0 kernel.user_ptrace_self = 0 ##
Apply changes with:
$ sysctl -p
Different software could need different access to ptrace , you may need to change only one option to 0 to make them working. In this case, there will be only partial ptrace protection.
WARNING
ptrace protection is known to break PSA service for Plesk 11
Xen XVDAXen XVDA
2.6.32 kernels have different mode of naming Xen XVDA drives.
By adding xen_blkfront.sda_is_xvda=0 to kernel boot line in grub.conf you will make sure no naming translation is done, and the drives will be identified as xvde .
By default, this option is set to 1 in the kernel, and drives are detected as xvda . This is needed only for CloudLinux 6 and Hybrid kernels.
IO limits latencyIO limits latency
[lve1.2.29+]
When customer reaches IO Limit, the processes that are waiting for IO will be placed to sleep to make sure they don't go over the limit. That could make some processes sleep for a very long time. By defining IO latency, you can make sure that no process sleeps due to IO limit for more then X milliseconds. By doing so, you will also let customers to burst through the limits, and use up more than they were limited too in some instances.
This option is OFF by default.
For CloudLinux 6 and CloudLinux 7 (since Hybrid kernel lve1.4.x.el5h):
To enable IO Limits latency and set it to 10 seconds:
# echo 10000 > /sys/module/kmodlve/parameters/latency
# echo 2000000000 > /sys/module/kmodlve/parameters/latency
It is possible to set, for example, 1000 as a permanent value. To do so, create a file /etc/modprobe.d/kmodlve.conf with the following content:
options kmodlve latency=1000
For CloudLinux 5 (OBSOLETE):
To enable IO Limits latency and set it to 10 seconds:
# echo 10000 > /sys/module/iolimits/**parameters/latency
# echo 2000000000 > /sys/module/iolimits/**parameters/latency
Reading LVE usageReading LVE usage
CloudLinux kernel provides real time usage data in file.
All the statistics can be read from that file in real time. Depending on your kernel version you will get either Version 6 of the file, or version 4 of the file. You can detect the version by reading the first line of the file. It should look like:
6:LVE... for version 6
4:LVE... for version 4
First line presents headers for the data. Second line shows default limits for the server, with all other values being 0. The rest of the lines present limits & usage data on per LVE bases.
Version 6 (CL6 & hybrid kernels):
6:LVE EP lCPU lIO CPU MEM IO lMEM lEP nCPU fMEM fEP lMEMPHY lCPUW lNPROC MEMPHY fMEMPHY NPROC fNPROC 0 0 25 1024 0 0 0 262144 20 1 0 0 262144 100 0 0 0 00 300 0 25 1024 1862407 0 0 262144 20 1 0 0 262144 100 0 31 000
FlashcacheFlashcache
Note
Available only for x86_64, CloudLinux 6 and Hybrid servers .
To install on CloudLinux 6 & Hybrid servers:
$ yum install flashcache
More info on flashcache :
ArchLinux has a good page explaining how to use flashcache :
OOM killer for LVE processesOOM killer for LVE processes
When LVE reaches its memory limit, the processes inside that LVE are killed by OOM Killer and appropriate message is written to /var/log/messages . When any LVE hits huge number of memory limits in short period of time, then OOM Killer could cause system overload. Starting from kernel 2.6.32-673.26.1.lve1.4.15 ( CloudLinux 6) and from kernel 3.10.0-427.18.2.lve1.4.14 ( CloudLinux 7) heavy OOM Killer could be disabled. If so - lightweight SIGKILL will be used instead.
By default OOM Killer is enabled, to disable it please run:
For CloudLinux 6 :
# echo 1 > /proc/sys/ubc/ubc_oom_disable
Also, add the following to /etc/sysctl.conf file to apply the same during boot:
ubc.ubc_oom_disable=1
For CloudLinux 7:
# echo 1 > /proc/sys/kernel/memcg_oom_disable
Also, add the following to /etc/sysctl.conf file to apply the same during boot:
kernel.memcg_oom_disable=1
File system quotasFile system quotas
In Ext4 file system, the process with enabled capability CAP_SYS_RESOURCE is not checked on the quota exceeding by default. It allows userland utilities selectorctl and cagefs to operate without fails even if a user exceeds a quota.
To disable quota checking in XFS file system set cap_res_quota_disable option to 1 using the following command:
# echo 1 > /proc/sys/fs/xfs/cap_res_quota_disable | https://docs.cloudlinux.com/cloudlinux_os_kernel/ | 2019-09-15T10:28:22 | CC-MAIN-2019-39 | 1568514571027.62 | [] | docs.cloudlinux.com |
pg_resgroup
A newer version of this documentation is available. Click here to view the most up-to-date release of the Greenplum 5.x documentation.
pg_resgroup
The pg_resgroup system catalog table contains information about Greenplum Database resource groups, which are used for managing concurrent statements, CPU, and memory resources. This table, defined in the pg_global tablespace, is globally shared across all databases in the system. | https://gpdb.docs.pivotal.io/510/ref_guide/system_catalogs/pg_resgroup.html | 2019-09-15T10:34:21 | CC-MAIN-2019-39 | 1568514571027.62 | [] | gpdb.docs.pivotal.io |
Frontend Submissions - Vendor Accounts
With Frontend Submissions, to allow vendors to register accounts, enable it from settings. The option to turn on or off vendor registration is under Downloads → Settings → FES → Permissions.
If Vendor registration is open to all then the end user may go to your Vendor Registration Page to sign up. When someone signs up as a Vendor they will get the user role of "Subscriber", unless they already had a WordPress user account with a different user role.
If the end user is NOT signed in as a WordPress user, this is the form they will see.
If the end user IS signed in as a WordPress user, this is the form they will see. As you can see it's much simpler, because it's taking existing information from the WordPress user.
After Registration
After completion of the registration form the new Vendor will see one of two things:
- If you have auto-approval turned on then they are sent directly to the Seller Dashboard where they can get started creating content
- If you have auto-approval turned off then they will see a message that says "Your application is pending". The store administration will receive an email alerting them to the fact that a new Vendor registration requires approval.
Manually Grant Vendor Status
To manually make any WordPress user into an FES Vendor browse to their WordPress profile page. Find the section titled Easy Digital Downloads Frontend Submissions. In that section is a link titled Make Vendor. Simply clicking that link will make that user a Vendor.
This same area can be used to see that a user is a Pending Vendor, and link to the Vendor Profile. In the Vendor Profile a user can be Approved as a Vendor.
Vendor Management
You may manage Vendors on the EDD FES → Vendors page. Note, this title may change if you've changed the Vendors constant to something else.
Here you'll see a list of all Vendors. Below is a Vendor that has applied, but is still Pending, as well as an approved vendor. The three icons under the name allow you to View, Approve, or Reject the application.
If you click the View icon then you'll be taken to the Vendor Information page. The Vendor information area uses a tabbed interface, with the tabs along the right side.
Vendor Profile
The first tab is the Vendor Profile and it will look something like this:
The Vendor Profile tab contains:
- Vendor name
- Vendor email
- Vendor sign-up date
- User ID
- User status
- Vendor mailing address
- Number of sales
- Value of sales
- A list of products
- A button to Revoke this Vendor (which will also delete all products)
- A button to Suspend this Vendor
- A link to edit the information on this panel
Vendor Notes
The Vendor Notes tab is designed for site administrators to make private notes about Vendors. These notes are visible ONLY to site administrators. Below is an example with one note.
Vendor Registration
This tab allows site administrators to change the Vendors
- Display Name
Vendor Profile
This tab sets the name of the Vendor's store as well as allowing for a custom email address for the Vendor contact form. If no email address is provided then the address from the WordPress user profile will be used.
Vendor Products
This tab lists all of the Vendor's products, showing ID, Title, Status, and number of sales. Titles are links to the product admin pages. There's also a sum of the number of sales.
Vendor Reports
This tab can show both earnings and sales over time. It can be filtered to show
- Today
- Yesterday
- Last Week
- Last Month
- This Quarter
- Last Quarter
- This Year
- Last Year
- Custom
Vendor Exports
This tab allows the site administrator to download a PDF with Sales and Earnings for the current year as well as a Customer list for any given product or all products. The customer list could be emails only or emails and names. The customer list is delivered in CSV format.
Limiting the number of products a vendor can publish
At this time, Frontend Submissions doesn't support this out of the box. However, you can do this using Restrict Content Pro and the free EDD FES Vendor Limits add-on for Restrict Content Pro. | https://docs.easydigitaldownloads.com/article/952-frontend-submissions-vendor-accounts | 2019-09-15T09:38:28 | CC-MAIN-2019-39 | 1568514571027.62 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5463d152e4b0f639418397ca/images/56e3476c90336026d87177cf/file-HZfy9H9c0y.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5463d152e4b0f639418397ca/images/559fbefbe4b03e788eda21c5/file-G2YfIaWfUy.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5463d152e4b0f639418397ca/images/559fbee5e4b03e788eda21c3/file-4QpM8pey2S.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5463d152e4b0f639418397ca/images/562986d1903360610fc6acf0/file-fh2009xb7q.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5463d152e4b0f639418397ca/images/562a2706c69791452ed4e499/file-QjWaRZF5os.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5463d152e4b0f639418397ca/images/56290a21c69791452ed4e0c3/file-4ygZwWFgoa.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5463d152e4b0f639418397ca/images/56295505c69791452ed4e284/file-i0GaBOcHIj.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5463d152e4b0f639418397ca/images/562959ca903360610fc6ac1a/file-40cD2c6RTo.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5463d152e4b0f639418397ca/images/56295a5a903360610fc6ac20/file-scs9JwCumU.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5463d152e4b0f639418397ca/images/56295bf2903360610fc6ac2e/file-JYl18QwGWe.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5463d152e4b0f639418397ca/images/56295d64903360610fc6ac34/file-fpwmghY6m4.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5463d152e4b0f639418397ca/images/56297e0fc69791452ed4e347/file-gnP8nh3L66.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5463d152e4b0f639418397ca/images/56297f76903360610fc6ace6/file-29o65uopte.png',
None], dtype=object) ] | docs.easydigitaldownloads.com |
Range.ClearHyperlinks method (Excel)
Removes all hyperlinks from the specified range.
Syntax
expression.ClearHyperlinks
expression A variable that returns a Range object.
Return value
Nothing
Remarks
Calling the ClearHyperlinks method on the specified range is equivalent to using the Clear Hyperlinks command from the Clear drop-down list in the Editing section of the Home tab. Only hyperlinks will be removed; all other cell content, such as text and formatting, will be unaffected.
Support and feedback
Have questions or feedback about Office VBA or this documentation? Please see Office VBA support and feedback for guidance about the ways you can receive support and provide feedback. | https://docs.microsoft.com/en-us/office/vba/api/excel.range.clearhyperlinks | 2019-09-15T10:43:02 | CC-MAIN-2019-39 | 1568514571027.62 | [] | docs.microsoft.com |
Set Up Validation of Purchase Amounts
In Business Central, you can activate the Check Doc. Total Amounts function to validate the total amount of purchase documents before posting a purchase invoice and purchase credit memo. By default, the purchase document total amount is validated when you post. The total amount of the inserted purchase lines must be equal to the amount including VAT and the VAT amount. To validate the purchase document amount automatically, you must enter the document amount including VAT and the document amount VAT in the Purchase Invoice or Purchase Credit Memo page.
If you have only one purchase line or several purchase lines with the same VAT percentage, the correct document amount VAT is calculated automatically when you insert the purchase lines and the document amount including VAT. If you have several purchase lines with different VAT percentages, the document amount VAT value must be changed manually.
You can also locate when the document total amounts and the total amounts of the inserted purchase lines are different. You can activate the Show Totals on Purch. Inv./CM. option to view the following in the inserted purchase lines:
- Total amount
- Total base amount
- Total VAT amount
- Total amount including VAT
The calculated amounts are displayed in the purchase invoice or purchase credit memo. By default, this total amount is not displayed.
You can activate this option only if the purchase invoice or purchase credit memo has:
- A minimum of one purchase line.
- The quantity field specified.
To set up validation of total amounts for purchase documents
Choose the
icon, enter Purchases & Payables Setup, and then choose the related link.
On the General FastTab, fill in the fields as described in the following table.
Choose the OK button.
See Also
Netherlands Local Functionality
Setting Up Purchases
Commentaires
Chargement du commentaire... | https://docs.microsoft.com/fr-fr/dynamics365/business-central/localfunctionality/netherlands/how-to-set-up-validation-of-purchase-amounts | 2019-09-15T10:38:28 | CC-MAIN-2019-39 | 1568514571027.62 | [] | docs.microsoft.com |
Uploading Scene Changes from Harmony to WebCC
Once you are done working on your local copy of a scene you exported from WebCC, you can upload the changes you made directly from Harmony to the original version of the scene on the WebCC server.
NOTEUploading changes to your scene to WebCC will cause the thumbnail and preview movie to be automatically regenerated in WebCC. It may however take a few minutes for the thumbnail and preview movie to update.
- In the top menu, select File > Download Database Changes.
If you have not yet done so during this Harmony session, you will be prompted to enter your WebCC username and password:
Click on the OK button.
Harmony will upload the changes you made to the scene to the WebCC server. The version of the scene on the database will correspond to your local copy. | https://docs.toonboom.com/help/harmony-15/advanced/server/webcc/upload-changes-to-webcc.html | 2018-10-15T13:30:25 | CC-MAIN-2018-43 | 1539583509196.33 | [array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object) ] | docs.toonboom.com |
Starting the GPI Data Pipeline¶
The pipeline software is designed to run in two different IDL sessions:
- one for the data processing,
- and one for the graphical interfaces.
Splitting these tasks between two processes enables the GUIs to remain responsive even while long computations are running.
Exactly how you start up those two IDL sessions varies with operating system, and with whether you have installed from source or compiled code.
Starting from source code (either from the repository or zip files)¶
Starting the pipeline manually
On any OS you can simply start up the pipeline manually.
Start an IDL session. Run
IDL> gpi_launch_pipeline
Start a second IDL session. Run
IDL> gpi_launch_guis
If in the first IDL session you see a line reading “Now polling for data in such-and-such directory”, and the Status Console and Launcher windows are displayed as shown below, then the pipeline has launched successfully.
Mac OS and Linux startup script
On Linux or Mac, a convenient shell script is provided in
pipeline/scripts that starts 2 xterms, each with an IDL session, and runs the above two commands. This script is called
gpi-pipeline:
shell> gpi-pipeline
You should see two xterms appear, both launch IDL sessions, and various commands run and status messages display.
If in the second xterm you see a line reading “Now polling for data in such-and-such directory”, and the Status Console and Launcher windows are displayed as shown below, then the pipeline has launched successfully.
Warning
In order for the
gpi-pipeline script to work, your system must be set up such that IDL can be launched from the command line by running
idl. The script will not execute correctly if you use an alias to start IDL rather than having the IDL executable in your path. In this case you will probably get an error in the xterms along the lines lines of: ‘xterm: Can’t execvp idl: No such file or directory’. To check on how you start IDL, run:
shell> which idl
A blank output (or an output that says ‘aliased’) means that idl is not in your path. To add it, either edit your
$PATH variable, or go to a user-writeable directory in your path (you can check which directories are in your path by running
echo $PATH). Then create a symbolic link in the directory by running:
shell> ln -s /path/to/idl idl
If you encounter problems with the startup script, just start the IDL sessions manually as described above.
Windows startup script
On Windows, there is a batch script in the
pipeline/scripts directory called
gpi-pipeline-windows.bat. Double click it to start the GPI pipeline.
If in the first IDL session you see a line reading “Now polling for data in such-and-such directory”, and the Status Console and Launcher windows are displayed as shown below, then the pipeline has launched successfully.
For convenience, you can create a shortcut of
gpi-pipeline-windows.bat by right clicking on the file and selecting the option to create a shortcut. You can then place this on your desktop, start menu, or start screen to launch the pipeline from where it is convenient for you.
If you encounter problems with the startup script, just start the IDL sessions manually as described above.
Starting compiled code with the IDL Virtual Machine¶
The compiled binary versions of DRP applications that can be started with the IDL Virtual Machine are:
gpi_launch_pipeline.savstarts the pipeline controller and the status console
gpi_launch_guis.savstarts the Launcher and other GUIs.
These files are located in the
executables subdirectory of the distributed zip files.
How to run a .sav file in the IDL Virtual Machine depends on your operating system. Please see Exelis’ page on Starting a Virtual Machine Application for more details.
Mac OS and Linux manual startup of the Virtual Machine
Mac and Linux users can launch the IDL virtual machine and then tell it to launch a particular .sav file. You’ll need to repeat this for the two GPI pipeline IDL sessions.
The following commands assume that the environment variables
$IDL_DIR and
$GPI_DRP_DIR have been set, either by the
gpi-setup-nix script or manually:
Enter the following at the command line to start an IDL session for the pipeline backbone:
unix% $IDL_DIR/bin/idl -rt=$GPI_DRP_DIR/executables/gpi_launch_pipeline.sav
The IDL Virtual Machine logo window will be displayed with a “Click to continue” message. Click anywhere in the IDL logo window to continue and run the .sav file.
Repeat the above process to start a second IDL session for the pipeline GUIs:
unix% $IDL_DIR/bin/idl -rt=$GPI_DRP_DIR/executables/gpi_launch_guis.sav
You may also launch the IDL Virtual Machine and use its file selection menu to locate the .sav file to run.
Enter the following at the UNIX command line:
>>> $IDL_DIR/bin/idl -vm
The IDL Virtual Machine logo will be displayed. Click anywhere in the IDL Virtual Machine window to display a file selection dialog box.
Locate and select the desired .sav file and click OK to open that file.
If in the first IDL session you see a line reading “Now polling for data in such-and-such directory”, and the Status Console and Launcher windows are displayed as shown below, then the pipeline has launched successfully.
Mac OS and Linux startup script
Just like for the source code install, a script is provided in
pipeline/scripts that launches 2 IDL sessions, and starts the pipeline code.
While the under the hood implementation is slightly different, the script name and effective functionality are identical.
shell> gpi-pipeline
If you encounter problems with the startup script, just start the IDL sessions manually as described above.
Warning
On Mac OS, in theory it ought to be possible to start the pipeline by double clicking the .sav files or .app bundles produced by the IDL compiler. However, if you start them from the Finder, then they will not have access to any environment variables that define paths, since those are set in your shell configuration files, which the Finder knows nothing about.
We recommend you start the IDL virtual machine settings from inside Terminal or an xterm, as described above.
If you really do want to start from double clicking in the Finder, you will need to define all the pipeline file paths using your
.gpi_pipeline_settings file instead of via environment variables. See Configuring the Pipeline.
Windows manual startup of the Virtual Machine
Most simply, if your installation of Windows has file extensions configured to associate .sav files with IDL, you can just double click.
To open a .sav file from the IDL Virtual Machine icon:
- Launch the IDL Virtual Machine in the usual manner for Windows programs, either by selecting the IDL Virtual Machine from your Start Menu, or double clicking a desktop icon for the IDL Virtual Machine.
- Click anywhere in the IDL Virtual Machine window to display the file selection menu.
- Locate and select the .sav file, and double-click or click Open to run it.
To run a .sav file from the command line prompt:
Open a command line prompt. Select Run from the Start menu, and enter cmd.
Change directory (cd) to the
IDL_DIR\bin\bin.platformdirectory, where platform is the platform-specific bin directory.
Enter the following at the command line prompt:
>>> idlrt -vm=<path><filename>
where
<path>is the path to the .sav file, and
<filename>is the name of the .sav file.
Pipeline IDL Session¶
The IDL session running the pipeline should immediately begin to look for new recipes in the queue directory. A status window will be displayed on screen (see below). On startup, the pipeline will display status text that looks like:
% Compiled module: [Lots of startup messages] [...] 01:26:22.484 Now polling and waiting for Recipe files in /Users/mperrin/data/GPI/queue/ ***************************************************** * * * GPI DATA REDUCTION PIPELINE * * * * VERSION 1.0 * * * * By the GPI Data Analysis Team * * * * Perrin, Maire, Ingraham, Savransky, Doyon, * * Marois, Chilcote, Draper, Fitzgerald, Greenbaum * * Konopacky, Marchis, Millar-Blanchaer, Pueyo, * * Ruffio, Sadakuni, Wang, Wolff, & Wiktorowicz * * * * For documentation & full credits, see * * * * * ***************************************************** Now polling for Recipe files in /Users/mperrin/data/GPI/queue/ at 1 Hz
If you see the “Now polling” line at the bottom, then the pipeline has launched successfully.
The pipeline will create a status display console window (see screen shot below). This window provides the user with progress bar indicators for ongoing actions, a summary of the most recently completed recipes, and a view of log messages. It also has a button for exiting the DRP (though you can always just control-C or quit the IDL window too). This is currently the only one of the graphical tools that runs in the same IDL session as the main reduction process.
Above: Snapshot of the administration console.
GUI IDL Session¶. | http://docs.planetimager.org/pipeline/usage/starting.html | 2018-10-15T14:00:10 | CC-MAIN-2018-43 | 1539583509196.33 | [array(['../_images/GPI-DRP-Status-Console1.png',
'../_images/GPI-DRP-Status-Console1.png'], dtype=object)
array(['../_images/GPI-launcher2.png', '../_images/GPI-launcher2.png'],
dtype=object) ] | docs.planetimager.org |
Connect administration Administrators can configure various performance settings and features that impact both Connect Chat and Connect Support. Note: There are also administrative options specifically for Connect Chat or Connect Support. For more information, see Connect Chat administration and Connect Support administration. Configure the polling intervalThe polling interval determines how frequently the system polls for new Connect messages.Disable the Connect overlayThe Connect overlay is enabled by default and is integrated with the standard user interface. You can disable the Connect overlay.Administer Connect actionsYou can create or modify Connect actions to provide custom functionality in Connect Chat or Connect Support conversations. | https://docs.servicenow.com/bundle/geneva-servicenow-platform/page/use/collaboration/concept/c_ConnectAdministration.html | 2018-02-18T02:53:35 | CC-MAIN-2018-09 | 1518891811352.60 | [] | docs.servicenow.com |
Software Architecture Overview
In order to offer S.O.N.I.A. Software more modularity and maintainability, we decided to construct our architecture with layers. All layers should be totaly independent of the one above in order to minimize the impact of a modification on a packet API.
As you can see on the design scheme, you will find three different layers wich are:
- The Providers
- The Stacks
- The Controllers
There is also two categories of packages that are used by all the packages:
- The Libraries
- The GUI Softwares
As the libraries are used by all the software, this is a critical component of our system.
Providers
The provider layer contains all packages that allows us to have the raw data dans basic access on our devices. Theses packages aims to be general and independent of the model of the device that is being used.
ROS is great for this king of application, we provide services and messages on every provider for accessing the features and datas of the devices.
The design also aims to be as flexible as possible and changing the model of a specific provider should be a matter of adding a driver class.
The providers does not have any dependencies on the rest of the system except for the libraries.
Stacks
The stacks are the main part of our software. These are the packages that contains our intelligence, all our processing, error correction and other algorithms.
The stacks are using the providers topics and services in order to get there feature and data. There is also few coupling on the stacks providers. For example, the mapping stack depend on the navigation stack, and reciprocally.
The result of the stacks are metadata that consider the environment of the submarine. The algorithms are combining all the raw data with the context of the submarine to provide consistent and accurate metadata, such as the location and direction of the submarine, the position of a specific object, the result of image processing on the cameras images, etc.
As the stack packages are usually substential packages, we insist on the design and conception of these packages. Therefore, you will be able to find a complete documentation of the stack pages along side of this documentation.
Controllers
Libraries
S.O.N.I.A. Software uses multiples librairies.
- Armadillo is used for linear algebra
- Mlpack (with Armadillo) for algorithm. See Proc Mapping | http://sonia-auv.readthedocs.io/software/overview/ | 2018-02-18T03:23:19 | CC-MAIN-2018-09 | 1518891811352.60 | [array(['../../assets/img/software_auv7_design.jpeg', 'AUV7 Design'],
dtype=object) ] | sonia-auv.readthedocs.io |
The Nylas APIs
The Nylas Platform provides a modern API that works with existing email providers. It makes it simple and fast to integrate your app with a user's email, contacts, or calendar data, and eliminates the complexity of working with old protocols like IMAP and MIME.
The API is designed around REST principles, providing simple and predictable URIs to access and modify objects. Requests support standard HTTP methods like GET, PUT, POST, and DELETE and standard status codes. Response bodies are always UTF-8 encoded JSON objects, unless explicitly documented otherwise.
Looking to sync and process mail on-the-fly for many accounts? Check out the Nylas Sync Strategies article in our Knowledge Base for our recommended approach.
v2.0 of the API
These docs pertain to version 2.0 of the API. Check out the API versioning section to learn more about how to switch between versions. You can find docs for version 1.0 here.
There are two ways you can authenticate users to your application. If you don't want to build the authentication frontend yourself, you can use our Hosted Authentication flow to get started more quickly. Keep in mind this may be faster initially, but you'll have less control over the authentication user experience. You do have the ability to update your company logo and name, white-labeling the experience for your users.
The other option provides much more control and gives you the ability to natively build an authentication flow into your application. With Native Authentication, a user never has to leave your application to connect their account and you have the freedom to design what the entire process looks like.
Visual Learner?
Check out our guide on Hosted vs Native authentication which includes videos showing what each flow might look like to your end users.
Once you've successfully connected an email account using one of the authentication methods outlined above, you'll have an
access_token that allows you to pull email, contact, and calendar data for that email account. To learn more about how to use this
access_token to make requests to the Nylas API see this guide.
Nylas access_tokens don't expire automatically!
Regardless of whether you use Hosted Authentication or Native Authentication to retrieve an account's
access_token, it's very important to keep in mind that Nylas
access_tokens never expire. To revoke an access token, you'll need to explicitly call the /oauth/revoke endpoint. This means if you re-authenticate an account, it's possible you can have two active
access_tokens. It's your application's responsibility to keep track of these access tokens and revoke them when appropriate.
You can add up to 10 accounts for free to test your integration. If you'd like to add more accounts, you'll need to add your credit card information in the Nylas Dashboard.
Create your developer account
Before you can interact with the Nylas API, you need to sign up for a developer account, which will generate an API
client_id and
client_secret for you.
Hosted Authentication
The Nylas platform uses the OAuth 2.0 protocol for simple, effective authorization. Before making API calls on behalf of a user, you need to fetch an
access_token that grants you access to their email. Once you've obtained a token, you include it with Nylas API requests as the HTTP Basic Auth Username. Although you'll immediately have access to the API once you authorize an account, it may take some time until all historical messages are synced.
Nylas supports both two-legged and three-legged OAuth. It's important to identify which flow you should use:
You should use Server-side (explicit, three-legged) OAuth if you'll be using the Nylas API from:
- backend web services (Ruby on Rails, PHP, Python, etc.)
- a service that will store a large number of access tokens
- a service running on Amazon EC2 or another cloud platform
You should use Client-side (implicit, two-legged) OAuth if you'll be using the Nylas API from:
- a native app on desktop or on mobile
- a client-side Javascript application
- any other app that does not have a server component
Server Side (Explicit) Flow
Step 1
From your application, redirect users to Nylas's hosted authentication page at the /oauth/authorize endpoint, passing the parameters detailed in /oauth/authorize. Note that for this server-side flow,
response_type should be set to
code.
Step 2
Nylas will present your user with the correct sign in panel based on their email address. For example, a user with a Gmail address will see the Gmail “Authorize this Application” screen, while a user with a Yahoo address is shown a Yahoo sign in panel.
If (and only if) Nylas cannot auto detect the user's email provider from their address, the user will see a provider selection screen first.
For Exchange users, clicking "Advanced Settings" will enable the user to enter a login name and/or Exchange server. The majority of Exchange users can log on with their email address and auto-detected server details, but some will have to enter this additional information.
Step 3
Once the user has signed in, their browser will be redirected to the
redirect_uri you provided. If authentication was successful, Nylas will include a
code parameter in the query string.
Make an HTTP POST to the /oauth/token endpoint to exchange the
code for an
access_token. See /oauth/token for details. Make sure to securely store the
access_token and provide it as the HTTP Basic Auth Username to make API calls on behalf of the user.
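Here is a minimal sketch of that exchange in Python. The https://api.nylas.com base URL and the standard OAuth 2.0 parameter names are assumptions; see /oauth/token for the authoritative parameter list.

```python
import requests

NYLAS_API = "https://api.nylas.com"  # assumed base URL

def exchange_code_for_token(client_id, client_secret, code):
    """Exchange the one-time authorization code for a long-lived access_token."""
    resp = requests.post(f"{NYLAS_API}/oauth/token", data={
        "client_id": client_id,          # from the Nylas developer console
        "client_secret": client_secret,  # keep this on your server only
        "grant_type": "authorization_code",
        "code": code,                    # the ?code=... value from the redirect
    })
    resp.raise_for_status()
    return resp.json()["access_token"]   # store securely; used as the Basic Auth username
```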
Client Side (Implicit) Flow
Step 1
From your application, redirect users to Nylas's hosted authentication page at the /oauth/authorize endpoint, passing the parameters detailed in /oauth/authorize. Note that for this client-side flow,
response_type should be set to
token.
Step 2
Nylas will present your user with the correct sign-in flow based on their email address. This is exactly the same as step 2 of the server-side flow.
Step 3
Once the user has signed in, their browser will be redirected to the
redirect_uri you provided. If authentication was successful, Nylas will include a
token parameter in the query string. That's it! We recommend storing the
access_token and then removing it from the URL fragment with JavaScript. This is the token you will provide as a HTTP Basic Auth Username to make API calls on behalf of the user.
Note:
If you're building a mobile app or desktop application, you may want to use a custom URL scheme to listen for the redirect to happen in the user's web browser. For example,
myapp://app/auth-response.
/oauth/revoke
You can easily revoke an access token by issuing a POST request to the /oauth/revoke endpoint. Include the to-be-revoked access token as the HTTP Basic Auth username.
A 200 status code response with an empty body signifies that the token has been successfully revoked and can no longer be used.
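For example, a sketch of the revoke call in Python (the https://api.nylas.com base URL is an assumption), using the token itself as the Basic Auth username and an empty password:

```python
import requests

def revoke_token(access_token):
    # The to-be-revoked token is the Basic Auth username; the password is left blank.
    resp = requests.post("https://api.nylas.com/oauth/revoke",
                         auth=(access_token, ""))
    resp.raise_for_status()  # 200 with an empty body means the token is revoked
```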
Re-authentication
When a user's credentials or server settings (for IMAP/SMTP or Exchange) change, the account will stop syncing and its status will change to 'invalid-credentials'. For the account sync to resume, you must ask the user to go through the authentication flow again with their new credentials or server settings.
For a credentials change, once the user has re-authenticated, account sync simply resumes.
Important note!
For a server settings change, for example if the user changes the IMAP/SMTP or Exchange server endpoint, all previously synced Nylas API object IDs for the account will be invalidated. The user will be associated with a new account and
account_id, and the Nylas API token returned from reauthentication will point to this account.
Your application must detect this new
account_id and take appropriate measures to invalidate any existing IDs you have cached.
Native Authentication
This is a set of endpoints for programmatically creating and updating accounts on Nylas Cloud. It allows you to build a signup form for a user to connect their mailbox to your application.
Connecting a new account is a two-step process, and follows semantics similar to OAuth. The first step is verifying credentials from a user, and the second step is associating this new account with your application in order to receive an API access token.
There are two main endpoints:
- /connect/authorize for authenticating a mailbox
- /connect/token for connecting the mailbox to your Nylas Cloud app
All API requests must be made over SSL (HTTPS). The Nylas API is not available via unencrypted HTTP.
/connect/token
This endpoint is where your application exchanges the code received from
/connect/authorize and receives an
access_token. This associates the mailbox with your Nylas Cloud app.
A successful response from this will be an account object with an
access_token attribute. Once you’ve obtained a token, you include it with Nylas API requests as the
HTTP Basic Auth Username.
You can remove this account from your Nylas Cloud app in the Nylas API console.
Never send your client secret to a browser!
This request should be made from your server. It's important that you never send your client secret to a browser. In order to do this, your browser JS code should securely send the received code in the previous step to your web app, which in turn makes the request to
/connect/token.
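A server-side sketch of that exchange is below. The https://api.nylas.com base URL and the exact body field names are assumptions; check the native authentication reference for the authoritative request format.

```python
import requests

def connect_token(client_id, client_secret, code):
    """Trade the code from /connect/authorize for an account object with an access_token."""
    resp = requests.post("https://api.nylas.com/connect/token", json={
        "client_id": client_id,
        "client_secret": client_secret,  # never expose this to the browser
        "code": code,                    # received from /connect/authorize
    })
    resp.raise_for_status()
    account = resp.json()
    return account["access_token"]       # store securely for this mailbox
```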
Accounts
An account corresponds to an email address, mailbox, and optionally a calendar. When connecting to the Nylas API, a specific access token gives you access to a specific account’s data.
The Account object
Responses for the Account object are encoded as UTF-8 JSON objects with the following attributes:
Folders vs. Labels
Messages and threads in accounts can either be organized around folders, or around labels. This depends on the backend provider. For example, Gmail uses labels whereas Yahoo! Mail uses folders.
Labels and Folders have fundamentally different semantics and these are preserved in the API. The core difference is that a message can have more than one label, versus only having one folder.
The
organization_unit attribute on the account object indicates whether the account supports folders or labels. Possible values are
folder or
label. For more details on format and usage, see the related documentation on Folders and Labels.
/account
An access token can be used to request information about the account it corresponds to. This includes information such as the end-user’s name, email address, and backend mailbox provider.
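For example, a quick way to verify a token and look up the account it belongs to (a sketch; the https://api.nylas.com base URL is an assumption):

```python
import requests

def get_account(access_token):
    # The access token is the HTTP Basic Auth username; the password is blank.
    resp = requests.get("https://api.nylas.com/account", auth=(access_token, ""))
    resp.raise_for_status()
    return resp.json()  # includes the user's name, email address, provider, sync_state, etc.
```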
Single Account Sync States
The
sync_state field for an account can take one of the following statuses:
Note: The sync_state for this individual account endpoint is a simplified set of values relative to the sync_state that is available from the Account Management API.
Account Management
These endpoints allow for account management outside the developer console interface. You can list, cancel, reactivate, and delete accounts associated with your application.
Special Authentication Needed
These endpoints use the management API domain with different authentication from the rest of the Nylas API.
Listing all accounts
Often you may want to retrieve a list of all users who have connected to your application. You can use the
/accounts endpoint within your application namespace. This will list the accounts associated with your Nylas developer application.
Default Limit
Note that the default limit is set to 100 accounts in the response object. Please see our pagination section to modify that limit.
Instead of using a connected email account’s token, you use your
client_secret, which can be found in the developer console, as the Basic Auth username.
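Here is a minimal sketch of such a request. The /a/{client_id}/accounts path and the api.nylas.com host are assumptions based on common management API conventions, so confirm the exact path in the endpoint reference for your application.

```python
import requests

CLIENT_ID = "your-app-client-id"          # hypothetical values from the developer console
CLIENT_SECRET = "your-app-client-secret"

def list_accounts(limit=100, offset=0):
    # The application client_secret (not an account token) is the Basic Auth username here.
    resp = requests.get(
        f"https://api.nylas.com/a/{CLIENT_ID}/accounts",
        auth=(CLIENT_SECRET, ""),
        params={"limit": limit, "offset": offset},
    )
    resp.raise_for_status()
    return resp.json()
```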
Responses are encoded as UTF-8 JSON objects with the following attributes.
Re-activate an account
You can re-enable cancelled accounts to make them active again using this endpoint.
Deleting an account
If you would like for an account's data to be completely removed from Nylas' servers, follow these steps to queue the account for deletion:
Please note once you follow these steps, the account will be queued for deletion. It can take up to 30 days for an account's data to be completely removed. Furthermore, if the user re-connects their account and authenticates again, their account won't be deleted.
Account Management Sync States
Sometimes, syncing cannot complete successfully for an account, or it might get interrupted. In the event of a recoverable failure, you will see a notification in the Nylas dashboard and one of the following status messages in the Accounts API:
Most sync and connection failures are temporary, but if this persists, check our status page for any known problems and/or contact support.
Credential errors
These occur when the user's account fails authorization. Usually this is because the user has changed their password, revoked their OAuth credentials, or their IMAP/SMTP or Exchange server endpoint has changed.
Without authorization, no mail operations can successfully complete. In order to make the account active again, you will need to re-authorize the user by asking them to reauthenticate by going through the authentication flow again.
Configuration Problems
Sometimes you might run into trouble when trying to sync accounts. Here is a list of common problems you might run into.
'All Mail' folder disabled
For Gmail and Google Apps accounts, the Nylas Sync Engine synchronizes the 'All Mail' folder. If a user has disabled IMAP access for this folder, synchronization will fail.
To fix it, the user needs to make sure the 'All Mail' folder has 'Show in IMAP' checked in their Gmail settings. After enabling it, re-authorize the user by restarting the authorization flow.
Full IMAP not enabled
As the Nylas Sync Engine synchronizes mail over IMAP, if IMAP access is not properly enabled for an account or a domain, synchronization will fail. This does not apply to Microsoft Exchange accounts.
The user needs to ensure IMAP is fully enabled for their account. This may involve contacting their domain administrator or hosting provider. Once it is enabled, re-authorize the user.
Connection and sync errors
If temporary connection issues persist, contact support for assistance. Outages or other unscheduled service interruptions are posted on Nylas Status.
Too many connections
Some IMAP configurations limit the number of connections which can be made. If a user has several programs accessing their email account via IMAP, they may run into this error with the Nylas Sync Engine. The resolution is for the user to close other programs which may be accessing their account via IMAP. Gmail users can check which applications they have authorized, and remove any that are no longer being used.
Threads are a first-class object, allowing you to build beautiful mail applications that behave the way users have come to expect. Actions like archiving or deleting can be performed on threads or individual messages.
Nylas threads messages together using a variety of heuristics. On Gmail and Microsoft Exchange accounts, messages will be threaded together as closely as possible to the representation in those environments. For all other providers (including generic IMAP), messages are threaded using a custom JWZ-inspired algorithm. (Open source here, for the curious.)
To load all messages for a given thread, use the messages endpoint with a thread_id filter parameter, as sketched below.
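A minimal sketch of that request (the https://api.nylas.com base URL is an assumption):

```python
import requests

def messages_in_thread(access_token, thread_id):
    # Filter the /messages endpoint by thread_id to get every message in one thread.
    resp = requests.get("https://api.nylas.com/messages",
                        auth=(access_token, ""),
                        params={"thread_id": thread_id})
    resp.raise_for_status()
    return resp.json()
```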
thread_id filter parameter.
The Thread Object
Responses from the
/threads endpoint are encoded as UTF-8 JSON objects with the following attributes:
Supported Modifications
You can make many modifications to the state of threads:
- Modify the unread status
- Star or unstar the thread
- Move the thread to a different folder
- Modify the thread's labels
To make these modifications, make an HTTP PUT request to
/threads/{id} with any combination of the body parameters specified here.
A note about thread modifications
An operation on a thread is performed on all the messages in the thread. It's a convenient shortcut to perform bulk operations on messages, which is what users have come to expect with modern mail applications.
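For example, a minimal sketch of such a bulk update in Python, marking a thread read and starring it in one call (the unread and starred body fields follow the descriptions below; the api.nylas.com base URL is an assumption):

```python
import requests

def update_thread(access_token, thread_id):
    resp = requests.put(
        f"https://api.nylas.com/threads/{thread_id}",
        auth=(access_token, ""),
        json={"unread": False, "starred": True},  # applied to every message in the thread
    )
    resp.raise_for_status()
    return resp.json()
```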
Filtering, Pagination, and Views with Threads
The threads endpoint supports Filters, Pagination, and Views, making it easy to return a subset of threads in a specific folder, from a certain address, with a specific subject, etc.
Thread Filtering
Threads support various combinations of Filters. Check all the query parameters on the /threads endpoint for more information.
Thread Pagination
By default the
/threads endpoint will return a maximum of 100 objects. You should paginate through an entire user's mailbox by using the
limit and
offset URL query parameters. See Pagination for more details about pagination in general. Check all the query parameters on the /threads endpoint for more information.
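A paging loop might look like this (a sketch; the https://api.nylas.com base URL is an assumption):

```python
import requests

def iter_threads(access_token, page_size=100):
    """Yield every thread in the mailbox, one page at a time, using limit/offset paging."""
    offset = 0
    while True:
        resp = requests.get("https://api.nylas.com/threads",
                            auth=(access_token, ""),
                            params={"limit": page_size, "offset": offset})
        resp.raise_for_status()
        page = resp.json()
        if not page:
            break
        yield from page
        offset += len(page)
```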
Thread Views
Threads support the use of Views by including the
view query parameter in your request.
The Expanded Threads View expands the threads response to contain message and draft sub-objects. Adding
view=expanded will remove
message_ids and
draft_ids, and include
messages and
drafts. Note the message and draft sub-objects do not include a
body parameter.
Unread status
The
unread attribute is set to
true if any of the thread's messages are unread. To mark all underlying messages as "read", your application should change the
unread attribute to
false on the thread. Any change to a thread's
unread status will cascade to all messages in the thread.
Changes to the unread status will propagate to the backend mailbox provider, such as Gmail or Exchange.
Starred
Changing the starred property of a thread will cause all messages in that thread to be starred or unstarred.
The
starred property in the Nylas API is equivalent to stars in Gmail, the IMAP flagged message attribute and Microsoft Exchange message "flags."
Changes to the starred value will propagate to the backend mailbox provider, such as Gmail or Exchange.
Moving a thread
Note about thread folders
The
folders attribute of a thread contains an array of folder objects. This is the union of all folders containing messages in the thread. For example, an ongoing discussion thread would likely have messages in the Inbox, Sent Mail, and perhaps the Archive folders.
Your application can move a thread to a new folder by specifying a
folder_id, and this will perform the operation on all messages within that thread. Using the Inbox folder ID will move all messages of the thread to the Inbox.
Note that messages in the sent folder are not moved via this batch action.
Modifying labels
Note about labels
Labels are a Gmail-specific way of organizing messages. A message or thread can have multiple labels, enabling more complex queries and email management workflows. The
labels attribute of a thread contains an array of label objects. This is the union of all labels on messages in the thread.
The Nylas platform lets you easily change the labels associated with a message. This change will propagate to the backend mailbox provider (Gmail).
Messages are the fundamental objects of the Nylas platform and the core building block for most email applications. In addition to fields like subject and body, they can carry files (attachments), calendar event invitations, and more.
Security notice about message bodies
Although message bodies are HTML, they are generally not safe to directly inject into a web app. This could result in global styles being applied to your app, or the execution of arbitrary JavaScript.
The Message Object
Responses from the
/messages endpoint are encoded as UTF-8 JSON objects with the following attributes:
Supported Modifications
Like with Threads, you can make many modifications to the state of messages. You can:
- Modify the unread status
- Star or unstar the message
- Move the message to a different folder
- Modify the message's labels
To make these modifications, make an HTTP PUT request to
/messages/{id} with any combination of the body parameters specified here.
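For example, filing a message into a folder while marking it read (a sketch; the folder_id and unread body fields mirror the descriptions below, and the api.nylas.com base URL is an assumption):

```python
import requests

def file_message(access_token, message_id, folder_id):
    resp = requests.put(
        f"https://api.nylas.com/messages/{message_id}",
        auth=(access_token, ""),
        json={"unread": False, "folder_id": folder_id},  # folder_id moves the message
    )
    resp.raise_for_status()
    return resp.json()
```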
The Nylas APIs expose a parsed and sanitized version of the original RFC-2822 email object, combined with state from the mail server, such as unread status and folder location. This results in a simple and universal object type that makes building applications a breeze.
We still provide access to the RFC-2822 raw message object if you want it.
Filtering, Pagination, and Views with Messages
The messages endpoint supports Filters, Pagination, and Views, making it easy to return a subset of messages in a specific folder, from a certain address, with a specific subject, etc.
Message Filtering
Messages support various combinations of Filters. Check all the query parameters on the /messages endpoint for more information.
Message Pagination
By default the
/messages endpoint will return a maximum of 100 objects. You should paginate through an entire user's mailbox by using the
limit and
offset URL query parameters. See Pagination for more details about pagination in general. Check all the query parameters on the /messages endpoint for more information.
Message Views
Messages support the use of Views by including the
view query parameter in your request.
The expanded message view exposes several additional RFC2822 headers, useful for implementing custom threading or cross-mailbox identification. Pass the
view=expanded query parameter when making requests to
/messages and
/messages/{id}
The following block is added to the message object when using the expanded view.
{ "headers": { "In-Reply-To": "<[email protected]>", "Message-Id": "<[email protected]>", "References": ["<[email protected]>"], } }
A note about expanded message view
Note that these values are unrelated to Nylas object IDs. Because they are provided by clients without validation, there is no guarantee of their accuracy, uniqueness, or consistency.
Unread status
In most systems, incoming mail is given an "unread" status to help the user triage messages. When viewing a message, it's customary for a mail app to automatically modify this "unread" attribute and remove the notification or highlight.
However, unlike its literal meaning, the unread value is mutable, and so it's possible to manually change a message back to "unread." Users will often do this as a reminder to follow up or triage a message later.
The Nylas platform lets you easily change the unread property of a message. This change will propagate to the backend mailbox provider, such as Gmail or Exchange.
Starred
Stars are set on individual messages, and a thread's star status is derived from its messages (e.g. if one message in a thread is starred, the entire thread is shown as starred).
The starred property in the Nylas API is equivalent to stars in Gmail, the IMAP flagged message attribute and Microsoft Exchange message "flags."
The Nylas platform lets you easily change the starred property of a message. This change will propagate to the backend mailbox provider, such as Gmail or Exchange.
Moving a message
Note about moving messages
Folders are a common way to organize email messages. A mailbox usually has a set of standard folders like Inbox, Sent Mail, and Trash, along with any number of user-created folders. For more information about types and capabilities, see the Folders section.
Nylas supports moving messages between folders with a single, simple API call. Note that messages can only exist within one folder.
Modifying labels
Note about modifying message labels
Labels are a Gmail-specific way of organizing messages. A message or thread can have multiple labels, enabling more complex queries and email management workflows.
The Nylas platform lets you easily change the labels associated with a message. This change will propagate to the backend mailbox provider (Gmail).
Raw message contents
Access to the RFC-2822 message object
If you need to access specific email headers or data that is not exposed by the standard message API, you can request the original raw message data as downloaded from the mail server. Setting the
Accept header to
message/rfc822 will return the entire raw message object in RFC 2822 format, including all MIME body subtypes and attachments.
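For example (a sketch; the https://api.nylas.com base URL is an assumption):

```python
import requests

def get_raw_message(access_token, message_id):
    # Asking for message/rfc822 returns the original MIME message instead of JSON.
    resp = requests.get(f"https://api.nylas.com/messages/{message_id}",
                        auth=(access_token, ""),
                        headers={"Accept": "message/rfc822"})
    resp.raise_for_status()
    return resp.content  # raw RFC 2822 bytes, suitable for the `email` stdlib parser
```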
Folders behave like normal IMAP or filesystem folders. A Message can only exist within one folder at a time, but a Thread with many messages may span several folders.
Folders are only supported on accounts for which
organization_unit is
folder. You can check whether an account supports folders via the
organization_unit property on the Account object.
Folders support basic CRUD operations outlined in the endpoints below.
Using Filters with Folders
The endpoints for Messages, Threads, and Files support Filters for folders using the
in query parameter. Simply pass a
folder_id,
name, or
display_name value when calling those endpoints. This conveniently has the same format for both labels and folders, so your application can specify a single filtering request independent of organization unit. See the Filters documentation for more details.
Nested Folders
IMAP has very limited support for nested folders: it encodes a folder's path in its name. For example, the folder
Accounting/Taxes will actually be named
Accounting.Taxes or even
INBOX.Accounting.Taxes depending on your IMAP server. To complicate things, different IMAP servers use different path separators (for example,
Taxes.Accounting on server A will be Taxes\Accounting on server B).
The Nylas API handles nested IMAP folders transparently. Creating a
Taxes/Invoices folder using the API will create a folder with the right path separators (i.e., depending on your server,
INBOX.Taxes.Invoices or
Taxes/Invoices).
Responses are encoded as UTF-8 JSON objects with the following attributes:
This endpoint will return a new folder object upon success. An error will be returned if the supplied
display_name is too long, or a conflicting folder already exists.
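A sketch of creating a folder (the /folders path and display_name body field mirror the description above; the api.nylas.com base URL is an assumption):

```python
import requests

def create_folder(access_token, display_name):
    resp = requests.post("https://api.nylas.com/folders",
                         auth=(access_token, ""),
                         json={"display_name": display_name})  # e.g. "Receipts/2017"
    resp.raise_for_status()
    return resp.json()  # the newly created folder object
```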
The
display_name attribute of a folder can be modified, and these changes will propagate back to the account provider. Note that the core folders such as INBOX, Trash, etc. often cannot be renamed.
A successful request will return the full updated folder object.
Note about deleting folders
Folders must be emptied before being deleted to prevent the accidental deletion of threads. If the requested folder is not empty, the server will respond with a Forbidden error.
Labels are equivalent to Gmail labels. Messages can have more than one label, which is popular for users who set up mail filters.
Labels are only supported on accounts for which
organization_unit is
label. You can check whether an account supports labels via the
organization_unit property on the Account object.
Labels support basic CRUD operations outlined in the endpoints below.
Using Filters with Labels
The endpoints for Messages, Threads, and Files support Filters for labels using the
in query parameter. Simply pass a label
id,
name, or
display_name value when calling those endpoints. This conveniently has the same format for both labels and folders, so your application can specify a single filtering request independent of organization unit. See the Filters documentation for more details.
Responses are encoded as UTF-8 JSON objects with the following attributes:
You can easily create new labels on accounts where the
organization_unit is set to
label. Generally this is only Gmail/Google Apps accounts.
This endpoint will return a new label object upon success. An error will be returned if the supplied
display_name is too long, or a conflicting label already exists.
The display_name attribute of a label can be modified, and these changes will propagate back to the account provider. Note that the core labels such as Inbox, Trash, etc. cannot be renamed.
A successful request will return the full updated label object.
Labels can be deleted by issuing a DELETE request to the label’s URI. A label can be deleted even if it still has associated messages.
A draft is a special kind of message which has not been sent, and therefore its body contents and recipients are still mutable. The drafts endpoints let you read and modify existing drafts, create new drafts, send drafts, and delete drafts. Draft modifications are propagated to the mailbox provider in all cases, excluding Microsoft Exchange systems.
Responses are encoded as UTF-8 JSON objects with the following attributes:
Note about creating drafts
All body parameters are optional; if they are omitted, an empty draft will still be created. A successful response will contain the newly created draft object. At least one recipient in
to,
cc, or
bcc must be specified before sending.
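A sketch of creating a simple draft follows. The participant objects with name/email keys and the api.nylas.com base URL are assumptions drawn from the message format, and the addresses are hypothetical.

```python
import requests

def create_draft(access_token):
    resp = requests.post("https://api.nylas.com/drafts",
                         auth=(access_token, ""),
                         json={
                             "subject": "Dinner on Friday?",
                             "body": "<p>Are you free around 7pm?</p>",
                             "to": [{"name": "Ben Bitdiddle",
                                     "email": "[email protected]"}],  # hypothetical recipient
                         })
    resp.raise_for_status()
    draft = resp.json()
    return draft["id"], draft["version"]  # needed later for updates and sending
```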
Attachments
Creating a draft will fail if the files with the referenced
file_ids have not been uploaded. See Files for more details on how to upload and reference attachments.
Replies
If the draft is a response to an existing message, you should provide the message's ID as a
reply_to_message_id attribute and omit the
subject parameter. Note that you still must explicitly specify the message's recipients in the
to,
cc or
bcc fields of the post body. (This is by design to prevent any ambiguity about whom the message will be sent to.)
Aliases
If you would like to use an alias for sending and/or receiving emails, you can optionally set the
from or
reply_to fields. Note that if the given address is actually not an alias, most SMTP servers will either reject the message or silently replace it with the primary sending address.
If the
from and
reply_to fields are omitted, the account's default sending name and address will be used.
The request body must contain the
version of the draft you wish to update. Other fields are optional and will overwrite previous values.
Updating a draft returns a draft object with the same
id but different version. When submitting subsequent send or save actions, you must use this new version.
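A sketch of an update call that respects the version requirement; the PUT /drafts/{id} path and auth are assumptions, and the ids are hypothetical.

```python
import requests

API_BASE = "https://api.nylas.com"   # assumed base URL
HEADERS = {"Authorization": "Bearer YOUR_ACCESS_TOKEN"}

draft_id = "2h111aefv8pzwzfykrn7hercj"   # hypothetical draft id
current_version = 0                      # version from the last create/update response

resp = requests.put(
    f"{API_BASE}/drafts/{draft_id}",
    headers=HEADERS,
    json={"version": current_version, "subject": "Updated subject line"},
)
resp.raise_for_status()
current_version = resp.json()["version"]   # always use the newest version afterwards
```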
Drafts can be deleted by issuing a DELETE request to the draft’s URI. The request body must contain a JSON object specifying the latest version, or deletion will fail. This is to prevent accidental deletion of drafts which have been updated.
The Nylas platform provides two ways to send messages: either through sending an existing draft, or by sending directly. Both systems send mail through the account's original SMTP/ActiveSync gateway, just as if they were sent using any other app. This means messages sent through Nylas have very high deliverability (i.e. not landing in Gmail's promotions tab), but may also be subject to backend provider rate-limiting and abuse detection. Make sure to send wisely!
Sending timeouts
A successful request to the send endpoint can sometimes take up to two minutes for self-hosted Exchange accounts, though the average send time is around 2 seconds. We recommend that you set a minimum timeout of 150s to ensure that you receive a response from us.
All sending operations are synchronous, meaning the request will block until the draft has succeeded or failed. In the event of failure, the sending API will not automatically retry.
We recommend that you apply backoff when HTTP 503s are returned. Your app may need to wait 10-20 minutes, or SMTP servers may continue to refuse connections for a particular account. For some providers like Gmail, there are hard limits on the number of messages you can send per day.
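One possible shape for that backoff, sketched under the assumption that drafts are sent with a POST to /send carrying the draft id and version; tune the wait times and attempt count to your own traffic.

```python
import time
import requests

API_BASE = "https://api.nylas.com"   # assumed base URL
HEADERS = {"Authorization": "Bearer YOUR_ACCESS_TOKEN"}

def send_draft(draft_id, version, max_attempts=4):
    payload = {"draft_id": draft_id, "version": version}   # assumed request shape
    for attempt in range(max_attempts):
        resp = requests.post(f"{API_BASE}/send", headers=HEADERS,
                             json=payload, timeout=150)    # generous timeout per the note above
        if resp.status_code == 503:
            # The provider is refusing connections or rate limiting; back off and retry.
            time.sleep(min(600, 30 * 2 ** attempt))
            continue
        resp.raise_for_status()
        return resp.json()
    raise RuntimeError("send kept returning 503; wait before retrying this account")
```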
If large-volume sending continues to fail for your application, we recommend switching to a transactional sending service like Mailgun, Sendgrid, Mandrill, or Amazon SES.
Sending Errors
Sometimes message delivery can fail if the user’s email gateway rejects the message. This could happen for a number of reasons, including illegal attachment data, bad credentials, or rate limiting. If your message is sent successfully, the server will respond with an HTTP response code of 200 OK. If your message couldn’t be sent, the server will respond with an appropriate error code.
In addition, the response body contains a JSON object with information about the specific error, including the following attributes:
See Errors for more information about error responses and how to handle them.
Sending inline images
You can send image attachments inline in the body of your emails. Simply reference one of the attached
file_ids in an
img tag. For example, if one of your attachments has
file_id = '472ypl5vqbnh0l2ac3y71cdks' then you could display it inline like in the example to the right. Don't forget to prepend the
file_id with
cid: for the
src attribute.
<div> <p>Before image</p> <img src="cid:472ypl5vqbnh0l2ac3y71cdks"> <p>After image</p> </div>
Setting the sender's name
Some mail servers allow you to set the sender name when sending mail. Nylas will pass along the default
name stored on the /account automatically. You can also override this default name each time you send an email by passing a
from parameter. See Sending directly for more info.
Sender name support
Not every mail server will respect this value. You might run into an issue where you set the "From" name but the message that is sent doesn't have that value. You might need to reach out to your mail provider to see what default name they use, and if they support overriding this default name.
Sending directly
Messages can be sent directly, without saving them as drafts beforehand.
Sending raw MIME
You can also send by submitting a raw MIME message object. The submitted object is entirely preserved, except the
bcc header which is removed. Additional headers used by Nylas may be added.
If the message is in reply to an existing message, you should make sure to include the
In-Reply-To and
References headers. These headers are set to the
Message-Id header of the message you are replying to and
Message-Id header of related messages, respectively. For more details, see this excellent article by djb.
A successful response will include a full new Message object.
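A sketch of a raw MIME reply built with Python's standard email library; the send path and the message/rfc822 content type are assumptions, and the addresses and Message-Id values are placeholders.

```python
import requests
from email.message import EmailMessage

API_BASE = "https://api.nylas.com"   # assumed base URL

msg = EmailMessage()
msg["Subject"] = "Re: Project timeline"
msg["To"] = "[email protected]"              # placeholder recipient
msg["In-Reply-To"] = "<[email protected]>"     # Message-Id of the message being replied to (placeholder)
msg["References"] = "<[email protected]>"
msg.set_content("Sounds good, let's ship it next week.")

resp = requests.post(
    f"{API_BASE}/send",
    headers={
        "Authorization": "Bearer YOUR_ACCESS_TOKEN",
        "Content-Type": "message/rfc822",          # assumed content type for raw MIME
    },
    data=msg.as_bytes(),
)
resp.raise_for_status()
print(resp.json()["id"])   # id of the newly created Message object
```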
The files endpoint manages data attached to messages. It allows you to download existing attachments from messages and threads, as well as upload new files to be sent. Note that before creating or modifying a draft to include an attachment, you must upload it via this API and use the returned file ID.
Actual attached files may be relatively large (upwards of 25MB), so this API has separate endpoints for requesting file Metadata and Downloading the actual file.
Files can be downloaded by appending
/download to the file metadata URI. If available, the response will include the filename in the
Content-Disposition header.
The Upload endpoint is used to transfer files to Nylas, which must be done before adding them to a draft message. Data should be sent as multipart-form data with a single field named file.
Filtering and Files
This endpoint supports Filters and Pagination, which allow you to fetch multiple files matching specific criteria and iterate through a large set of files. See the query parameters in the endpoints below to see what kind of filtering is supported.
Popular Content-Type values corresponding to popular file extensions
Here is a more complete list of Content Types.
/files
Access to file metadata
Responses are encoded as UTF-8 JSON objects with the following attributes:
/files
Uploading files
This endpoint is used to transfer files to Nylas, which must be done before adding them to a draft message. Data should be sent as multipart-form data with a single field named
file.
A successful upload will return an array with a single file object. This object's ID may be attached to a Draft by appending it to the
file_ids array of the draft object. Additionally, if the object is an image it may be included inline in the body of the email by referencing it inside an
img tag like so:
<img src="cid:file_id">. See Sending for more info about inline images.
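Putting the two steps together, a hedged sketch: upload the file as multipart-form data, then reference the returned id in a draft's file_ids array. The base URL, auth, and local filename are assumptions.

```python
import requests

API_BASE = "https://api.nylas.com"   # assumed base URL
HEADERS = {"Authorization": "Bearer YOUR_ACCESS_TOKEN"}

# 1. Upload: multipart-form data with a single field named "file".
with open("report.pdf", "rb") as fh:              # placeholder local file
    resp = requests.post(
        f"{API_BASE}/files",
        headers=HEADERS,
        files={"file": ("report.pdf", fh, "application/pdf")},
    )
resp.raise_for_status()
file_id = resp.json()[0]["id"]    # the upload returns an array with one file object

# 2. Attach: reference the id in the draft's file_ids array.
draft = {
    "subject": "Quarterly report",
    "to": [{"email": "[email protected]"}],     # placeholder recipient
    "body": "Report attached.",
    "file_ids": [file_id],
}
resp = requests.post(f"{API_BASE}/drafts", headers=HEADERS, json=draft)
resp.raise_for_status()
```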
Each account connected to Nylas can have zero or more calendars, and each calendar has a collection of individual events. The calendar object is very simple, and mostly serves as a container for events. The
read_only flag on a calendar indicates whether or not you can modify its properties or make changes to its events.
The calendar endpoint supports Pagination (although most users have only a few calendars) and Views.
Provider-backed calendars
The primary calendar of the account is usually named the same as the email address, or sometimes simply called "Calendar." Users may also have other custom calendars, or access to shared team calendars. See the Events documentation for more details.
Events can be viewed, added, modified, and deleted on calendars where
read_only is false. Changes are automatically synced back to the provider.
The "Emailed events" calendar
All accounts also include a special calendar called "Emailed events" which contains event invitations that have been sent to the user's mailbox. This calendar is read-only, meaning events cannot be added, updated, or deleted. However, the events can be RSVP'd to. See the Events documentation for details.
/calendars
Responses are encoded as UTF-8 JSON objects with the following attributes:
Events are objects within a calendar, generally supporting all features of modern scheduling apps. Using the calendar APIs, your application can schedule events, send meeting invitations, RSVP, and more.
Events supports Filtering and Pagination
The events endpoint supports filters, which allow you to fetch multiple events matching specific criteria, as well as pagination. See the Events endpoint query parameters for specific query parameters you can use to filter Events.
Event subobjects
There are various subobjects within the Event object itself. To learn more about each of these subobjects see Event Subobjects
Recurring events
Using the
expand_recurring URL parameter is an easy way to expand recurring events server-side so your application doesn't need to deal with RRULEs. Note that when using this query parameter, you must also use filters to specify a time range.
Currently, these expanded instances of recurring events are read-only. If the recurring event has individual modifications (overrides), such as a one-off time change, we will return these as individual events regardless of whether
expand_recurring is set or not.
If
expand_recurring is not set, we will return any one-off cancellations in addition to the base event, for apps that are expanding the recurrence client-side. A cancellation has the field
cancelled set to
true.
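A sketch of an expanded query; the starts_after and ends_before parameter names are assumptions, so check the Events endpoint query parameters for the exact filter names your API version supports.

```python
import requests

API_BASE = "https://api.nylas.com"   # assumed base URL
HEADERS = {"Authorization": "Bearer YOUR_ACCESS_TOKEN"}

params = {
    "expand_recurring": "true",
    "starts_after": 1514764800,   # assumed filter names; a time range is required
    "ends_before": 1517443200,
}
resp = requests.get(f"{API_BASE}/events", headers=HEADERS, params=params)
resp.raise_for_status()
for event in resp.json():
    if event.get("cancelled"):
        continue                  # skip one-off cancellations
    print(event["id"])
```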
Note about event sorting
Events are always sorted by their start date.
Responses are encoded as UTF-8 JSON objects with the following attributes:
Note about read_only and recurring event updates
Updating and deleting an event is managed in a similar fashion to other endpoints with the restriction that
read_only events cannot be updated and events cannot be updated or deleted from a
read_only calendar.
Furthermore, updates cannot be made to recurring events yet.
Events can also be deleted from the calendar. Pass the
notify_participants URL parameter to notify those who have been invited that the event has been cancelled.
Event Subobjects
A note about the `object` attribute
For each of the following event sub-objects, you might notice an
object attribute. This is an attribute that is only returned by the Nylas API to easily indicate the type of subobject. You should not send this
object attribute when submitting POST requests to the /event endpoint.
Participants
The
participants attribute is returned as an array of dictionaries corresponding to participants. These include the keys:
"participants": [ { "comment": null, "email": "[email protected]", "name": "Kelly Nylanaut", "status": "noreply" }, { "comment": null, "email": "[email protected]", "name": "Sarah Nylanaut", "status": "no" } ]
Time
The
time subobject corresponds to a single moment in time, which has no duration. Reminders or alarms would be represented as time subobjects.
{ "object": "time", "time": 1408875644 }
Timespan
A span of time with a specific beginning and end time. An hour lunch meeting would be represented as timespan subobjects.
{ "object": "timespan", "start_time": 1409594400, "end_time": 1409598000 }
Date
A specific date for an event, without a clock-based starting or end time. Your birthday and holidays would be represented as date subobjects.
{ "object": "date", "date": "1912-06-23" }
Datespan
A span of entire days without specific times. A business quarter or academic semester would be represented as datespan subobjects.
{ "object": "datespan", "start_date": "1815-12-10", "end_date": "1852-11-27" }
Recurrence
If requesting events without
expand_recurring, the events endpoint will only return the 'master' recurring event and only if it falls within the requested time range. The
recurrence attribute contains recurrence info in RRULE form; this tool is helpful in understanding the RRULE spec.
{ "rrule": [ "RRULE:FREQ=WEEKLY;BYDAY=MO" ], "timezone": "America/New_York" },
RSVPing to invitations
The RSVP endpoint allows you to send attendance status updates to event organizers. Note this is only possible for events that appear on the “Emailed events” calendar, which are calendar invitations.
If the RSVP is successful, the event object is returned with your RSVP participant status updated. This endpoint is idempotent: this means that only one email will be sent when you issue the same request with the same body multiple times.
RSVP Errors
Behind the scenes, RSVPs work by sending an email back to the event organizer in iMIP format. Therefore, errors when RSVPing to events follow the same status codes as sending errors.
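As a sketch, an RSVP could look like the following; the /send-rsvp path and the event_id/status/account_id field names are assumptions, and the ids are hypothetical.

```python
import requests

API_BASE = "https://api.nylas.com"   # assumed base URL
HEADERS = {"Authorization": "Bearer YOUR_ACCESS_TOKEN"}

rsvp = {
    "event_id": "4ee4xbnx7pxdb9g7c2f8ncyto",    # hypothetical event from "Emailed events"
    "status": "yes",                            # assumed values: yes / no / maybe
    "account_id": "eof2wrhqkl7kdwhy9hylpv9o9",  # hypothetical account id
}
resp = requests.post(f"{API_BASE}/send-rsvp", headers=HEADERS, json=rsvp)
resp.raise_for_status()
event = resp.json()
print(event["participants"])   # your participant status should now be updated
```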
The Nylas APIs provide access to the user's contacts, making it easy to add contact autocomplete, address book integration, and more to your application.
Version 2.0 only
Our contacts support got a major upgrade in v2.0 of the API. This section pertains to the functionality in v2.0 of the API. If you haven't upgraded your API version yet, check out the API Versioning section.
Contacts supports Filtering and Pagination
The contacts endpoint supports filters, which allow you to fetch multiple contacts matching specific criteria, as well as pagination. See the Contacts endpoint query parameters for specific query parameters you can use to filter Contacts.
The Contact Object
Responses are encoded as UTF-8 JSON objects with the following attributes:
- `id` (string): Globally unique object identifier
- `object` (string): A string describing the type of object (value is "contact")
- `account_id` (string): Reference to the parent account object
- `given_name` (string): Given name of the contact
- `middle_name` (string): Middle name of the contact
- `surname` (string): Surname of the contact
- `suffix` (string): Suffix (e.g. Jr., Sr., III)
- `nickname` (string): Nickname of the contact
- `birthday` (string): Birthday of the contact in the form YYYY-MM-DD
- `company_name` (string): Name of the company the contact works for
- `job_title` (string): Job title of the contact
- `manager_name` (string): Name of the manager of the contact
- `office_location` (string): Location of the office of the contact - this is a free-form field
- `notes` (string): Notes about the contact
- `picture_url` (string): The URL to the endpoint for the contact picture - see GET /contacts/<id>/picture for more details
- `im_addresses` (List[IMAddress]): A list of Instant Messaging (IM) Address objects - see IM Address for more details
- `physical_addresses` (List[PhysicalAddress]): A list of physical address objects - see Physical Address for more details
- `phone_numbers` (List[PhoneNumber]): A list of phone number objects - see Phone Number for more details
A few of the fields on the contact model have list values. These are lists of different objects: Email, IM Address, Physical Address, Phone Number, and Web Page.
Email
- `type` (string): Type of the email address. Can be `work` or `personal`.
- `email` (string): The email address. This is a free-form string.
IM Address
- `type` (string): Type of the IM address. Can be `gtalk`, `aim`, `yahoo`, `lync`, `skype`, `msn`, `icq`, or `jabber`.
- `im_address` (string): The IM address. This is a free-form string.
Physical Address
- `format` (string): Format of the address. Can be `structured` or `unstructured`. Right now, only structured addresses are supported.
- `type` (string): The type of the address. Can be `work`, `home`, or `other`.
- `street_address` (string): The street address, which includes house number and street name.
- `city` (string): The city of the address.
- `postal_code` (string): The postal code of the address.
- `state` (string): The state of the address. This can be a full name or the state abbreviation.
- `country` (string): The country of the address. This can be a full name or the country abbreviation.
Phone Number
- `type` (string): Type of the phone number. Can be `business`, `mobile`, `pager`, `business_fax`, `home_fax`, `organization_main`, `assistant`, `radio`, or `other`.
- `number` (string): The phone number. This is a free-form string.
Web Page
- `type` (string): Type of the web page. Can be `profile`, `homepage`, or `work`.
- `url` (string): The web page URL. This is a free-form string.
You can get a list of contacts by sending a
GET request to the
/contacts endpoint. This will return a list of
Contact JSON objects.
This list of contacts includes both contacts from the address book of the email provider and contacts that we auto-generate from an account's emails. To separate these out we have a
source filter. Setting
source=address_book will return only contacts from the email provider. Setting
source=inbox will return only the autogenerated contacts that Nylas creates from an account's emails.
Parameter Percent Encoding
It is important to note that parameter values must use percent-encoding (also known as URL encoding).
Exact Matches
We currently only support filtering on exact value matches in the database. That means the strings you filter on must be the exact strings stored on the contact in the database.
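A listing sketch that uses the documented source filter; requests percent-encodes parameter values for you. Base URL and auth are assumptions.

```python
import requests

API_BASE = "https://api.nylas.com"   # assumed base URL
HEADERS = {"Authorization": "Bearer YOUR_ACCESS_TOKEN"}

# Only contacts from the provider's address book; use source=inbox for
# the contacts Nylas auto-generates from the account's mail.
resp = requests.get(f"{API_BASE}/contacts", headers=HEADERS,
                    params={"source": "address_book"})
resp.raise_for_status()
for contact in resp.json():
    print(contact["id"], contact.get("given_name"), contact.get("surname"))
```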
Get a single contact by passing the contact id as a url parameter.
You can create a new contact by sending a
POST request to the
/contacts endpoint. The API will take a Contact JSON object and return a new Contact with a valid id.
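A minimal creation sketch that sticks to fields documented above; path and auth are assumptions and the values are placeholders.

```python
import requests

API_BASE = "https://api.nylas.com"   # assumed base URL
HEADERS = {"Authorization": "Bearer YOUR_ACCESS_TOKEN"}

new_contact = {
    "given_name": "Kelly",
    "surname": "Nylanaut",
    "company_name": "Nylas",
    "phone_numbers": [{"type": "business", "number": "+1 555 010 1234"}],  # placeholder
}
resp = requests.post(f"{API_BASE}/contacts", headers=HEADERS, json=new_contact)
resp.raise_for_status()
print(resp.json()["id"])   # the contact returned by the API now has a valid id
```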
You can update an existing contact by sending a
PUT request to the
/contacts/{id} endpoint. The API will take a Contact JSON object and update the contact with the data you supplied. Fields that aren’t defined in the JSON you provided won’t be updated. Fields that are defined in the JSON will be overwritten with the new values.
You can delete an existing contact by sending a
DELETE request to the
/contacts/<id> endpoint.
/contacts/{id}/picture
Some contacts have profile pictures. To get a contact's picture, send a GET request to the /contacts/<id>/picture endpoint. If a contact has a picture associated with it, a normal GET request for the contact will include a picture_url field populated with the URL to use for this picture request.
The result is the header information shown above together with the binary image data. If you write the body to a file whose extension matches the Content-Type header (jpg in this example), you can open that file and view the picture.
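A sketch of saving the picture to disk; the extension guess from Content-Type is a convenience assumption, as is the auth scheme.

```python
import requests

API_BASE = "https://api.nylas.com"   # assumed base URL
HEADERS = {"Authorization": "Bearer YOUR_ACCESS_TOKEN"}
contact_id = "9ujlsd6pxe1qbh6yhcrbnlp9x"   # hypothetical contact id

resp = requests.get(f"{API_BASE}/contacts/{contact_id}/picture", headers=HEADERS)
resp.raise_for_status()

content_type = resp.headers.get("Content-Type", "image/jpeg")
ext = "jpg" if "jpeg" in content_type else content_type.split("/")[-1]
with open(f"contact_picture.{ext}", "wb") as fh:
    fh.write(resp.content)   # binary image data
```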
Limitations for Exchange accounts
Because of the way the Exchange protocol works, we unfortunately have to restrict Exchange contacts to these arbitrary limits:
- Exchange contact phone numbers must have a non-`null` type
- Exchange contacts can only have one or two `home` or `business` phone numbers
- Exchange contacts only support three different email addresses. These addresses will have a type set to `null` by default, but you can change them to be `work` or `personal`.
- Exchange contacts only support three different IM addresses. These addresses will have a type set to `null` by default.
- Exchange contacts only support a single web page. This web page will have a type set to `null` by default.
Unsupported Phone Types
Search Overview
The search sub-endpoint is used to run a full-text search that is proxied to the account's provider. Results are matched with objects that have been synced, and are then returned.
The search endpoint returns 40 results by default. This endpoint supports Pagination so your application can request more objects, or iterate through all results.
For details on the query syntax for the most common providers, please see Google, Exchange/Outlook, Yahoo, and Generic IMAP.
Webhooks Overview
Webhooks allow your application to receive notifications when certain events occur. For example, when a new email is received, Nylas will make a
POST request to your URI endpoint letting you know information about the new message. You can specify what events you'd like to be notified about in the developer dashboard.
Need help getting started with webhooks?
The Webhook object
The webhook object links a callback URL to the types of notifications that should be sent to it. You can programmatically access information about the state of your webhook using the following urls.
A webhook can have the following states:
If Nylas has repeatedly failed to receive a
200 response from your server, your webhook will be marked as failing and we will notify you via the email associated with your Nylas developer account. If our requests continuously fail, we will also notify you when the webhook has been disabled.
Note about failed webhooks
If your webhook reaches the
failed state, you can re-activate it from the developer console. However, we will not send any past data once the webhook is re-activated. You will need to manually resync any data that was lost during that time.
Creating a Webhook
Webhooks for an application are configured from the Nylas Developer Dashboard. To configure a new webhook, go to the "Webhooks" section of your application and select "Add Webhook". You will need to provide:
The full URI for the webhook. Since this is the endpoint the Nylas servers send notifications to, the URI must be accessible from the public internet for the webhook to work. It must be an HTTPS endpoint as well. The endpoint is verified by sending a verification request.
The triggers to receive notifications for. The list of available triggers is:
You can create more than one webhook for your application; however, the webhook endpoints cannot be the same. For example, you may create a webhook to receive notifications for the
message.created trigger, and another to receive notifications for the
account.connected and
account.stopped triggers, but the webhooks must have different callback URIs.
Verification Request
Nylas will check to make sure your webhook is valid by making a GET request to your endpoint with a
challenge query parameter when you add the endpoint to the developer dashboard (or anytime you set the webhook state to active). All you have to do is return the value of the
challenge query parameter in the body of the response. Make sure you aren't returning anything other than the exact value of the challenge parameter. Your application has up to ten seconds to respond to the verification request. The verification request is not retried automatically.
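A sketch of the verification handler using Flask (the framework choice is an assumption; any web server that can echo a query parameter works):

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/webhook", methods=["GET"])
def verify_webhook():
    # Echo the challenge value verbatim, nothing else, within ten seconds.
    return request.args.get("challenge", ""), 200
```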
Enabling and disabling a Webhook
Webhooks are set to "active" by default. To disable a webhook, go to the Webhooks section of your application and change the webhook's
state to
inactive.
Receiving Notifications
Once a webhook has been successfully created, Nylas will send an HTTPS
POST request to the configured endpoint when an event of interest occurs. If multiple changes occur at the same time, they may be included in the same notification. Please note that we will send webhooks for all messages in a mailbox if the account you are connecting is new to the Nylas system.
A request that times out or results in a non-200 HTTP response code is retried once every 10 minutes, up to 60 times. Failing that, the webhook is retired and its
state set to "failed".
We strongly recommend processing your webhook data asynchronously from our
POST request to avoid timeouts. We timeout each request at 300 seconds (5 minutes).
Note about webhooks response code
You must return an exact HTTP response code of
200, otherwise Nylas will consider the POST a failure and will retry the request (even if you return a 203, for example).
Each request made by Nylas includes an
X-Nylas-Signature header. The header contains the HMAC-SHA256 signature of the request body, using your client secret as the signing key. This allows your app to verify that the notification really came from Nylas.
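A companion sketch for the POST notifications, again assuming Flask; it recomputes the HMAC-SHA256 of the raw body with the client secret and compares it to the header. The hex encoding of the signature and the exact payload field names are assumptions to verify against the notification format below.

```python
import hashlib
import hmac

from flask import Flask, abort, request

app = Flask(__name__)
CLIENT_SECRET = b"YOUR_CLIENT_SECRET"   # placeholder; keep it out of source control

@app.route("/webhook", methods=["POST"])
def receive_notification():
    expected = hmac.new(CLIENT_SECRET, request.get_data(), hashlib.sha256).hexdigest()
    supplied = request.headers.get("X-Nylas-Signature", "")
    if not hmac.compare_digest(expected, supplied):
        abort(401)   # the request was not signed with our client secret
    # Hand the deltas to a background worker and return 200 right away
    # so the request never approaches the timeout.
    payload = request.get_json(force=True)
    for delta in payload.get("deltas", []):       # assumed field name
        print(delta.get("type"), delta.get("object_data", {}).get("id"))
    return "", 200
```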
The body of the request is in UTF-8 JSON format and contains details about the event.
Notification format
The webhook endpoint will receive a list of changes for the triggers specified while creating the webhook. Note that for security reasons, the actual changes are not included. Instead, the response contains the
trigger type and object
id for each change; you can retrieve the changed object with the corresponding API endpoints.
The body of the POST request is encoded as a UTF-8 JSON object with the following attributes:
Each delta object has the following attributes:
The attributes sub-object has extra information about the
object. Currently,
attributes is only included in
object_data for
message.created triggers.
For more information about message tracking and the corresponding
metadata object type, see Message Tracking.
The Nylas Sync Engine builds a transaction log that records every change as it synchronizes your users' mailboxes. Your application can use these changes, exposed through the Delta endpoint, to build email applications that process new data quickly without fetching an index of the user's mailbox or performing a large number of API calls.
To use the Delta API, your application needs to maintain a sync
cursor, a record of the last change you successfully processed. Each time you perform a sync, you process the deltas provided by the API and update your stored
cursor.
Obtaining a Delta cursor
The first time you sync using the delta API, you need to obtain a cursor. In subsequent requests, your app will pass this cursor and receive changes (i.e. deltas) from the moment you request the cursor onwards.
A note about cursors
Note that the first time you request a cursor for an account starting its initial sync with Nylas you will receive deltas for the account's entire email history.
Requesting a set of Deltas
Each time your application syncs with Nylas, you provide the cursor indicating your position in the user's mailbox history. The API request below returns a set of JSON objects representing individual changes to the user's mailbox: folder and label changes, new messages, etc.
After processing these deltas, your application should update its stored
cursor to the value of
cursor_end in the response. The deltas endpoint often only returns a subset of deltas available, so it's important to continue requesting deltas until your application receives a value for
cursor_end that is identical to
cursor_start. This indicates that you have requested all delta events up to the current moment.
The delta for an object will condense changes to the latest version of the object since the requested cursor. This means that, if an object was modified and then deleted, only a delta representing the
"delete" event will be returned.
You can see more detail about how to request a set of Deltas here.
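A sketch of that cursor loop; the /delta and /delta/latest_cursor paths and the deltas field name in the response are assumptions.

```python
import requests

API_BASE = "https://api.nylas.com"   # assumed base URL
HEADERS = {"Authorization": "Bearer YOUR_ACCESS_TOKEN"}

def latest_cursor():
    resp = requests.post(f"{API_BASE}/delta/latest_cursor", headers=HEADERS)  # assumed helper path
    resp.raise_for_status()
    return resp.json()["cursor"]

def sync(cursor, handle_delta):
    """Request deltas until cursor_end equals cursor_start, then return the cursor to store."""
    while True:
        resp = requests.get(f"{API_BASE}/delta", headers=HEADERS, params={"cursor": cursor})
        resp.raise_for_status()
        page = resp.json()
        for delta in page.get("deltas", []):   # assumed field name
            handle_delta(delta)
        if page["cursor_end"] == page["cursor_start"]:
            return cursor                      # fully caught up
        cursor = page["cursor_end"]            # persist before requesting the next page
```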
Note about event deletions
Long-polling Delta updates
Normally a request with the latest cursor will immediately return an empty list with no deltas. However, you can use the long-polling endpoint to instruct the server to hold open the request until either new changes are available, or a timeout occurs. This behavior can be used to provide real-time updates on platforms that do not support partial response parsing, such as web browsers.
You can see more detail about how to use long-polling with Deltas here
Streaming Delta updates
If you are building a server-side application where you can parse incoming data before a request has finished, then you should use the streaming delta endpoint, which allows you to process changes in real time without polling. This will start an HTTP connection that will return deltas starting from that cursor, and then keep the connection open and continue to stream new changes.
You can see more detail about how to stream Delta updates here
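A streaming sketch with requests; the /delta/streaming path is an assumption, and each non-empty line of the response is assumed to be one JSON-encoded delta.

```python
import json
import requests

API_BASE = "https://api.nylas.com"   # assumed base URL
HEADERS = {"Authorization": "Bearer YOUR_ACCESS_TOKEN"}
cursor = "9vsuralamvmqfaw8q0mki"     # hypothetical stored cursor

with requests.get(
    f"{API_BASE}/delta/streaming",
    headers=HEADERS,
    params={"cursor": cursor},
    stream=True,
    timeout=None,        # the connection is held open indefinitely
) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines():
        if not line:
            continue     # keep-alive newlines
        delta = json.loads(line)
        print(delta)     # process or queue the change
```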
If y | https://docs.nylas.com/reference | 2018-02-18T02:51:40 | CC-MAIN-2018-09 | 1518891811352.60 | [] | docs.nylas.com |
Discovering Microsoft SQL Servers
Discovery uses the SQL SMO object to get information about all Microsoft SQL Server instances running on the same system.
Requirements
The Microsoft SQL Server probe is triggered by a running process that has a command that contains sqlservr.exe.
MID Server Host
- Install .Net 3.5 and 4 from Microsoft.
- Install the latest version of the Microsoft SQL Server management library (SMO). Note: The SMO requires the Common Language Runtime (CLR) library to be installed first. Both libraries can be downloaded from the Microsoft website.
- Install PowerShell v2.0 and above.
Microsoft SQL Server Host
- Install the Remote Registry Service on target computers running Microsoft SQL Server.
Credentials
- Add valid Windows credentials (type=Windows) to the Discovery Credentials table.
- Ensure the credentials have the public access level. See also Discovering MSSQL and Windows Credentials.
Domains
- Install the MID Server host and the Microsoft SQL Server host on the same domain or, if they are on different domains, enable a trust relationship between the domains such that users in the Microsoft SQL Server host domain are trusted by the MID Server host domain.
- If a domain trust relationship is in place, do not install the MID Server on a domain controller.
SQL Server process classifiers
Discovery uses the SQL and SQL Express process classifiers to determine which version of Microsoft SQL is present on a computer.
Discovery Dictionary entries for Microsoft SQL Discovery
Discovery writes information to the Microsoft SQL Instance table.
Configure the signature key
Configure the signature key after installing the proxy server through the Edge Encryption proxy installer.
About this task
The signature key signs changes to configurations and properties made by the proxy server. The signature key must be an asymmetric RSA key pair in a JCEKS KeyStore.
Note: If installing multiple proxies, each proxy must use the same signature key.
Procedure
1. On the Signature Key page of the Edge Encryption installer, select the keystore on the host machine to store the signature key.
   - Create New Java KeyStore: Enter the directory location, name, and password for the new keystore.
   - Use Existing Keystore: Enter the keystore file location and password.
2. Click Next.
3. Select or create a signature key.
   - New Key: Create a signature key for this proxy.
   - Use Existing Key: Use an RSA key-pair from the selected keystore.
   - Import Existing Key: Import an RSA key-pair from a different keystore. Browse to the keystore file, enter the password for the keystore, and select the key alias. Provide a new alias for the key.
4. Click Next.
set the printerSettings to string
get the printerSettings
set the printerSettings to the cSavedPrintSettings of me
put the printerSettings into url ("binfile:" & printerSettingsFile())
Use the printerSettings to get or set the device-specific settings for the current printer.
Value
The printerSettings is an opaque binary string containing the settings and printer name.
Setting the printerSettings will attempt to choose and configure the current printer with the supplied settings. If the printer is found but the settings are not valid, the printer will be chosen and configured with the default values. If the printer is not found, the result will be set to "unknown printer".
Setting the printerSettings to empty will reset the printer to the system default and all printer settings to the default for the printer.
To have an effect, this property must be set before calling open printing. | http://docs.runrev.com/Property/printerSettings | 2018-02-18T03:24:55 | CC-MAIN-2018-09 | 1518891811352.60 | [] | docs.runrev.com |