| content | url | timestamp | dump | segment | image_urls | netloc |
|---|---|---|---|---|---|---|
| stringlengths 0–557k | stringlengths 16–1.78k | timestamp[ms] | stringlengths 9–15 | stringlengths 13–17 | stringlengths 2–55.5k | stringlengths 7–77 |
Overrides SelectQueryInterface::addField
File
- core/includes/database/select.inc, line 740
Class
- SelectQueryExtender
- The base extender class for Select queries.
Code
public function addField($table_alias, $field, $alias = NULL) {
  return $this->query->addField($table_alias, $field, $alias);
}
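As a quick illustration (not part of the API reference above), a query built through an extender forwards addField calls to the wrapped query. The table, field, and extender names below are hypothetical:

```php
<?php
// Build a select query and wrap it in an extender (e.g. a pager).
$query = db_select('node', 'n')->extend('PagerDefault');

// addField() on the extender is forwarded to the underlying query;
// it returns the alias assigned to the field.
$title_alias = $query->addField('n', 'title', 'node_title');

$result = $query->limit(10)->execute();
```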
Security is a top priority for us. If you’ve discovered a security vulnerability in Blesta we hope you’ll share it with us in a responsible and discreet manner.
Reporting a Security Vulnerability
If you’ve discovered a potential security vulnerability in Blesta, please email us at . We take matters of security very seriously and will work with you to resolve the issue as quickly as possible.
If your report reveals a previously unknown and undisclosed vulnerability, and you act in good faith, allowing us reasonable time to correct the issue without publicly releasing any information, we’ll credit you by adding your name to this page.
Under no circumstance should you attempt to test for exploits in any of our live systems. Such an act is malicious, and will be treated as such, whether or not it reveals or exploits any vulnerability.
What Qualifies as a Security Vulnerability
We only consider vulnerabilities with the Blesta software product. Please (https://)
- Virendra Yadav ( ) | https://docs.blesta.com/exportword?pageId=3145841 | 2022-01-17T00:53:07 | CC-MAIN-2022-05 | 1642320300253.51 | [] | docs.blesta.com |
Create computer groups
Before you can protect and manage computers, you need to create groups for them.
To create groups:
- If Sophos Enterprise Console is not already open, open it.
- In the Groups pane (on the left-hand side of the console), ensure that the server name shown at the top is selected.
- On the toolbar, click the Create group icon. A new group is added to the list, with its name highlighted.
- Type a name for the group.
To create further groups, go to the left-hand pane. Select the server shown at the top if you want another top-level group. Select a group if you want a sub-group within it. Then create and name the group as before. | https://docs.sophos.com/esg/enterprise-console/5-5-2/help/en-us/esg/Enterprise-Console/tasks/AGCreate_computer_groups.html | 2022-01-17T01:57:41 | CC-MAIN-2022-05 | 1642320300253.51 | [] | docs.sophos.com |
Notification
React Native Push notification setup guide
Android
gradle changes
Open android/app/build.gradle and add the following at the bottom:
apply plugin: 'com.google.gms.google-services'
Open the project-level android/build.gradle and add the Google Services dependency:
dependencies {
    ......
    classpath 'com.google.gms:google-services:3.0.0'
}
FCM/GCM setup
Follow the documentation below for FCM/GCM setup.
iOS
Open the AppDelegate.m file under /ios/YOUR_PROJECT/ and add the code as described in the following documentation.
Organization concepts

Concepts about organizations such as subscriptions, users and roles, service accounts, and teams.

This section details which capabilities an organization can use, which assets a user can consume, and what actions a user can take from the Organization tab. To access the Organization tab, sign in to the Amplify platform and select Organization from the User dropdown menu.

Organizations

Let’s start from the beginning and look at what an organization is in the Amplify platform. An organization can be seen as a single instance of the Platform. An organization is uniquely identified by an Organization Identifier (Org ID). As a best practice, an organization is a company, and the teams concept (see further below) is used to have a clear separation of assets between working groups such as departments, projects, and individual users.

Subscriptions

An organization is linked to one or more subscriptions. Subscriptions define the platform capabilities that the organization is allowed to use. For example, the test organization has an Enterprise subscription to Application Development and a terminated trial subscription to Application Integration. When you sign up for a trial of the platform, an organization is automatically created and you become the administrator that manages the organization with a default list of subscriptions. When a customer purchases our platform capabilities, an Axway administrator creates an organization, assigns the correct subscriptions, and makes a user of the customer an administrator of the Platform.

Child organizations

An organization can have child organizations. These child organizations are allocated resources from the parent organization. An administrator of the parent organization can create child organizations and manage both the parent and child organizations.

Multiple organizations

A user (referred to as a member in the platform) can belong to multiple organizations. Select Switch Org from the User & Org menu to see which organizations you belong to and to switch to a different organization.

Organization users and roles

Each organization has one or more users. At least one user needs to be an administrator. If a new organization is created, the first user becomes an Administrator. An administrator can change the roles of the users, with the restriction that there always needs to be at least one user that has the Administrator role. The test organization currently contains five users.

We can distinguish the following types of roles in the Platform:

- Platform Roles - a role that applies to all the capabilities of the platform and is mutually exclusive. You can only have one platform role, such as Administrator. This role can be different per organization to which you are a member.
- Service Roles - roles that are specific to a capability such as Amplify Central or Flow Manager. These roles are not mutually exclusive. A member can, for example, have one role in Amplify Central and three roles in Flow Manager for a specific organization.
- Team Roles - roles that define what a user is allowed to do with the assets of a team. Some team roles are mutually exclusive and some are not.

The roles that you have in a specific organization can be seen on the Orgs & Roles page. The test organization shows a user who belongs to 19 organizations and has specific roles per organization.
Service accounts

A service account is a technical account that can be used by an application (not a user) to authenticate against different platform capabilities. Similar to users, service accounts can be linked to roles and can be assigned to one or more teams. A service account can have any role except for the platform roles. The authentication mechanism for a service account is different from how a user is authenticated; a different method is needed because service accounts can be used in headless operations. Service accounts authenticate with a certificate or a secret. You must have the Platform Administrator role to manage service accounts. With the Platform Developer role you are able to view the service accounts, and with the Platform Consumer role you have no access to the service accounts.

Teams

Users can belong to one or more teams, or not belong to any team at all. A team is a logical grouping of users and assets. The idea is to enable you to create teams so that certain groups of people can work together on and use the same assets. A team belongs to one organization, and the members of a team also need to be members of that organization. The same user can belong to multiple teams and can also have a different role in each team. Each organization always has a default team. When creating items such as API Proxies in Amplify Central or Unified Catalog assets, one team always needs to be chosen as the owner. Only members of the owning team can make changes or remove the items. The following is an example showing the owning team of Unified Catalog items. Unified Catalog items can be shared with other teams; the teams need to belong to the same organization. The other teams can then discover and consume those items, but they cannot make changes to them.
Bridge
The Polygon sidechain is external to Ethereum. To move ETH and other Ethereum-based tokens back and forth between these chains, one must use the Polygon Bridge.
We’ll show you step by step how to “bridge” tokens from the Ethereum network to Polygon (Matic).
First, we must navigate to the official Polygon network bridge website:
Official Polygon Bridge:
Log in with Metamask (we recommend Metamask for trading on IDEX).
This guide provides steps for Metamask only
Metamask will request your Signature to connect to the Polygon Bridge.
Click on [Sign] to continue
After signing your transaction you’ll be shown a few choices. This time we want to use the “Polygon Bridge”.
Here you’ll be able to “bridge” tokens from the Ethereum network to Polygon
Select the token and amount you want to bridge, in this example we use “Ether”.
Please note that your wallet must have enough Ether to cover the gas fees of the transaction.
Once you’ve set the amount of tokens you want to bridge, click on [Transfer]
After clicking on [Transfer] you will receive a couple of notifications, one of these showing you the estimated gas fee you’d pay for the transaction.
Read the notification and click [Continue]
Acknowledge the estimated gas fee and click [Continue]
Click [Continue] to start the bridging process
Once you've started the bridging process (as shown above) a new Metamask window will appear asking you to confirm the transaction.
Click on [Confirm] to continue.
The bridge process will begin. This could take a couple of minutes depending on network traffic and gas fees paid.
Please be patient while the transaction is confirmed
The transaction will first be “Confirmed” but not yet Completed*.
Once "Confirmed" you can close this window.
This can take a few minutes, you can close the small pop up window and watch the process complete by clicking on [Pending] at the top of the website:
The pending screen shows the status of your bridging transfer
The bridging transfer can take some time, in this example it took 19 minutes
Once completed, the tokens have been successfully bridged to Polygon!
You can now click on [Switch to Polygon] to start operating on IDEX!
IMPORTANT:
You may not immediately see your new tokens on Metamask once you open it after bridging the tokens.
The Metamask UI doesn't show your bridged tokens
Don’t panic!
You just need to “add” them to the interface. To do so, you need the address of the token you just bridged, in this case “wETH” (wrapped Ethereum). You can find the address on the Polygonscan explorer website.
Click on the three dots at the top, select [View in Explorer]
After clicking on [View in Explorer], a new browser window will open showing your wallet’s transactions. Click on [ERC-20 Token Txns] and you will see the tokens you just bridged.
You can see your bridged tokens here
Click on the token name at the right to find the smart contract address for each and copy it. Now open Metamask and click on [ADD TOKEN].
Metamask will auto-populate the fields after pasting the contract address
Paste the contract address in its respective field. The data will populate, click [Next] and you will see the token in the Metamask interface. Repeat this process for other tokens you might want to bridge such as the IDEX token.
Now you see your balances!
If you have any questions about bridging tokens to Polygon to use on IDEX please contact us using the Live-help button on our website.
The Data Migration task powers all the ELT work that takes place in Loome Integrate. Data Migrations are easy to configure and are a simple UI version of what can be done with a set of SQL Queries. You can also easily modify and fine-tune a data migration to meet your requirements.
Data Migrations simply consist of two connections, the source and the target. Once you’ve selected the Data Migration task type from the task form, the second page will prompt you to select the two connections used in the migration.
Loome Integrate will guide you through selecting the source and target connections and possibly schemas/file definitions (if the connection type supports it), the agent loading available entities such as schemas, tables, columns and other pieces of metadata as you progress through the system.
If you have created a file system file definition and want to use it as a source or target in your migration, click the file icon next to either the source or target connection field and the UI will switch to the file definition view where you can easily pick an available definition from the drop down.
If you provide a custom output schema for the migration, Loome Integrate will create the schema on migration.
Ticking the “Show Advanced Options” checkbox will display additional options you can use to configure your Data Migrations behaviour.
Enabling Parallel Data Migration will allow the Loome Integrate agent to migrate multiple tables over in parallel. This can result in faster migrations as more tables can be brought over in a shorter amount of time.
The “Max Degree of Parallelism” sets how many tables can be migrating concurrently at any given time. For example, if you set this to 4, Loome Integrate will limit itself to migrating at most 4 tables at a time; additional tables are queued and start migrating as the currently running table migrations complete.
Configuring Max Degree of Parallelism can be touchy and it is recommended that you work your way up to higher degrees of parallelism rather than choose an obscenely high number. The recommended starting value is 8.
When using a File Definition or a Blob Storage connector in a data migration task, you will have the option to choose the maximum number of rows of the flat file that will be imported. The column lengths will be determined using this sample.
If you want to use the maximum column length then please set this number to 0.
Once you’ve picked both a source and target connection, moving onto the next page will show you the migration builder. The builder splits the screen into top and bottom, source and target respectively.
Building a migration is as easy as finding the tables you wish to migrate in the top half of the screen, clicking the add button next to the table and seeing how it gets pushed to the target list in the bottom half.
If you would like to filter through a large list of tables in either of your source or target tables, just type in your filter keyword in the field above each list (as highlighted in blue below).
You have the option to choose from Table, View, Query and Detect mode.
For many of the above examples, we have used tables. When you choose to extract and load your data using tables, Loome Integrate will provide a list of the available tables in your schema.
You can select the tables you would like to migrate by clicking on the ‘+’ icon on its right. You can deselect a table by clicking on the ‘-’ button on the right of a table in the target section. This will move it back to the source table list.
You can select all tables by clicking on the select all button at the top of the list.
You can also filter your list of tables by writing your search term in the search bar at the top of each source and target list.
Loome Integrate also supports sourcing data from stored queries, commonly referred to as ‘Views’.
A database view is defined by a query whose result is presented as rows and columns selected from a database, which means a view is effectively a subset of the database.
All views that are available in your selected source schema will display under the ‘View’ tab.
You can select a view in Loome Integrate by clicking on the ‘+’ icon on its right. This view will then appear in the target section of this data migration task, ready to be extracted and loaded into your selected data target.
Again, you can select all views by clicking on the select all ‘+’ button at the top of the view list.
You can filter the list of views by writing a search term in the search bar at the top of the list of views and target list.
If you wish to use a SQL Query as a source table, click the Query tab of the Source area.
Here you are given a text editor where you can insert a SQL Query that can be used to mock a table to migrate.
Query as a Source tables are not locked to the source schema and so you will need to specify the schema you are querying from as part of the query.
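For example (an illustrative query only; the schema, table, and column names are placeholders), a Query source might look like this:

```sql
SELECT o.OrderID,
       o.CustomerID,
       o.OrderDate,
       o.TotalAmount
FROM   Sales.Orders AS o          -- schema specified explicitly
WHERE  o.OrderDate >= '2021-01-01'
```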
For instances where the tables/files in a source may grow with time, Data Migrations can be configured to use Detect Mode which ensures that additional objects that are added to the source are automatically imported with every migration.
An example of when this may be useful is importing from a folder of flat files which gets additional files every so often. Loome Integrate Online will retrieve all the available files from the folder and add them automatically to the Data Migration task.
Detect Mode also supports an optional Regex based Filter Pattern which can be used for only importing objects which match the pattern. For example if you wanted to only import objects that began with the word “Sales” you could use the pattern
^Sales.
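For reference, here are a few ordinary regular-expression patterns that could be used as a Filter Pattern (the object names are hypothetical):

```
^Sales          matches objects whose names start with "Sales"
\.csv$          matches file names ending in ".csv"
^Sales_\d{4}$   matches names such as "Sales_2021"
```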
You’ll notice that target tables have a set of actions associated with them. Besides the minus to remove the table from the target, the other two actions from left to right include:
By default, Data Migrations will drop and re-create any target tables that match the source. If Incremental Config is enabled, the migration will instead insert the new records into the existing table.
Once “Incremental” is enabled, you are given the option to define a currency column and how often this column will “refresh” with new data being brought in.
The Currency Column is the column Loome Integrate will check to determine what records need to be brought in as part of the migration and which records are considered to be already migrated. This column by default can be either a numeric value (such as a primary key) or a DateTime value.
Once a currency column is selected, you can set the conditions for how to compare the data in the source with the target based on that column. The Refresh Period Type allows for you to set what measurement you use for comparison, whilst the Refresh Period is the threshold used for determining what records shall be migrated.
If you want to do basic numeric comparison for a currency column, use “TransactionID”.
LoadDateTime fields will be stored in UTC format in your database.
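Conceptually, an incremental run with a DateTime currency column such as ModifiedDate, a Refresh Period of 1, and a Refresh Period Type of Day selects only recent source rows, roughly as sketched below. This is an illustration only; the table and column names are placeholders and the actual statement generated by the agent may differ.

```sql
SELECT *
FROM   Source.Orders
WHERE  ModifiedDate >= DATEADD(DAY, -1,
           (SELECT MAX(ModifiedDate) FROM Target.Orders))
```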
Using the Select Columns configuration, you can easily migrate specific columns from the source to the target table. This is as easy as opening up the select columns menu and checking the columns you wish to migrate.
In tasks that use connections such as Aconex V2 Application and Microfocus ALM, you may need to define columns and extract fields from a data column, such as the data column in the image below.
Once you have migrated your data, you can define its columns.
In this example we have chosen the ‘Defects’ table, and then decided to define the field ‘detection-version’ from the ‘Data’ column in the ‘Defects’ table as shown in the image above.
Click on the Select Data Column button beside the table you have selected as your target table.
Find the field you want to define in your source data column and provide its details here.
First, provide the Column Name.
You will then provide the Column Data Type of this field. This depends on the type of data of the field, such as datetime, varchar or int.
Provide the Source Field Path.
If you are using XML, you can use the XPath format for the Source Field Path.
If using JSON, you can use the format in the following example.
Please note this is case-sensitive for JSON and XML.
To get the source field path, you can follow the example of the path structure in the image below.
In the image below, ‘detection-version’ is the first field so the value is 0 as we want to pick the first instance. There is no field value for ‘detection-version’ so that is also 0. If the next field was called ‘Subject’ it would be ‘Data[1]‘, and so on for the next fields in this source data column.
Then select the Query Format. This step is optional, but if you would like to load data incrementally you will need to provide this.
This is the API query string and will be used as an incremental filter.
It must contain a
{Value} string in the query format.
The query format will differ depending on the source connection.
In the image below, we have used a Microfocus ALM connector, and it is in the format,
COLUMN[OPERATOR(Value)].
Add the column, and repeat this process if you would like to add other columns.
Save these target table columns, and you can then either add more tables or submit this task.
Next, run this task and you will have a new table with the new columns we specified above.
You can see that ‘detection-version’ now has its own column.
Once we added more columns and ran the rule, the target table also included those new columns.
You can then change the table name and set your incremental configuration using the Migration Configuration button on the right of a row.
To change the name of your target table, enter your chosen name into the Target Name field.
When you have defined data columns and would like to run an incremental data migration, you will need to use the query format that was provided above when defining data columns.
You can then select this column as your Currency Column. For this example, we will select ‘id’ as we set a query format for this column.
We also set the Refresh Period to ‘1’, and the Refresh Period Type to ‘TransactionID’ as it is an ID column.
If we were to use the column ‘creation-date’ as our currency column, we could set the incremental refresh period to 1 day.
Save and submit this task, and when it is next run it will load only newly added rows since its previous execution.
Tables can be filtered using a Target Filter. We have provided a simple interface where you can select available columns in a table and provide a value to use as your filter.
Once you have created a Data Migration task and selected your tables, there will be an option beside the table called Target Filter.
In this modal, you can select your column under Column Name from the drop-down list.
Then select your Comparison type from the drop-down list.
For the LIKE operator, Loome Integrate Online supports standard SQL wildcard patterns as explained here.
Then provide a Value to filter the columns.
Add the filter using the Add Filter button beside it.
It will appear below (if there is more than one filter it will appear in list form), and you can delete filters using the button beside it.
Save the filters and once you submit the task, you will import only the rows that are relevant to your filters.
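As an illustration, a filter of Column Name = Country, Comparison = LIKE, and Value = A% is conceptually equivalent to appending the predicate below to the extraction query (the table, column, and value are hypothetical):

```sql
SELECT *
FROM   Source.Customers
WHERE  Country LIKE 'A%'
```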
You can view how many rows were migrated in the Execution log.
You can edit filters by clicking on Edit Task and going back to the Target Filter menu.
When editing a task you can view and edit the Source Query by clicking View Query in the Target section next to your selected Source Query; in this pop-up window you can then edit the query.
Management node overview
You can use the management node (mNode) to use system services, manage cluster assets and settings, run system tests and utilities, configure Active IQ for system monitoring, and enable NetApp Support access for troubleshooting.
For clusters running Element software version 11.3 or later, you can work with the management node by using one of two interfaces:
With the management node UI (https://[mNode IP]:442), you can make changes to network and cluster settings, run system tests, or use system utilities.
With the built-in REST API UI (https://[mNode IP]/mnode), you can run or understand APIs relating to the management node services, including proxy server configuration, service level updates, or asset management.
Install or recover a management node:
Access the management node:
Perform tasks with the management node UI:
Perform tasks with the management node REST APIs:
Disable or enable remote SSH functionality or start a remote support tunnel session with NetApp Support to help you troubleshoot: | https://docs.netapp.com/us-en/element-software/mnode/task_mnode_work_overview.html | 2022-01-17T00:56:02 | CC-MAIN-2022-05 | 1642320300253.51 | [] | docs.netapp.com |
Import users
You can add new Self Service Portal users by importing a UTF-8 encoded comma-separated values (CSV) file with up to 500 users.
Note: Use a text editor for editing the CSV file. If you use Microsoft Excel, values entered may not be resolved correctly. Make sure that you save the file with extension .csv.
Tip: A sample file with the correct column names and column order is available for download from the Import users page.
To import users from a CSV file:
- On the menu sidebar, under MANAGE, click Users, and then click Import users.
- On the Import users page, select Send registration emails.
- Click Upload a file and then navigate to the CSV file containing the user accounts.
RadDataForm Overview
RadDataForm for NativeScript helps you edit the properties of a business object at runtime and build a mobile form quickly and easily. All you have to do is set a business object as the value of the source property, and RadDataForm will automatically generate editors for each property of the source object.
RadDataForm offers built-in editors for each primitive type and also has various features to help you create your desired form.
Figure 1: How RadDataForm can look on Android (left) and iOS (right)
Getting Started
The following articles contain everything you need to know to start using RadDataForm. First, you need to provide the source object. Then you can describe the properties of the source in order to use the desired editors. Finally, you need to get the result from the user's input.
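As a minimal sketch (assuming the nativescript-ui-dataform plugin is installed and the page's binding context exposes a person object, which is a hypothetical name used here for illustration), the form can be declared like this:

```xml
<!-- page.xml: the "person" binding is an assumption; editors are
     generated automatically from the bound object's properties. -->
<Page xmlns="http://schemas.nativescript.org/tns.xsd"
      xmlns:
    <df:RadDataForm
</Page>
```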
Editors
RadDataForm for NativeScript allows you to select a proper editor for each property of your source object and optionally customize it according to your preferences. You can start with the overview page of the editors which demonstrates their common usage. Then you can have a look at the complete list with available editors and if none of them fulfils your requirements, you can create your custom editors.
Groups
You can easily combine the editors in groups and optionally allow them to be collapsed. More information is available here. Once the editors are grouped, you can easily change the layout that is used for each group. More information is available here.
Validation
If you need to validate the user's input before it's committed, you can use some of the predefined validators. Here's more information about the validation in RadDataForm for NativeScript. This article contains the full list of available validators, and if they are not enough you can create your custom validators. To control when the validation is performed you can change the validation mode. Here's more about the events that you can use to get notified when validation occurs.
Image Labels
You can easily add an image to each editor that hints for its purpose instead of the default text that is displayed for each editor. You can read more about the image labels here.
ReadOnly
If you need to use the form to simply show the content of the source object without allowing the user to edit the content you can make the form read only or just disable specific editors that shouldn't allow editing. You can read more here.
Styling
You can change the style of each of the editors of RadDataForm and also the style of the group headers if grouping is enabled. You can read more about the customization options here.
A transaction is a set of operations executed as a single unit. It also can be defined as an agreement, which is carried out between separate entities or objects. A transaction can be considered as indivisible or atomic when it has the characteristic of either being completed in its entirety or not at all. During the event of a failure for a transaction update, atomic transaction type guarantees transaction integrity such that any partial updates are rolled back automatically.
Transactions have many different forms, such as financial transactions, database transactions etc.
From the ESB point of view, there are two types of transactions:
Distributed transactions
A distributed transaction is a transaction that updates data on two or more networked computer systems, such as two databases or a database and a message queue such as JMS. Implementing robust distributed applications is difficult because these applications are subject to multiple failures, including failure of the client, the server, and the network connection between the client and server. For distributed transactions, each computer has a local transaction manager. When a transaction works at multiple computers, the transaction managers interact with other transaction managers via either a superior or subordinate relationship. These relationships are relevant only for a particular transaction.
For an example that demonstrates how the transaction mediator can be used to manage distributed transactions, see Transaction Mediator Example.
Java Message Service (JMS) transactions
In addition to distributed transactions, WSO2 ESB also supports JMS transactions.
For more information on JMS transactions, see JMS Transactions. | https://docs.wso2.com/display/ESB490/Transactions | 2019-06-15T22:43:28 | CC-MAIN-2019-26 | 1560627997501.61 | [] | docs.wso2.com |
ComparableModelBase
The ComparableModelBase class extends the [ModelBase](/5.7/catel-core/data-handling/modelbase/) class with default equality comparer members. This logic has been moved to a separate class to improve the out-of-the-box performance of the ModelBase class.
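A minimal sketch of how this is typically used (the model class and property below are hypothetical, and the property registration follows Catel's usual ModelBase pattern; treat the exact signatures as assumptions):

```csharp
using Catel.Data;

public class Person : ComparableModelBase
{
    public string Name
    {
        get { return GetValue<string>(NameProperty); }
        set { SetValue(NameProperty, value); }
    }

    public static readonly PropertyData NameProperty =
        RegisterProperty(nameof(Name), typeof(string), null);
}

// With the equality comparer members provided by ComparableModelBase,
// two instances with identical property values are expected to compare
// as equal:
//   new Person { Name = "Ada" }.Equals(new Person { Name = "Ada" })
```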
More documentation should be written in the future
Have a question about Catel? Use StackOverflow with the Catel tag! | http://docs.catelproject.com/5.7/catel-core/data-handling/comparablemodelbase/ | 2019-06-15T22:43:32 | CC-MAIN-2019-26 | 1560627997501.61 | [] | docs.catelproject.com |
Wicked Code
Five Undiscovered Features on ASP.NET 2.0
Jeff Prosise
Code download available at: WickedCode0502.exe (121 KB)
Contents
Updating Browser Displays in (Almost) Real Time
Encrypted Configuration Sections
Auto-Culture Handling
Custom Expression Builders
Custom Web Events
ASP.NET 2.0 offers myriad ways to do more with less code. And with Beta 2 just around the corner, now is the time to get serious about ASP.NET 2.0.
You may have read some of the many books and magazine articles previewing the upcoming features. You might even have seen a live demo at a conference or user group meeting. But how well do you really know ASP.NET 2.0? Did you know, for example, that those wonderful $ expressions used to declaratively load connection strings and other resources can be extended to create $ expressions of your own? Did you realize that the new ASP.NET 2.0 client callback manager provides an elegant solution to the problem of keeping browser displays in sync with constantly changing data on the server? Did you know that you can encrypt sections of Web.config to prevent connection strings and other potentially injurious data from being stored in plaintext?
Just underneath the surface of ASP.NET 2.0 lies a treasure trove of new features and capabilities that have received little coverage. This installment of Wicked Code presents five of them. All the code samples were tested against Beta 1; some may require modification for Beta 2. And as usual, remember things can change as these are beta versions.
Updating Browser Displays in (Almost) Real Time
In "An Overview of the New Services, Controls, and Features in ASP.NET 2.0" in the June 2004 issue of MSDN®Magazine, I wrote about the ASP.NET 2.0 new client callback manager and demonstrated how it can be used to transmit XML-HTTP callbacks to Web servers to convert ZIP codes into city names. Dino Esposito delved more deeply into XML-HTTP callbacks in his August 2004 Cutting Edge column ("Script Callbacks in ASP.NET").
XML-HTTP callbacks enable browsers to make calls to Web servers without performing full-blown postbacks. The benefits are numerous. XML-HTTP callbacks transmit less data over the wire, thereby using bandwidth more efficiently. XML-HTTP callbacks don't cause the page to flicker because they don't cause the browser to discard the page as postbacks do. Furthermore, XML-HTTP callbacks execute less code on the server because ASP.NET short-circuits the request so that it executes the minimum amount of code necessary. Inside an XML-HTTP callback, for example, a page's Render method isn't called, significantly reducing the time required to process the request on the server.
Once they learn about the ASP.NET 2.0 client callback manager, most developers can envision lots of different uses for it. But here's an application for XML-HTTP callbacks that you might not have thought of. I regularly receive e-mail from developers asking how to create two-way connections between browser clients and Web servers. The scenario generally involves an ASP.NET Web page displaying data that's continually updated on the server. The goal is to create a coupling between the browser and the Web server so that when the data changes on the server, it's automatically updated on the client, too.
It sounds reasonable enough, but transmitting asynchronous notifications from a Web server to a browser is a nontrivial problem. However, XML-HTTP callbacks provide a handy solution. Rather than try to contrive a mechanism for letting a Web server send notifications to a browser, you can have the browser poll the server in the background using efficient XML-HTTP callbacks. Unlike META REFRESH tags, an XML-HTTP solution causes no flashing in the browser, producing a superior user experience. And unlike more elaborate methods that rely on maintaining open ports, an XML-HTTP-based solution, properly implemented, has minimal impact on scalability.
The Web page shown in Figure 1 demonstrates how dynamic page updates using XML-HTTP callbacks work. To see for yourself, launch the page called DynamicUpdate.aspx in your browser. Then open the file named Stocks.xml in the Data directory. The fictitious stock prices displayed in DynamicUpdate.aspx come from Stocks.xml. Now change one of the stock prices in Stocks.xml and save your changes. After a brief pause (two or three seconds on average), the browser's display updates to reflect the change.
Figure 1** Dynamic Page Updates **
What's the magic that allowed the update to occur? XML-HTTP callbacks, of course. Figure 2 lists the source code for the codebehind class that serves DynamicUpdate.aspx. Page_Load uses the new ASP.NET 2.0 Page.GetCallbackEventReference method to obtain the name of a function it can call to initiate a callback. Then it registers its own __onCallbackCompleted function, which uses client-side script to update the client's display, to be called when a callback returns. Finally, it registers a block of startup script that uses window.setInterval to call the function that initiates callbacks—the function whose name was returned by GetCallbackEventReference. Once the page loads, it polls the server every five seconds for updated data. Any changes made to the data on the server appear in the browser after a short delay.
Figure 2 DynamicUpdate.aspx.cs

using System;
using System.Data;
using System.Web.UI;

public partial class DynamicUpdate_aspx : ICallbackEventHandler
{
    static readonly string _script1 =
        "<script type=\"text/javascript\">\n" +
        "function __onCallbackCompleted (result, context)\n" +
        "{{\n" +
        "var args = result.split (';');\n" +
        "var gridView = document.getElementById('{0}');\n" +
        "gridView.rows[1].cells[1].childNodes[0].nodeValue = args[0];\n" +
        "gridView.rows[2].cells[1].childNodes[0].nodeValue = args[1];\n" +
        "gridView.rows[3].cells[1].childNodes[0].nodeValue = args[2];\n" +
        "}}\n" +
        "</script>";

    static readonly string _script2 =
        "<script type=\"text/javascript\">\n" +
        "window.setInterval (\"{0}\", 5000);\n" +
        "</script>";

    void Page_Load(object sender, EventArgs e)
    {
        // Get a callback event reference
        string cbref = GetCallbackEventReference(this, "null",
            "__onCallbackCompleted", "null", "null");

        // Register a block of client-side script containing
        // __onCallbackCompleted
        RegisterClientScriptBlock("MyScript",
            String.Format(_script1, GridView1.ClientID));

        // Register a block of client-side script that launches
        // XML-HTTP callbacks at five-second intervals
        RegisterStartupScript("StartupScript", String.Format(_script2, cbref));
    }

    // Server-side callback event handler
    string ICallbackEventHandler.RaiseCallbackEvent(string arg)
    {
        // Read the XML file into a DataSet
        DataSet ds = new DataSet();
        ds.ReadXml(Server.MapPath("~/Data/Stocks.xml"));

        // Extract the stock prices from the DataSet
        string amzn = ds.Tables[0].Rows[0]["Price"].ToString();
        string intc = ds.Tables[0].Rows[1]["Price"].ToString();
        string msft = ds.Tables[0].Rows[2]["Price"].ToString();

        // Return a string containing all three stock prices
        // (for example, "10.0;20.0;30.0")
        return (amzn + ";" + intc + ";" + msft);
    }
}
Currently, DynamicUpdate.aspx.cs doesn't use bandwidth as efficiently as it could because every callback returns all three stock prices, even if the data hasn't changed. You could make DynamicUpdate.aspx.cs more efficient by modifying it to return only the data that has changed and to return nothing at all if no prices have changed. Then you'd have the best of both worlds: a scalable, lightweight mechanism for detecting updates on the server, and one that transmits only as much information as it must and not a single byte more. That's a win no matter how you look at it.
Encrypted Configuration Sections
ASP.NET 1.x texts frequently advise developers to put database connection strings in the <appSettings> section of Web.config. Doing so makes retrieving connection strings easy, and it centralizes the data so that changing a connection string in one place propagates the change throughout the application. Unfortunately, ASP.NET 1.x has no built-in support for encrypting connection strings (or any other data, for that matter) in Web.config. That leaves programmers with a Faustian choice: store connection strings in plaintext where they're vulnerable to hackers, or store them in encrypted form and write lots of code to decrypt them after you've already fetched them.
One of the tenets of writing secure ASP.NET code is to avoid storing as plaintext any secrets, passwords, connection strings, or other data that could be misused if divulged. To prevent such risky behavior, ASP.NET 2.0 lets you encrypt individual sections of Web.config. Encryption is transparent to the application. You don't have to do anything special to read an encrypted string from Web.config; you just read it as normal and if it's encrypted, it's automatically decrypted by ASP.NET. Not a single line of custom code is required. ASP.NET also offers you a choice of two encryption modes. One uses triple-DES encryption with a randomly generated key protected by RSA; the other encryption mode uses triple-DES encryption as implemented by the Windows® Data Protection API (DPAPI). You can add support for other encryption techniques by plugging in new data protection providers. Once encrypted, data stored in Web.config remains theoretically secure even if the Web server is compromised and the entire Web.config file falls into the wrong hands. Without the decryption key, the data can't be decoded.
Sometime before ASP.NET 2.0 ships, the ASP.NET page of the IIS Microsoft Management Console (MMC) snap-in will probably be upgraded with a GUI for encrypting and decrypting sections of Web.config. Beta 1 lacks such a tool. In fact, most ASP.NET developers I've talked to aren't even aware that Web.config supports encrypted configuration sections.
The good news is that you don't have to wait for tool support to take advantage of encrypted configuration sections. The new ASP.NET 2.0 configuration API has methods you can use to build a tool of your own. One call to ConfigurationSection.ProtectSection is sufficient to encrypt a configuration section; a subsequent call to ConfigurationSection.UnProtectSection decrypts it. Following a successful call to either method, you call Configuration.Update to write changes to disk. (Configuration.Update will probably be renamed to Configuration.Save in Beta 2.) Additionally, the aspnet_regiis.exe tool in Beta 1 provides some command-line support (see the -p* options).
The page in Figure 3 offers a simple UI for encrypting and decrypting the <connectionStrings> section of Web.config. If you'd like, you can modify it to support encryption of other configuration sections, too. To encrypt <connectionStrings>, simply click the Encrypt button. To decrypt, click Decrypt (tricky, huh?).
Figure 3 Encrypting and Decrypting <connectionStrings>
ProtectSection.aspx
<%@ Page <head runat="server"> <title>Untitled Page</title> </head> <body> <form id="form1" runat="server"> <div> <asp:Button <asp:Button </div> </form> </body> </html>
ProtectSection ProtectSection_aspx { void Button1_Click(object sender, EventArgs e) { Configuration config = Configuration.GetWebConfiguration( Request.ApplicationPath); ConfigurationSection section = config.Sections["connectionStrings"]; section.ProtectSection ("DataProtectionConfigurationProvider"); config.Update (); } void Button2_Click(object sender, EventArgs e) { Configuration config = Configuration.GetWebConfiguration( Request.ApplicationPath); ConfigurationSection section = config.Sections["connectionStrings"]; section.UnProtectSection (); config.Update(); } }
Figure 4 shows the <connectionStrings> section of the accompanying Web.config file before and after encryption (note that the encrypted string is really more than a thousand characters long and has been excerpted here). You should also notice the <protectedData> section added to Web.config containing information needed to decrypt the connection strings. Significantly, <protectedData> doesn't contain the decryption key. When the Windows DPAPI is used to perform the encryption as it is here, the decryption key is autogenerated and locked away in the Windows Local Security Authority (LSA).
Figure 4 <connectionStrings> Before and After Encryption
connectionStrings Before Encryption
<connectionStrings> <add name="Pubs" connectionString=" Server=localhost;Integrated Security=True;Database=Pubs" providerName="System.Data.SqlClient" /> <add name="Northwind" connectionString="Server=localhost;Integrated Security=True;Database=Northwind" providerName="System.Data.SqlClient" /> </connectionStrings>
connectionStrings After Encryption
<connectionStrings> <EncryptedData> <CipherData> <CipherValue>AQAAANCMnd8BfdERjHoAw ...</CipherValue> </CipherData> </EncryptedData> </connectionStrings> <protectedData> <protectedDataSections> <add name="connectionStrings" provider="DataProtectionConfigurationProvider" /> </protectedDataSections> </protectedData>
ASP.NET 2.0 permits all but a handful of configuration sections to be encrypted. The <httpRuntime> section, for example, doesn't support encryption because it's accessed by the tiny fraction of ASP.NET that's built from unmanaged code. But everything that matters can be encrypted, and by using encrypted configuration sections judiciously, you can erect an additional barrier for hackers attempting to steal secrets from your site.
Auto-Culture Handling
Developers charged with the task of localizing Web sites in ASP.NET 1.x often found themselves writing code into Global.asax to inspect Accept-Language headers and attach CultureInfo objects representing language preferences to the threads that handle individual requests. The code they wrote to do that frequently took the form shown in Figure 5.
Figure 5 Attach Language Preference to Incoming Requests
void Application_BeginRequest (Object sender, EventArgs e) { try { if (Request.UserLanguages.Length > 0) { CultureInfo ci = CultureInfo.CreateSpecificCulture( Request.UserLanguages[0]); Thread.CurrentThread.CurrentCulture = ci; Thread.CurrentThread.CurrentUICulture = ci; } } catch (ArgumentException) { // Do nothing if CreateSpecificCulture fails } }
A new feature of ASP.NET called auto-culture handling obviates the need for such code. Auto-culture handling is enabled for individual pages by including Culture="auto" and UICulture="auto" attributes in the @ Page directive, or for an entire site by enabling it in the <globalization> element in Web.config. However you choose to enable it, auto-culture handling has an interesting effect: it maps Accept-Language headers to CultureInfo objects and attaches them to the current thread, just like the code in Figure 5.
To demonstrate, check out the page in Figure 6. Its output consists of a Calendar control and a text string showing today's date. The latter is generated by a call to DateTime.ToShortDateString. If a user who has configured her browser to transmit Accept-Language headers specifying French as her preferred language visits the page, she sees the page depicted in Figure 7. If auto-culture handling were not enabled, the page would appear no different to French users than it would to other users. The key is the @ Page directive turning auto-culture handling on. ASP.NET does the hard part; you do the rest.
Figure 6 Auto-Culture Handling
AutoCulture.aspx
<%@ Page <asp:Calendar</h2> </form> </body> </html>
AutoCulture.aspx.cs
using System; using System.Web; using System.Web.UI; using System.Web.UI.WebControls; using System.Web.UI.HtmlControls; public partial class AutoCulture_aspx { void Page_Load (object sender, EventArgs e) { Label1.Text = "Today's date is " + DateTime.Now.ToShortDateString(); } }
Figure 7** Auto-Culture Handling Enabled **
Obviously, there's more to localizing an entire Web site than simply enabling auto-culture handling. Auto-culture handling does not, for example, localize static Web site content. But ASP.NET 2.0 offers other new localization features as well, including the new <localize> tag for localizing static content and $ Resources expressions for loading localization resources declaratively. Put simply, ASP.NET 2.0 makes it dramatically easier than ASP.NET 1.x to provide localized content to international users.
Custom Expression Builders
ASP.NET 2.0 developers are encouraged to store database connection strings in the <connectionStrings> section of the registry. Connection strings stored that way can be loaded declaratively, as demonstrated by the following tag declaring a SqlDataSource:
<asp:SqlDataSource ...
"<%$...%>" is a new expression type in ASP.NET. It can also be used to load resources with statements like this one:
<asp:Literal
And it can be used to load strings from the <appSettings> configuration section, as shown here:
<asp:Literal
What is less widely known is that $ expressions are extensible. That is, you can add support for $ expressions of your own by writing custom expression builder classes. System.Web.Compilation.ExpressionBuilder in the Framework provides the basic plumbing needed and can be derived from in order to create custom expression builders.
Here is the source code for a simple page named Version.aspx:
<%@ Page </h1> </body> </html>
The page's output shows the version of ASP.NET that's running. The version number is generated by this expression:
<%$ Version:MajorMinor %>
On its own, ASP.NET has no idea what to do with this expression. It works because the application's Code directory contains the source code for a custom expression builder named VersionExpressionBuilder (shown in Figure 8). VersionExpressionBuilder derives from System.Web.Compilation.ExpressionBuilder and overrides one virtual method, GetCodeExpression, which is called at run time by ASP.NET to evaluate the $ Version expression.
Figure 8 VersionExpressionBuilder.cs
using System; using System.Web.UI; using System.Web.Compilation; using System.CodeDom; public class VersionExpressionBuilder : ExpressionBuilder { public override CodeExpression GetCodeExpression( BoundPropertyEntry entry, object parsedData, ExpressionBuilderContext context) { string param = entry.Expression; if (String.Compare(param, "All", true) == 0) return new CodePrimitiveExpression (String.Format("{0}.{1}.{2}.{3}", Environment.Version.Major, Environment.Version.Minor, Environment.Version.Build, Environment.Version.Revision)); else if (String.Compare(param, "MajorMinor", true) == 0) return new CodePrimitiveExpression (String.Format("{0}.{1}", Environment.Version.Major, Environment.Version.Minor)); else throw new InvalidOperationException ("Use $ Version:All or $ Version:MajorMinor"); } }
The Expression property of the BoundPropertyEntry parameter passed to GetCodeExpression contains the text to the right of the colon in the expression: in this particular case, "MajorMinor." GetCodeExpression responds by returning a CodePrimitiveExpression encapsulating the string "2.0". If you write the $ Version expression this way instead
<%$ Version:All %>
then the string returned contains build and revision numbers as well as major and minor version numbers.
Custom expression builders must be registered and mapped to expression prefixes so that ASP.NET knows what class to instantiate when it encounters a $ expression. Registration is accomplished by adding an <expressionBuilders> section to Web.config's <compilation> section, as shown here:
<!-- From Web.config --> <compilation> <expressionBuilders> <add expressionPrefix="Version" type="VersionExpressionBuilder"/> </expressionBuilders> </compilation>
Now that you know about custom expression builders, you can probably envision other uses for them. Imagine, for example, $ XPath expressions that extract data from XML files, or $ Password expressions that retrieve passwords or other secrets from ACLed registry keys. The possibilities are endless.
Custom Web Events
One of the major new services featured in ASP.NET 2.0 is the one provided by the health monitoring subsystem. With a few simple statements in Web.config, an ASP.NET 2.0 application can be configured to log failed logins, unhandled exceptions, expired forms authentication tickets, and more.
Logging is accomplished by mapping the "Web events" fired by the health monitoring subsystem to Web event providers. Each provider corresponds to a specific logging medium. For example, the built-in EventLogProvider logs Web events in the Windows event log, while SqlWebEventProvider logs them in a SQL Server™ database. Other providers supplied with ASP.NET 2.0 permit Web events to be transmitted in e-mail messages, redirected to the WMI subsystem, and even forwarded to registered trace listeners.
Out of the box, the health monitoring subsystem presents a world of possibilities for monitoring the health and well-being of running ASP.NET applications and for leaving paper trails for use in failure diagnostics. But what's really great about health monitoring is that it is entirely extensible. You can define custom Web events and fire them at appropriate junctures in an application's lifetime. Imagine that your application fired a Web event every time database contention resulted in a concurrency error. You could map these events to a provider and check the log at the end of each day to detect and correct excessive concurrency errors. Or what if a financial app fired a Web event every time it performed a monetary transaction? You could keep a running log of all such transactions simply by adding a few statements to Web.config.
The page shown in Figure 9 fires a custom Web event each time you click the "Fire Custom Web Event" button. The custom Web event is simple: it notifies any providers that are connected to it that the button was clicked.
Figure 9 Firing Custom Web Events
WebEvent.aspx
<%@ Page <asp:Button </form> </body> </html>
WebEvent.aspx.cs
using System; using System.Web; using System.Web.Management; public partial class WebEvent_aspx { void Button1_Click (object sender, EventArgs e) { MyWebEvent mwe = new MyWebEvent ("Click!", null, 100001, DateTime.Now); WebBaseEvent.Raise (mwe); } }
Figure 10 shows how the event appears in the Windows Event Viewer if the event, which is named simply "MyWebEvent," is mapped to the Windows event log.
Figure 10** Windows Event Viewer **
How do custom Web events work? You begin by defining a custom Web event class by deriving from System.Web.Management.WebBaseEvent, as shown in Figure 11.
Figure 11 MyWebEvent.cs
using System; using System.Web.Management; public class MyWebEvent : WebBaseEvent { DateTime _time; public MyWebEvent (string message, object source, int eventCode, DateTime time) : base (message, source, eventCode) { _time = time; } public override void FormatCustomEventDetails( WebEventFormatter formatter) { formatter.AppendLine ("Button clicked at " +_time.ToString()); } }
Derived classes typically override FormatCustomEventDetails, which gives them the opportunity to append output of their own to the output generated by the base class. The base class's output contains key statistics about the event, such as its name and the time and date it was fired. MyWebEvent adds a line of its own—"Button clicked at [date and time]"—that appears at the end of the log entry. After you've defined a custom Web event in this manner, you fire it by instantiating it and passing it to the static WebBaseEvent.Raise method, as seen in Figure 11.
You must register custom Web events before firing them. The following code shows the registration entries in Web.config for MyWebEvent:
<!-- From Web.config -->
<healthMonitoring enabled="true">
  <eventMappings>
    <add name="My Web Events" type="MyWebEvent, __code" />
  </eventMappings>
  <rules>
    <add name="My Web Events" eventName="My Web Events" provider="EventLogProvider" />
  </rules>
</healthMonitoring>
The enabled="true" attribute in the <healthMonitoring> tag enables the health monitoring subsystem. The <eventMappings> section defines an event named "My Web Events" and maps it to the MyWebEvent class. Finally, the <rules> section maps MyWebEvent to EventLogProvider, directing instances of MyWebEvent to the Windows event log. If you wanted to log MyWebEvents in other storage media, you could do so simply by changing the provider specified in <rules>.
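For example, assuming the default provider registrations in the machine-level configuration, pointing the same rule at the built-in SQL provider is enough to send the events to SQL Server instead:
<rules>
  <add name="My Web Events" eventName="My Web Events" provider="SqlWebEventProvider" />
</rules>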
One nuance to be aware of regarding custom Web events is that in Beta 1, you need to add the source code file containing the custom Web event class to the application's Code directory and run the application once before registering the event and mapping it to a provider in Web.config. Otherwise, ASP.NET complains that the Web event class is undefined, apparently because ASP.NET parses Web.config before its autocompilation engine gets a chance to compile the files in the Code directory. I presume this will be fixed in Beta 2, but it's only a minor annoyance if you stage deployment so that autocompilation happens first.
ASP.NET 2.0 is loaded with new features designed to make building cutting-edge Web apps easier and less time consuming. But beyond the features you read about, a host of "lesser" features makes ASP.NET 2.0 more powerful and more extensible than its predecessors. Exploiting these hidden gems is one of the keys to writing great ASP.NET 2.0 code.
Send your questions and comments for Jeff to [email protected].
Jeff Prosise is a contributing editor to MSDN Magazine and the author of several books, including Programming Microsoft .NET (2002, Microsoft Press). He's also a cofounder of Wintellect, a software consulting and education firm that specializes in Microsoft .NET. | https://docs.microsoft.com/en-us/archive/msdn-magazine/2005/february/wicked-code-five-undiscovered-features-on-asp-net-2-0 | 2019-12-06T04:07:40 | CC-MAIN-2019-51 | 1575540484477.5 | [array(['images/cc163849.fig01.gif', 'Figure 1 Dynamic Page Updates'],
dtype=object)
array(['images/cc163849.fig07.gif',
'Figure 7 Auto-Culture Handling Enabled'], dtype=object)
array(['images/cc163849.fig10.gif', 'Figure 10 Windows Event Viewer'],
dtype=object) ] | docs.microsoft.com |
Support Report Email Submission
By providing your email address in the space above, a copy of the support report received by SoftNAS support will also be sent to you, to allow you to participate in the support process, and have on hand a frame of reference for a given solution or explanation.
Send Support Report
Click send to send your support logs to SoftNAS support.
Note: You can also generate a support report via command line, either through SoftNAS' Command Shell (accessed via General System Settings, and the Webmin Panel, and expanding Others) or by connecting to your instance via SSH, and running the following command:
su -l root -c "curl | php -- [email protected]" | https://docs.softnas.com/display/SD/Support+Tab | 2019-12-06T03:00:42 | CC-MAIN-2019-51 | 1575540484477.5 | [] | docs.softnas.com |
Migrate XL Deploy data storage to an SQL database
When you upgrade to XL Deploy 8.0.x or higher from an earlier version, the data stored in XL Deploy must be converted from the JackRabbit (JCR) format to SQL format.
XL Deploy stores data and user-supplied artifacts such as scripts and deployment packages (
jar and
war files) in the database on the file system or on a database server. XL Deploy can use one of these options at any given time only, so you must configure the database correctly before using XL Deploy in a production setting. The setting can be configured in the
XL_DEPLOY_SERVER_HOME/conf/xl-deploy.conf file.
Database overview
By default, XL Deploy uses an internal database that stores data on the file system. This configuration is intended for temporary use and is not recommended for production use.
Database permissions
When you upgrade XL Deploy to a new version, XL Deploy creates and maintains the database schema. The database administrator requires full permissions on the database.
Table definitions
Table definitions in XL Deploy use limited column sizes. You must configure this for all supported databases to prevent these limits from restricting users in how they can use XL Deploy.
For example, the ID of a configuration item (CI) is a path-like structure that consists of the concatenation of the names of all parent folders for the CI. A restriction is set on the length of this combined structure. For most Relational Database Management Systems (RDBMSes), the maximum length is 2000. For MySQL and MS SQL Server, the maximum length is 1024.
Note: For MySQL, XL Deploy requires the Barracuda file format for InnoDB. This is the default format in MySQL 5.7 or later and can be configured in earlier versions.
Repository and reporting database connections
In the
XL_DEPLOY_SERVER_HOME/conf/xl-deploy.conf file, you can configure XL Deploy to use different database connections:
- One for primary XL Deploy data (under the
repositorykey)
- One for the task archive (under the
reportingkey)
The default configuration for the repository database connection is also used for the reporting connection.
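A minimal sketch of such a configuration is shown below. The nesting follows the bundled xl-deploy.conf.example, but treat the exact property names, driver, and URLs as illustrative and adapt them to your database:
xl {
    repository {
        database {
            db-driver-classname = "com.mysql.jdbc.Driver"
            db-url = "jdbc:mysql://dbhost:3306/xldrepo"
            db-username = "xld"
            db-password = "secret"
        }
    }
    reporting {
        database {
            db-driver-classname = "com.mysql.jdbc.Driver"
            db-url = "jdbc:mysql://dbhost:3306/xldarchive"
            db-username = "xld"
            db-password = "secret"
        }
    }
}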
Upgrade from XL Deploy 7.5.x to 8.0.x
If you are upgrading from XL Deploy version 7.5.x to 8.0.x, the task archive database is not migrated. Depending on your XL Deploy setup, modify the
XL_DEPLOY_SERVER_HOME/conf/xl-deploy.conf file to point to the existing task archive database in 7.5.x:
If the task archive on your XL Deploy 7.5.x instance is stored in an embedded database and you upgrade to version 8.0.x, in the
XL_DEPLOY_SERVER_HOME/conf/xl-deploy.conffile, make sure you specify the same embedded database in the
reportingkey. If you specify an external database server for the
reportingkey, the task archive from version 7.5.x will not be accessible after upgrading to 8.0.x.
If the task archive on your XL Deploy 7.5.x instance is stored in an external database server and you upgrade to version 8.0.x, in the
XL_DEPLOY_SERVER_HOME/conf/xl-deploy.conffile, make sure you specify the same external database server in the
reportingkey. If you specify a different external database server for the
reportingkey, the task archive from version 7.5.x will not be accessible after upgrading to 8.0.x.
Supported migration scenarios
This section describes migration scenarios to versions 8.0.x and 8.1.x. For moving artifacts in later versions, see Move artifacts from the file system to a database.
Depending on your data storage configuration in XL Deploy pre 8.0.0 version, there are two supported migration scenarios:
The user-supplied artifacts are stored in a folder in the JCR repository. This is the default configuration. If you are trying to migrate to an SQL database, the structure must be maintained. The migrated artifacts must be stored on the file system.
- Configure the settings for the artifacts in the
XL_DEPLOY_SERVER_HOME/conf/xl-deploy.conffile.
- Specify the type of artifact storage to use. Use the specified file system location for storing artifacts -
file.
Set the location for artifact storage on the file system:
artifacts { type = "file" root = ${xl.repository.root}"/artifacts" }
The repository will be stored in an external SQL database.
XL Deploy has a custom configuration to store the JCR repository in an external database. If you want to migrate to an SQL database, the structure must be maintained. The migrated artifacts must be stored in an external SQL database.
- Configure the settings for the artifacts in the
XL_DEPLOY_SERVER_HOME/conf/xl-deploy.conffile.
- Specify the type of artifact storage to use as
db, and use the database for storing artifacts.
Set the location for the artifacts to be stored in the database.
artifacts { type = "db" }
Unsupported migration scenarios
The migration from a JCR setup where the artifacts are stored in a separate folder on the file system to a configuration where the artifacts are stored on an external SQL database server is not supported.
The migration to SQL from a custom setup where the artifacts are stored in the JCR repository on an external database to a configuration where the artifacts are stored in the file system is not supported.
Requirements
To migrate XL Deploy data storage to an SQL database, you must:
- Configure a database connection in the
XL_DEPLOY_SERVER_HOME/conf/xl-deploy.conffile. A sample file is available at
XL_DEPLOY_SERVER_HOME/conf/xl-deploy.conf.example. If you do not provide a custom database configuration in
xl-deploy.conf, XL Deploy uses the default configuration.
- Place a JDBC
jarlibrary that is compatible with your selected database in the
XL_DEPLOY_SERVER_HOME/libfolder.
Upgrade to XL Deploy 8.0.0 or later
The upgrade process has two stages:
During the upgrade, basic data (metadata, security related data, and CI data) is migrated to SQL format. This stage must be completed successfully before you can use XL Deploy.
Note: As of version 8.0.1, the migration process can be restarted during this first stage. If the process stops due to any issue, the migration can be restarted and it will continue from where it stopped. You are not required to perform a manual clean up of the partially migrated data. For example, when a database error occurs because a property value could not be written, the migration does not fail. The failed property is logged and you can manually handle the value later.
During normal operation of XL Deploy, CI change history data, which is primary used to compare CIs, is migrated to the SQL format. This operation is executed slowly, in small batches, to minimize the impact on the performance of XL Deploy. During the migration, CI change history data will become available to the system. Functionality that relies on CI change history data will not be able to access that data until the migration is complete, all other functionality will operate normally.
Upgrade instructions
The upgrade process first applies required upgraders to the JCR repository, and then migrates data from JCR to SQL format. If you have not already migrated archived tasks as part of the upgrade to XL Deploy 7.5.x, that migration will also run during the upgrade process.
Important: Before you upgrade to a new version of XL Deploy, create a backup of your repository. For more information, see Back up XL Deploy.
To perform the upgrade:
Follow the normal upgrade procedure as far as step 13. Do not start the XL Deploy server.
Download the XL Deploy JCR-to-SQL Migrator (
xl-deploy-x.x.x-jcr-to-sql-migrator.zip) from the XebiaLabs Software Distribution site. Note: Login to the XebiaLabs Software Distribution site requires customer login.
Extract the Migrator ZIP file into the XL Deploy installation folder, so that its directories are merged with those of the XL Deploy installation. For example:
cd xl-deploy-8.0.0-server
unzip ~/Downloads/xl-deploy-8.0.0-jcr-to-sql-migrator.zip
Ensure that the settings used to configure the JCR repository in the previous installation are copied to the new installation, and that there are no changes in how JCR is configured. After the data is completely migrated the JCR configuration will no longer be used.
Ensure that the database connections are configured correctly in
XL_DEPLOY_SERVER_HOME/conf/xl-deploy.conf. We recommended that you use a different database or user, possibly on the same database instance, for the new SQL storage.
Follow the normal upgrade procedure, as described in upgrade procedure, from step 14.
Configuration options
You can add these options to your configuration file:
xl.migration.ci.stateFile =
xl.migration.ci.errorLogLocation =
Use the
stateFile property to specify a file in which to record the state of the migration. The process must have write access to the file. The file size depends on the size of the repository that is being migrated. The size will be approximately 80 bytes per CI that is migrated.
The
errorLogLocation property must be a folder where the process can write data on CIs that were not migrated successfully. The process must have write access to the folder. It will attempt to create the folder if it does not exist.
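For example (illustrative values; any locations writable by the XL Deploy process will do):
xl.migration.ci.stateFile = "repository/migration.dat"
xl.migration.ci.errorLogLocation = "log/migration-errors"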
Monitoring progress during stage 1
During the migration, progress will be reported on the command line and in the log file. This is an example of the command-line logging from stage 1:
...
2018-03-12 10:26:10.576 [main] {} WARN c.x.deployit.upgrade.Upgrader - Ensure that you're running in 'interactive' mode and look at your console to continue the upgrade process.
2018-03-12 10:26:10.576 [main] {} INFO c.x.deployit.upgrade.Upgrader - Upgraders need to be run, asking user to confirm.
*** WARNING ***
We detected that we need to upgrade your repository
Before continuing we suggest you backup your repository in case the upgrade fails.
Please ensure you have 'INFO' level logging configured.
Please enter 'yes' if you want to continue [no]: yes
2018-03-12 10:26:14.569 [main] {} INFO c.x.deployit.upgrade.Upgrader - User response was: yes
2018-03-12 10:26:14.572 [main] {} INFO c.x.deployit.upgrade.Upgrader - Upgrading to version [deployit 7.5.0]
2018-03-12 10:26:14.706 [main] {} INFO c.x.deployit.upgrade.Upgrader - Upgrading to version [deployit 8.0.0]
2018-03-12 10:26:14.707 [main] {} INFO c.x.d.c.u.Deployit800CiMigrationUpgrader - Found migration of CI data. Asking user confirmation.
*** WARNING ***
This upgrade will migrate the CI data from the previous repository format (JCR) to the new repository format (SQL).
The target database is configured to be jdbc:derby:repository/db;create=true.
Some parts of this database (TableName(XLD_CIS) and TableName(XL_USERS)) tables will be cleared before migration.
Please enter 'yes' if you want to continue [no]: yes
2018-03-12 10:26:18.298 [main] {} INFO c.x.d.migration.JcrToSqlMigrator - Starting CI Migration.
2018-03-12 10:26:18.298 [main] {} INFO c.x.d.migration.JcrToSqlMigrator - Deleting CIs and Users.
2018-03-12 10:26:18.455 [main] {} INFO c.x.d.migration.JcrToSqlMigrator - Migrating metadata.
2018-03-12 10:26:18.458 [main] {} INFO c.x.d.migration.JcrToSqlMigrator - Migrating version info.
2018-03-12 10:26:18.521 [main] {} INFO c.x.d.migration.JcrToSqlMigrator - Migrating security info.
2018-03-12 10:26:18.803 [main] {} INFO c.x.d.migration.JcrToSqlMigrator - Migrating CIs.
2018-03-12 10:26:19.624 [main] {} INFO c.x.d.migration.CiJcrToSqlMigrator - Running migration phase: create CI records on 2309 nodes in batches of 1000.
2018-03-12 10:26:20.115 [main] {} INFO c.x.d.migration.CiJcrToSqlMigrator - Completed batch 1 / total: 3.
2018-03-12 10:26:20.413 [main] {} INFO c.x.d.migration.CiJcrToSqlMigrator - Completed batch 2 / total: 3.
2018-03-12 10:26:20.565 [main] {} INFO c.x.d.migration.CiJcrToSqlMigrator - Completed batch 3 / total: 3.
2018-03-12 10:26:20.689 [main] {} INFO c.x.d.migration.CiJcrToSqlMigrator - Running migration phase: process properties on 2309 nodes in batches of 1000.
2018-03-12 10:26:20.914 [main] {} INFO c.x.deployit.util.JavaCryptoUtils - BouncyCastle already registered as a JCE provider
2018-03-12 10:26:22.424 [main] {} INFO c.x.d.migration.CiJcrToSqlMigrator - Completed batch 1 / total: 3.
2018-03-12 10:26:23.603 [main] {} INFO c.x.d.migration.CiJcrToSqlMigrator - Completed batch 2 / total: 3.
2018-03-12 10:26:24.003 [main] {} INFO c.x.d.migration.CiJcrToSqlMigrator - Completed batch 3 / total: 3.
2018-03-12 10:26:24.068 [main] {} INFO c.x.d.migration.MoveFilesInterceptor - Migrating artifacts from file system. Moving from /Users/mwinkels/8.0.0/xl-deploy-8.0.0-SNAPSHOT-server/repository/repository/datastore to repository/repository/datastore.
2018-03-12 10:26:24.083 [main] {} INFO c.x.d.migration.MoveFilesInterceptor - Migrated artifacts.
2018-03-12 10:26:24.083 [main] {} INFO c.x.d.migration.JcrToSqlMigrator - Done.
2018-03-12 10:26:27.332 [task-sys-akka.actor.default-dispatcher-3] {} INFO akka.event.slf4j.Slf4jLogger - Slf4jLogger started ...
The log file will contain more detailed logging, such as logging for data that is based on types that are no longer available in the type system.
Monitoring progress during stage 2
During stage 2, you can monitor progress using a JMX client such as JConsole, JVisualVM, or JmxTerm (a command-line JMX client).
The following JMX beans are available:
com.xebialabs.xldeploy.migration:name=HistoryMigrationStatistics: This bean can be used to track the progress on the migration of change history. The properties on the bean show 4 different counts for the items to migrate:
- ToProcess: The number of items remaining to process.
- Processed: The number of items successfully processed.
- InError: The number of items that failed during migration. More detailed information on the failure can be found in the log file.
- Ignored: The number of items that were ignored. Items under the Applications root in the system or that are of a type for which versioning is switched off will be ignored.
The process is complete when the number of items in ToProcess reaches 0.
com.xebialabs.xldeploy.migration:name=HistoryMigrationManager: This bean can be used to manage the process. The reset operations will reset the migration flag on the items, including all items or only the items in InError, in the JCR repository, to make them eligible for re-processing. The restart migration operation will restart the process. These operations can only be applied when the migration process is not running.
com.xebialabs.xldeploy.migration:name=ArchiveMigrationStatistics: This bean shows counts on the migration status of items in the task archive. The process is complete when the number of items in ToProcess and the number of items in Migrated reach 0.
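For example, with the command-line JmxTerm client mentioned above (assuming JMX remote access has been enabled on the XL Deploy JVM, e.g. via the standard com.sun.management.jmxremote system properties, and that 9010 is the port you chose), the counters described above can be read like this:
$ java -jar jmxterm-uber.jar
$> open localhost:9010
$> bean com.xebialabs.xldeploy.migration:name=HistoryMigrationStatistics
$> get ToProcess Processed InError Ignored
$> bean com.xebialabs.xldeploy.migration:name=ArchiveMigrationStatistics
$> get ToProcess Migrated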
Removing the Migrator after database migration
After migration is complete you can remove the XL Deploy JCR-to-SQL Migrator from the server.
To remove it:
- Shut down XL Deploy.
- Run the
XL_DEPLOY_SERVER_HOME/bin/uninstall-jcr-to-sql-migrator.shor
XL_DEPLOY_SERVER_HOME/bin/uninstall-jcr-to-sql-migrator.cmdscript. This will remove the Migrator.
- Restart XL Deploy.
If the system does not start correctly at this stage, contact XebiaLabs Support. The issue may be caused by a plugin that depends on the JCR packages. You can add these packages to the server by reinstalling the Migrator. The server will start, but it is likely that the plugin that caused the issue will not work correctly.
Removing data after migration
Remove configuration files
- Remove the files in the configured
errorLogLocationif they exist. Carefully inspect the files to see if there were any problems during migration that should be fixed manually.
- Remove the migration state file:
migration.dat.
Remove the remaining JCR data
After you remove the XL Deploy JCR-to-SQL Migrator, some data will remain in JCR format. XL Deploy does not use this data. You can rename all resources and start XL Deploy to test that it works before removing the resources.
Depending on the configuration of your server, you can remove this data using different methods:
- If you were using an internal database, you can delete the JCR repository from the file system. In the default XL Deploy configuration, this includes everything in the
XL_DEPLOY_SERVER_HOME/repositorydirectory except the
databaseand
artifactsfolders.
- If you were using an external database, you can completely remove this database. You can drop the JCR tables, depending on your RDBMS and configuration. If the new system is using the same schema, make sure you do not drop the tables of the new SQL implementation. These tables are:
DATABASECHANGELOG,
DATABASECHANGELOGLOCK,
PERSISTENT_LOGINSand all tables starting with
XL_or
XLD_.
- If the artifacts were stored on disk in the JCR implementation, these files will be moved or copied to the new location during the migration. The old files, if they exist, can be removed.
It is not required that you delete the data after removing the migrator software. It will not impact the performance of XL Deploy. | https://docs.xebialabs.com/v.9.0/xl-deploy/how-to/migrate-xl-deploy-data-storage-to-an-sql-database/ | 2019-12-06T04:17:13 | CC-MAIN-2019-51 | 1575540484477.5 | [] | docs.xebialabs.com |
This is a general overview. For a deep dive into how to create and update articles, click the link below: docs.acquire.io/creating-and-updating-knowledge-base-articles
Here you can create your own Knowledge Base articles using the toolbar below.
The toolbar has multiple features to customize your articles. Articles that are created can be drafts initially and once approved can be published to your knowledge base.
Using the gear icon next to Add new article, you can manage and add categories. Articles can easily be added to and removed from categories. This is useful if you have a specific product that you feel deserves its own section in your knowledge base.
We provide a pre-defined format to use out of the box. However, you are able to customize all menus and basic functionality to your needs. You can also add links to the footer and also make additional changes using the APIs.
Everything can be customized except the Default help center URL which is generated by the system. Using this URL, your visitors can visit your help docs. However, you can also create a custom knowledge base domain.
You can also add new menu items, including the “Top Button Text” and “Get start Button”. Clicking "Add Field" creates a new menu item.
You can also customize how it looks by changing the predefined settings below.
You can also add your copyright, Privacy policy, Terms and conditions URLs at the footer.
Apart from this, if you want to customize your knowledge base setup further (for example button position, color, and general appearance), you can use our APIs and the developers section of the knowledge base.
| https://docs.acquire.io/knowledge-base-setup-and-articles | 2019-12-06T03:40:02 | CC-MAIN-2019-51 | 1575540484477.5 | [] | docs.acquire.io |
Configuring the Start menu
Use the following steps to configure the Start menu options.
To configure the Start menu
- Press Ctrl+Esc on your keyboard to open the Start menu.
- Type S to open the Settings menu, and press Enter.
The Settings submenu options are displayed.
- Type C for Control Panel.
- Press the T until your hear the words "taskbar and start menu," and then press Enter.
The Taskbar and Start menu properties dialog box appears.
- On the Taskbar tab, perform the following actions:
- If the Group similar taskbar buttons option is selected, press Alt+G to deselect it.
- If the Hide inactive icons option is selected, press Alt+H to deselect it.
- Press Ctrl+Tab to select the Start Menu tab.
- Press Alt+M to select the Classic Start menu option.
- Press Tab until you hear "OK", and press Enter to save the changes.
| https://docs.bmc.com/docs/itsm91/configuring-the-start-menu-608491195.html | 2019-12-06T04:33:29 | CC-MAIN-2019-51 | 1575540484477.5 | [] | docs.bmc.com |
Here you can find some information on how to get the most out of the Marvin JS User's Guide.
This user documentation consists of three parts:
In the Editor Overview section you can find a general description of the editor. The Drawing and editing options part includes all the information about the ways of drawing in Marvin JS.
This section is for you if you would like to know how you can create, change or delete your structures, texts, graphical objects in Marvin JS.
In the Feature overview pages we have collected all the functionalities which might be connected with a special workflow. (For example, the Query Structures in Marvin JS includes all the Marvin JS functionalities which might be useful for querying, etc.)
This section is for you if you are interested in whether Marvin JS has functionalities related to some special activities (e.g. handling Reactions, Query features, Stereochemistry, Markush structures). | https://docs.chemaxon.com/pages/diffpages.action?originalId=6226793&pageId=6226811 | 2019-12-06T03:38:24 | CC-MAIN-2019-51 | 1575540484477.5 | [] | docs.chemaxon.com |
DirectAccess Offline Domain Join
Applies To: Windows Server (Semi-Annual Channel), Windows Server 2016
This guide explains the steps to perform an offline domain join with DirectAccess. During an offline domain join, a computer is configured to join a domain without physical or VPN connection.
This guide includes the following sections:
Offline domain join overview
Requirements for offline domain join
Offline domain join process
Steps for performing an offline domain join
Offline domain join overview
Introduced in Windows Server 2008 R2, domain controllers include a feature called Offline Domain Join. A command line utility named Djoin.exe lets you join a computer to a domain without physically contacting a domain controller while completing the domain join operation. The general steps for using Djoin.exe are:
Run djoin /provision to create the computer account metadata. The output of this command is a .txt file that includes a base-64 encoded blob.
Run djoin /requestODJ to insert the computer account metadata from the .txt file into the Windows directory of the destination computer.
Reboot the destination computer, and the computer will be joined to the domain.
A domain join creates a computer account and establishes a trust relationship between a computer running a Windows operating system and an Active Directory domain.
Prepare for offline domain join
Create the machine account.
Inventory the membership of all security groups to which the machine account belongs.
Gather the required computer certificates, group policies, and group policy objects to be applied to the new client(s).
The following sections explain the operating system and credential requirements for performing a DirectAccess offline domain join using Djoin.exe.
Operating system requirements
You can run Djoin.exe for DirectAccess only on computers that run Windows Server 2016, Windows Server 2012 or Windows 8. The computer on which you run Djoin.exe to provision computer account data into AD DS must be running Windows Server 2016, Windows 10, Windows Server 2012 or Windows 8. The computer that you want to join to the domain must also be running Windows Server 2016, Windows 10, Windows Server 2012, or Windows 8.
Offline domain join process
Run Djoin.exe at an elevated command prompt to provision the computer account metadata. When you run the provisioning command, the computer account metadata is created in a binary file that you specify as part of the command.
Steps for performing a DirectAccess offline domain join
The offline domain join process includes the following steps:
Create a new computer account for each of the remote clients and generate a provisioning package using the Djoin.exe command from an already domain joined computer in the corporate network.
Add the client computer to the DirectAccessClients security group
Transfer the provisioning package securely to the remote computers(s) that will be joining the domain.
Apply the provisioning package and join the client to the domain.
Reboot the client to complete the domain join and establish connectivity.
There are two options to consider when creating the provisioning package for the client. If you used the Getting Started Wizard to install DirectAccess without PKI, then you should use option 1 below. If you used the Advanced Setup Wizard to install DirectAccess with PKI, then you should use option 2 below.
Complete the following steps to perform the offline domain join:
Option1: Create a provisioning package for the client without PKI
At a command prompt of your Remote Access server, type the following command to provision the computer account:
Djoin /provision /domain <your domain name> /machine <remote machine name> /policynames <DA Client GPO name> /rootcacerts /savefile c:\files\provision.txt /reuse
Option2: Create a provisioning package for the client with PKI
At a command prompt of your Remote Access server, type the following command to provision the computer account:
Djoin /provision /machine <remote machine name> /domain <Your Domain name> /policynames <DA Client GPO name> /certtemplate <Name of client computer cert template> /savefile c:\files\provision.txt /reuse
Add the client computer to the DirectAccessClients security group
On your Domain Controller, from Start screen, type Active and select Active Directory Users and Computers from Apps screen.
Expand the tree under your domain, and select the Users container.
In the details pane, right-click DirectAccessClients, and click Properties.
On the Members tab, click Add.
Click Object Types, select Computers, and then click OK.
Type the client name to add, and then click OK.
Click OK to close the DirectAccessClients Properties dialog, and then close Active Directory Users and Computers.
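Alternatively, if the Active Directory module for Windows PowerShell is available, the same membership change can be scripted (a sketch; CLIENT1 is a placeholder for the provisioned computer account name):
# Add the provisioned client computer account to the DirectAccessClients group
Add-ADGroupMember -Identity "DirectAccessClients" -Members (Get-ADComputer "CLIENT1")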
Copy and then apply the provisioning package to the client computer
Copy the provisioning package from c:\files\provision.txt on the Remote Access Server, where it was saved, to c:\provision\provision.txt on the client computer.
On the client computer, open an elevated command prompt, and then type the following command to request the domain join:
Djoin /requestodj /loadfile C:\provision\provision.txt /windowspath %windir% /localos
Reboot the client computer. The computer will be joined to the domain. Following the reboot, the client will be joined to the domain and have connectivity to the corporate network with DirectAccess.
See Also
NetProvisionComputerAccount Function
NetRequestOfflineDomainJoin Function
| https://docs.microsoft.com/en-us/windows-server/remote/remote-access/directaccess/directaccess-offline-domain-join | 2019-12-06T03:15:48 | CC-MAIN-2019-51 | 1575540484477.5 | [] | docs.microsoft.com |
Managing deleted files
When you delete a file in Nextcloud, it is not removed permanently right away; it is moved to the deleted files area, which you can open from the Nextcloud Web interface. There you have options to either restore or permanently delete files.
Quotas
Deleted files are not counted against your storage quota. Only your personal files count against your quota, not files which were shared with you. | https://docs.nextcloud.com/server/15/user_manual/files/deleted_file_management.html | 2019-12-06T03:14:16 | CC-MAIN-2019-51 | 1575540484477.5 | [] | docs.nextcloud.com |
Self-sign certificates for Splunk Web
This topic provides basic examples for creating the self-signed certificates in the command line using the version of OpenSSL included with Splunk software.
Self-signed certificates are best suited for environments where you control the browsers that connect to Splunk Web, such as internal or test deployments; for systems that serve outside users, consider a certificate signed by a trusted third-party Certificate Authority.
Before you begin
In this discussion,
$SPLUNK_HOME refers to the Splunk installation directory.
- For Windows, the default installation directory is
C:\Program Files\splunk.
- For most *nix platforms, the default installation directory is
/opt/splunk.
- For Mac OS, the default installation directory is
/Applications/splunk.
See the Administration Guide to learn more about working with Windows and *nix.
Generate a new root certificate to be your Certificate Authority
1. Create a new directory to host your certificates and keys. For this example we will use
$SPLUNK_HOME/etc/auth/mycerts.
We recommend that you place your new certificates in a different directory than
$SPLUNK_HOME/etc/auth/splunkweb so that you don't overwrite the existing certificates. This ensures that you are able to use the certificates that ship with Splunk software in
$SPLUNK_HOME/etc/auth/splunkweb for other Splunk components as necessary.
Note: If you created a self-signed certificate as described in How to self-sign certificates, you can copy that root certificate into your directory and skip to the next step: Create a new private key for Splunk Web.
2. Generate a new RSA private key. Splunk Web supports 2048 bit keys, but you can specify larger keys if they are supported by your browser.
$SPLUNK_HOME/bin/splunk cmd openssl genrsa -des3 -out myCAPrivateKey.key 2048
Note that in Windows you may need to append the location of the
openssl.cnf file:
$SPLUNK_HOME\bin\splunk cmd openssl genrsa -des3 -out myCAPrivateKey.key 2048
3. When prompted, create a password.
The private key
myCAPrivateKey.key appears in your directory. This is your root certificate private key.
4. Generate a certificate signing request using the root certificate private key
myCAPrivateKey.key:
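The CSR command itself is not shown in this extract; a typical invocation that matches the file names used in the following steps would be (use $SPLUNK_HOME\bin\splunk with backslash paths on Windows):
$SPLUNK_HOME/bin/splunk cmd openssl req -new -key myCAPrivateKey.key -out myCACertificate.csr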
5. Provide the password to the private key
myCAPrivateKey.key.
A new CSR
myCACertificate.csr appears in your directory.
6. Use the CSR to generate a new root certificate and sign it with your private key:
In *nix:
$SPLUNK_HOME/bin/splunk cmd openssl x509 -req -in myCACertificate.csr -signkey myCAPrivateKey.key -out myCACertificate.pem -days 3650
In Windows:
>$SPLUNK_HOME\bin\splunk cmd openssl x509 -req -in myCACertificate.csr -signkey myCAPrivateKey.key -out myCACertificate.pem -days 3650
7. When prompted, provide for the password to the private key
myCAPrivateKey.key.
A new certificate
myCACertificate.pem appears in your directory. This is your public certificate.
Create a new private key for Splunk Web
1. Generate a new private key:
In *nix:
$SPLUNK_HOME/bin/splunk cmd openssl genrsa -des3 -out mySplunkWebPrivateKey.key 2048
In Windows:
$SPLUNK_HOME\bin\splunk cmd openssl genrsa -des3 -out mySplunkWebPrivateKey.key 2048 -config
2. When prompted, create a password.
A new key,
mySplunkWebPrivateKey.key appears in your directory.
3. Remove the password from your key, and verify that the password was removed.
Create and sign a server certificate
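The password-removal command for step 3 and the CSR command for the first step of this section are not shown in this extract; typical invocations that match the file names used below would be:
$SPLUNK_HOME/bin/splunk cmd openssl rsa -in mySplunkWebPrivateKey.key -out mySplunkWebPrivateKey.key
$SPLUNK_HOME/bin/splunk cmd openssl req -new -key mySplunkWebPrivateKey.key -out mySplunkWebCert.csr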
The CSR
mySplunkWebCert.csr appears in your directory.
2. Self-sign the CSR with the root certificate private key
myCAPrivateKey.key:
In *nix:
In Windows:
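The signing commands themselves are not shown in this extract; a typical *nix invocation matching the file names used in these steps would be the following (the validity period is illustrative, and on Windows you would use $SPLUNK_HOME\bin\splunk with backslash paths):
$SPLUNK_HOME/bin/splunk cmd openssl x509 -req -in mySplunkWebCert.csr -CA myCACertificate.pem -CAkey myCAPrivateKey.key -CAcreateserial -out mySplunkWebCert.pem -days 1095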
3. When prompted, provide the password to the root certificate private key
myCAPrivateKey.key.
The certificate
mySplunkWebCert.pem is added to your directory. This is your server certificate.
Create a single PEM file
Combine your server certificate and public certificates, in that order, into a single PEM file.
Here's an example of how to do this in Linux:
# cat mySplunkWebCert.pem myCACertificate.pem > mySplunkWebCertificate.pem
Here's an example in Windows:
# type mySplunkWebCert.pem myCACertificate.pem > mySplunkWebCertificate.pem
Set up certificate chains
To use multiple certificates, append the intermediate certificate to the end of the server's certificate file in the following order:
[ server certificate] [ intermediate certificate] [ root certificate (if required) ]
So for example, a certificate chain might look like this:
-----BEGIN CERTIFICATE----- ... (certificate for your server)... -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- ... (the intermediate certificate)... -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- ... (the root certificate for the CA)... -----END CERTIFICATE-----
Next steps
Now that you have your certificates, you need to distribute them and configure Splunkd and Splunk Web to use them. See Secure Splunk Web with your own certificate in this manual
@Graether: both is true - the certificate created is the root certificate for your private CA and it's also public.
Should
6. Use the CSR to generate a new root certificate and sign it with your private key:
not read
6. Use the CSR to generate a new public certificate and sign it with your private key:
?
Hey Landen99:
These are all really good questions. The general answer is that these instructions are one very simple path, which people with little certificate experience can use to create a simple certificate that will work with Splunk.
There are so many different options, methods, etc. for creating certificates. We wanted to avoid teaching SSL and/or the different approaches as much as possible and instead focus on one simple happy path.
We assume that people working with more complex SSL methods will not need instructions for creating a certificate. I am working on creating some new documentation that will help the advanced or intermediate user more specifically and will take your feedback into account.
Thanks so much for the feedback!
Cheers,
jen
Hi rturk: Good question. We are assuming the possibility that the user might perform one task but not the other. We are also assuming that these tasks are for people not familiar with certificates, so we try to be linear and not include too many shortcuts. I am working on some new topics that will provide more options for experienced users.
Pmeyerson: Yeah that is awkward wording. i've fixed it. Thanks for the tip!
N8lawrence: Indeed it is not the case here. Splunk Web certs work differently than server and forwarder certificates. It's a quirk that honestly I'm not sure I could fully explain other than "that is how we set it up to work." :)
Can we update these instructions to include SAN for current Chrome browser requirements?
-config san.cnf
Why does the Windows command have -config without a file referenced?
Can we include the -subj option where country, city, state, etc can be specified at the CLI?
Why create the key with a password and then remove the password, instead of just using the -nodes option?
The section for creating server certificates asks you to include the server private key in the concatenated pem file - is that deliberately not the case here?
To create a single PEM file in windows you can follow these instructions:
type mysplunkwebcert.pem mycacertificate.pem > mysplunkwebcertificate.pem ... took me a bit to figure out what you meant by this as this is not something I typically have to do.
Thanks.
RE: "Generate a new root certificate to be your Certificate Authority", is there any reason you couldn't or wouldn't want to re-use the root certificate made as part of earlier instructions (i.e. myCAPrivateKey.key). Just thinking in terms of consistency of documentation - Cheers :-)
@Landen99 (ok, it's from June 30, 2017 - but for reference)
`openssl genrsa` does not have an option `-nodes`, this is only available for the `rsa` and `pkcs12` commands. But you can just leave out `-des3` and no password is asked for and no encryption is done. | https://docs.splunk.com/Documentation/Splunk/7.2.4/Security/Self-signcertificatesforSplunkWeb | 2019-12-06T04:04:17 | CC-MAIN-2019-51 | 1575540484477.5 | [array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)] | docs.splunk.com |
Graphics device shader capability level (Read Only).
This is approximate "shader capability" level of the graphics device, expressed in DirectX shader model terms.
Possible values are:
50 Shader Model 5.0 (DX11.0)
46 OpenGL 4.1 capabilities (Shader Model 4.0 + tessellation)
45 Metal / OpenGL ES 3.1 capabilities (Shader Model 3.5 + compute shaders)
40 Shader Model 4.0 (DX10.0)
35 OpenGL ES 3.0 capabilities (Shader Model 3.0 + integers, texture arrays, instancing)
30 Shader Model 3.0
25 Shader Model 2.5 (DX11 feature level 9.3 feature set)
20 Shader Model 2.0.
See Also: shader compilation targets.
#pragma strict

function Start() {
    // Check for shader model 4.5 or better support
    if (SystemInfo.graphicsShaderLevel >= 45)
        print("Woohoo, decent shaders supported!");
}
using UnityEngine;
public class ExampleClass : MonoBehaviour {
    void Start() {
        // Check for shader model 4.5 or better support
        if (SystemInfo.graphicsShaderLevel >= 45)
            print("Woohoo, decent shaders supported!");
    }
}
| https://docs.unity3d.com/2017.1/Documentation/ScriptReference/SystemInfo-graphicsShaderLevel.html | 2019-12-06T04:33:13 | CC-MAIN-2019-51 | 1575540484477.5 | [] | docs.unity3d.com |
Update WinPE 5.0 to WinPE 5.1
Applies To: Windows 8.1, Windows Server 2012 R2
Update Windows PE (WinPE) to version 5.1 to support Windows 8.1 Update and Windows image file boot (WIMBoot) features.
Get the Windows 8.1 Update version of the Windows ADK
- Install the Windows 8.1 Update edition of the Windows ADK.
Download the Windows 8.1 Update packages. Then create a working copy of your Windows PE 5.0 image:
copype amd64 C:\WinPE_amd64
Update your Windows PE image
- Add languages, drivers, packages (optional components) and other customizations. For more info, see WinPE: Mount and Customize, WinPE: Add drivers, WinPE: Add packages (Optional Components Reference).
Add the Windows 8.1 updates to the image
Mount the Windows PE image.
Dism /Mount-Image /ImageFile:"C:\WinPE_amd64\media\sources\boot.wim" /index:1 /MountDir:"C:\WinPE_amd64\mount"
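The step that actually installs the downloaded Windows 8.1 Update packages into the mounted image is not shown here; it is typically a series of Dism /Add-Package calls, one per downloaded .msu or .cab file, with the real package paths substituted:
Dism /Image:"C:\WinPE_amd64\mount" /Add-Package /PackagePath:"C:\WinPE_Updates\<first update package>"
Dism /Image:"C:\WinPE_amd64\mount" /Add-Package /PackagePath:"C:\WinPE_Updates\<next update package>"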
Dism /image:c:\WinPE_amd64\mount /Cleanup-Image /StartComponentCleanup /ResetBase
Unmount the Windows PE image.
Dism /Unmount-Image /MountDir:"C:\WinPE_amd64\mount" /commit
Recommended: Export the image
Export and convert the Windows PE image to a new Windows image file. To reduce the final image size, we recommend performing this step last, so that DISM can remove several superseded files.
Dism /Export-Image /SourceImageFile:C:\WinPE_amd64\media\sources\boot.wim /SourceIndex:1 /DestinationImageFile:C:\WinPE_amd64\media\sources\boot2.wim
Replace the boot.wim file with the new boot2.wim file.
del C:\WinPE_amd64\media\sources\boot.wim rename C:\WinPE_amd64\media\sources\boot2.wim boot.wim
Create media
Create bootable media, such as a USB flash drive.
MakeWinPEMedia /UFD C:\WinPE_amd64 F:
Adding languages after adding Windows 8.1 Update
Tasks
Create WIMBoot Images
WinPE: Create USB Bootable drive | https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-8.1-and-8/dn613859%28v%3Dwin.10%29 | 2019-12-06T03:47:20 | CC-MAIN-2019-51 | 1575540484477.5 | [] | docs.microsoft.com |
public class TransactionSynchronizationManager extends Object
Supports one resource per key without overwriting, that is, a resource needs to be removed before a new one can be set for the same key. Supports a list of transaction synchronizations if synchronization is active.
Resource management code should check for context-bound resources, e.g.
database connections, via
getResource. Such code is normally not
supposed to bind resources to units of work, as this is the responsibility of the transaction manager.
Transaction synchronization is supported by AbstractReactiveTransactionManager, and thus by all standard Spring transaction managers.
Resource management code should only register synchronizations when this
manager is active, which can be checked via
isSynchronizationActive();
it should perform immediate resource cleanup else. If transaction synchronization
isn't active, there is either no current transaction, or the transaction manager
doesn't support transaction synchronization.
Synchronization is for example used to always return the same resources within a transaction, e.g. a database connection for any given connection factory.
isSynchronizationActive(),
registerSynchronization(org.springframework.transaction.reactive.TransactionSynchronization),
TransactionSynchronization
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
public TransactionSynchronizationManager(TransactionContext transactionContext)
public static reactor.core.publisher.Mono<TransactionSynchronizationManager> forCurrentTransaction()
TransactionSynchronizationManagerthat is associated with the current transaction context.
Mainly intended for code that wants to bind resources or synchronizations.
NoTransactionException- if the transaction info cannot be found — for example, because the method was invoked outside a managed transaction
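A minimal usage sketch is shown below; the resource key and value are illustrative placeholders (for example, a ConnectionFactory and a Connection in R2DBC-style resource management code):
import org.springframework.transaction.reactive.TransactionSynchronizationManager;
import reactor.core.publisher.Mono;

class ResourceBindingExample {
    // Bind a resource to the current reactive transaction if it is not bound yet.
    static Mono<Void> bindIfAbsent(Object resourceKey, Object resource) {
        return TransactionSynchronizationManager.forCurrentTransaction()
                .doOnNext(manager -> {
                    if (!manager.hasResource(resourceKey)) {
                        manager.bindResource(resourceKey, resource);
                    }
                })
                .then();
    }
}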
public boolean hasResource(Object key)
key- the key to check (usually the resource factory)
@Nullable public Object getResource(Object key)
key- the key to check (usually the resource factory)
nullif none
public void bindResource(Object key, Object value) throws IllegalStateException
key- the key to bind the value to (usually the resource factory)
value- the value to bind (usually the active resource object)
IllegalStateException- if there is already a value bound to the context
public Object unbindResource(Object key) throws IllegalStateException
key- the key to unbind (usually the resource factory)
IllegalStateException- if there is no value bound to the context
@Nullable public Object unbindResourceIfPossible(Object key)
key- the key to unbind (usually the resource factory)
nullif none bound
public boolean isSynchronizationActive()
registerSynchronization(org.springframework.transaction.reactive.TransactionSynchronization)
public void initSynchronization() throws IllegalStateException
IllegalStateException- if synchronization is already active
public void registerSynchronization(TransactionSynchronization synchronization) throws IllegalStateException
Note that synchronizations can implement the
Ordered interface.
They will be executed in an order according to their order value (if any).
synchronization- the synchronization object to register
IllegalStateException- if transaction synchronization is not active
Ordered
public List<TransactionSynchronization> getSynchronizations() throws IllegalStateException
IllegalStateException- if synchronization is not active
TransactionSynchronization
public void clearSynchronization() throws IllegalStateException
IllegalStateException- if synchronization is not active
public void setCurrentTransactionName(@Nullable String name)
name- the name of the transaction, or
nullto reset it
TransactionDefinition.getName()
@Nullable public String getCurrentTransactionName()
nullif none set. To be called by resource management code for optimizations per use case, for example to optimize fetch strategies for specific named transactions.
TransactionDefinition.getName()
public void setCurrentTransactionReadOnly(boolean readOnly)
readOnly-
trueto mark the current transaction as read-only;
falseto reset such a read-only marker
TransactionDefinition.isReadOnly()
public boolean isCurrentTransactionReadOnly()
Note that transaction synchronizations receive the read-only flag
as argument for the
beforeCommit callback, to be able
to suppress change detection on commit. The present method is meant
to be used for earlier read-only checks.
TransactionDefinition.isReadOnly(),
TransactionSynchronization.beforeCommit(boolean)
public void setCurrentTransactionIsolationLevel(@Nullable Integer isolationLevel)
isolationLevel- the isolation level to expose, according to the R2DBC Connection constants (equivalent to the corresponding Spring TransactionDefinition constants), or
nullto reset it
TransactionDefinition.ISOLATION_READ_UNCOMMITTED,
TransactionDefinition.ISOLATION_READ_COMMITTED,
TransactionDefinition.ISOLATION_REPEATABLE_READ,
TransactionDefinition.ISOLATION_SERIALIZABLE,
TransactionDefinition.getIsolationLevel()
@Nullable public Integer getCurrentTransactionIsolationLevel()
nullif none
TransactionDefinition.ISOLATION_READ_UNCOMMITTED,
TransactionDefinition.ISOLATION_READ_COMMITTED,
TransactionDefinition.ISOLATION_REPEATABLE_READ,
TransactionDefinition.ISOLATION_SERIALIZABLE,
TransactionDefinition.getIsolationLevel()
public void setActualTransactionActive(boolean active)
active-
trueto mark the current context as being associated with an actual transaction;
falseto reset that marker
public boolean isActualTransactionActive()
To be called by resource management code that wants to discriminate between active transaction synchronization (with or without backing resource transaction; also on PROPAGATION_SUPPORTS) and an actual transaction being active (with backing resource transaction; on PROPAGATION_REQUIRED, PROPAGATION_REQUIRES_NEW, etc).
isSynchronizationActive()
public void clear()
clearSynchronization(),
setCurrentTransactionName(java.lang.String),
setCurrentTransactionReadOnly(boolean),
setCurrentTransactionIsolationLevel(java.lang.Integer),
setActualTransactionActive(boolean) | https://docs.spring.io/spring-framework/docs/5.2.0.BUILD-SNAPSHOT/javadoc-api/org/springframework/transaction/reactive/TransactionSynchronizationManager.html | 2019-12-06T03:35:26 | CC-MAIN-2019-51 | 1575540484477.5 | [] | docs.spring.io |
This tutorial will demonstrate the basic concepts of the homography with some codes. For detailed explanations about the theory, please refer to a computer vision course or a computer vision book, e.g.:
The tutorial code can be found here. The images used in this tutorial can be found here (
left*.jpg).
Briefly, the planar homography relates the transformation between two planes (up to a scale factor):
\[ s \begin{bmatrix} x^{'} \\ y^{'} \\ 1 \end{bmatrix} = \boldsymbol{H} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \]
The homography matrix is a
3x3 matrix but with 8 DoF (degrees of freedom) as it is estimated up to a scale. It is generally normalized (see also 1) with \( h_{33} = 1 \) or \( h_{11}^2 + h_{12}^2 + h_{13}^2 + h_{21}^2 + h_{22}^2 + h_{23}^2 + h_{31}^2 + h_{32}^2 + h_{33}^2 = 1 \).
The following examples show different kinds of transformation but all relate a transformation between two planes.
The homography can be estimated using for instance the Direct Linear Transform (DLT) algorithm (see 1 for more information). As the object is planar, the transformation between points expressed in the object frame and projected points into the image plane expressed in the normalized camera frame is a homography. Only because the object is planar, the camera pose can be retrieved from the homography, assuming the camera intrinsic parameters are known (see 2 or 4). This can be tested easily using a chessboard object and
findChessboardCorners() to get the corner locations in the image.
The first thing consists to detect the chessboard corners, the chessboard size (
patternSize), here
9x6, is required:
The object points expressed in the object frame can be computed easily knowing the size of a chessboard square:
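The original tutorial's code blocks are not reproduced in this extract; a condensed sketch of these two steps (variable names and the square size are illustrative) is:
// Detect the 2D chessboard corners in the image
cv::Size patternSize(9, 6);
std::vector<cv::Point2f> corners;
bool found = cv::findChessboardCorners(img, patternSize, corners);

// Build the matching 3D object points from the known square size (here 0.025 m)
float squareSize = 0.025f;
std::vector<cv::Point3f> objectPoints;
for (int i = 0; i < patternSize.height; i++)
    for (int j = 0; j < patternSize.width; j++)
        objectPoints.push_back(cv::Point3f(j * squareSize, i * squareSize, 0));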
The coordinate
Z=0 must be removed for the homography estimation part:
The image points expressed in the normalized camera can be computed from the corner points and by applying a reverse perspective transformation using the camera intrinsics and the distortion coefficients:
The homography can then be estimated with:
A quick solution to retrieve the pose from the homography matrix is (see 5):
\[ \begin{align*} \boldsymbol{X} &= \left( X, Y, 0, 1 \right ) \\ \boldsymbol{x} &= \boldsymbol{P}\boldsymbol{X} \\ &= \boldsymbol{K} \left[ \boldsymbol{r_1} \hspace{0.5em} \boldsymbol{r_2} \hspace{0.5em} \boldsymbol{r_3} \hspace{0.5em} \boldsymbol{t} \right ] \begin{pmatrix} X \\ Y \\ 0 \\ 1 \end{pmatrix} \\ &= \boldsymbol{K} \left[ \boldsymbol{r_1} \hspace{0.5em} \boldsymbol{r_2} \hspace{0.5em} \boldsymbol{t} \right ] \begin{pmatrix} X \\ Y \\ 1 \end{pmatrix} \\ &= \boldsymbol{H} \begin{pmatrix} X \\ Y \\ 1 \end{pmatrix} \end{align*} \]
\[ \begin{align*} \boldsymbol{H} &= \lambda \boldsymbol{K} \left[ \boldsymbol{r_1} \hspace{0.5em} \boldsymbol{r_2} \hspace{0.5em} \boldsymbol{t} \right ] \\ \boldsymbol{K}^{-1} \boldsymbol{H} &= \lambda \left[ \boldsymbol{r_1} \hspace{0.5em} \boldsymbol{r_2} \hspace{0.5em} \boldsymbol{t} \right ] \\ \boldsymbol{P} &= \boldsymbol{K} \left[ \boldsymbol{r_1} \hspace{0.5em} \boldsymbol{r_2} \hspace{0.5em} \left( \boldsymbol{r_1} \times \boldsymbol{r_2} \right ) \hspace{0.5em} \boldsymbol{t} \right ] \end{align*} \]
This is a quick solution (see also 2) as this does not ensure that the resulting rotation matrix will be orthogonal and the scale is estimated roughly by normalize the first column to 1.
A solution to have a proper rotation matrix (with the properties of a rotation matrix) consists to apply a polar decomposition (see 6 or 7 for some information):
To check the result, the object frame projected into the image with the estimated camera pose is displayed:
In this example, a source image will be transformed into a desired perspective view by computing the homography that maps the source points into the desired points. The following image shows the source image (left) and the chessboard view that we want to transform into the desired chessboard view (right).
The first step is to detect the chessboard corners in the source and desired images:
The homography is estimated easily with:
To warp the source chessboard view into the desired chessboard view, we use cv::warpPerspective
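In code, the two steps look roughly like this (corners1 and corners2 are the corner lists detected in the source and desired images):
cv::Mat H = cv::findHomography(corners1, corners2);
cv::Mat img1_warp;
cv::warpPerspective(img1, img1_warp, H, img1.size());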
The result image is:
To compute the coordinates of the source corners transformed by the homography:
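A sketch of this step:
std::vector<cv::Point2f> corners1_transformed;
cv::perspectiveTransform(corners1, corners1_transformed, H);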
To check the correctness of the calculation, the matching lines are displayed:
The homography relates the transformation between two planes, and it is possible to retrieve the corresponding camera displacement that allows going from the first to the second plane view (see [144] for more information). Before going into the details of how to compute the homography from the camera displacement, here are some reminders about camera pose and homogeneous transformations.
The function cv::solvePnP allows computing the camera pose from corresponding 3D object points (points expressed in the object frame) and projected 2D image points (object points viewed in the image). The intrinsic parameters and the distortion coefficients are required (see the camera calibration process).
\[ \begin{align*} s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} &= \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} r_{11} & r_{12} & r_{13} & t_x \\ r_{21} & r_{22} & r_{23} & t_y \\ r_{31} & r_{32} & r_{33} & t_z \end{bmatrix} \begin{bmatrix} X_o \\ Y_o \\ Z_o \\ 1 \end{bmatrix} \\ &= \boldsymbol{K} \hspace{0.2em} ^{c}\textrm{M}_o \begin{bmatrix} X_o \\ Y_o \\ Z_o \\ 1 \end{bmatrix} \end{align*} \]
\( \boldsymbol{K} \) is the intrinsic matrix and \( ^{c}\textrm{M}_o \) is the camera pose. The output of cv::solvePnP is exactly this:
rvec is the Rodrigues rotation vector and
tvec the translation vector.
\( ^{c}\textrm{M}_o \) can be represented in a homogeneous form and allows to transform a point expressed in the object frame into the camera frame:
\[ \begin{align*} \begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix} &= \hspace{0.2em} ^{c}\textrm{M}_o \begin{bmatrix} X_o \\ Y_o \\ Z_o \\ 1 \end{bmatrix} \\ &= \begin{bmatrix} ^{c}\textrm{R}_o & ^{c}\textrm{t}_o \\ 0_{1\times3} & 1 \end{bmatrix} \begin{bmatrix} X_o \\ Y_o \\ Z_o \\ 1 \end{bmatrix} \\ &= \begin{bmatrix} r_{11} & r_{12} & r_{13} & t_x \\ r_{21} & r_{22} & r_{23} & t_y \\ r_{31} & r_{32} & r_{33} & t_z \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X_o \\ Y_o \\ Z_o \\ 1 \end{bmatrix} \end{align*} \]
Transforming a point expressed in one frame to another frame can be easily done with matrix multiplication:
To transform a 3D point expressed in the camera 1 frame to the camera 2 frame:
\[ ^{c_2}\textrm{M}_{c_1} = \hspace{0.2em} ^{c_2}\textrm{M}_{o} \cdot \hspace{0.1em} ^{o}\textrm{M}_{c_1} = \hspace{0.2em} ^{c_2}\textrm{M}_{o} \cdot \hspace{0.1em} \left( ^{c_1}\textrm{M}_{o} \right )^{-1} = \begin{bmatrix} ^{c_2}\textrm{R}_{o} & ^{c_2}\textrm{t}_{o} \\ 0_{3 \times 1} & 1 \end{bmatrix} \cdot \begin{bmatrix} ^{c_1}\textrm{R}_{o}^T & - \hspace{0.2em} ^{c_1}\textrm{R}_{o}^T \cdot \hspace{0.2em} ^{c_1}\textrm{t}_{o} \\ 0_{1 \times 3} & 1 \end{bmatrix} \]
In this example, we will compute the camera displacement between two camera poses with respect to the chessboard object. The first step is to compute the camera poses for the two images:
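A sketch of the pose computation (cameraMatrix and distCoeffs come from the camera calibration, objectPoints and the detected corners from the chessboard):
cv::Mat rvec1, tvec1, rvec2, tvec2;
cv::solvePnP(objectPoints, corners1, cameraMatrix, distCoeffs, rvec1, tvec1);
cv::solvePnP(objectPoints, corners2, cameraMatrix, distCoeffs, rvec2, tvec2);
cv::Mat R1, R2;
cv::Rodrigues(rvec1, R1);   // rotation vectors to rotation matrices
cv::Rodrigues(rvec2, R2);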
The camera displacement can be computed from the camera poses using the formulas above:
The homography related to a specific plane computed from the camera displacement is:
On this figure,
n is the normal vector of the plane and
d the distance between the camera frame and the plane along the plane normal. The equation to compute the homography from the camera displacement is:
\[ ^{2}\textrm{H}_{1} = \hspace{0.2em} ^{2}\textrm{R}_{1} - \hspace{0.1em} \frac{^{2}\textrm{t}_{1} \cdot n^T}{d} \]
Where \( ^{2}\textrm{H}_{1} \) is the homography matrix that maps the points in the first camera frame to the corresponding points in the second camera frame, \( ^{2}\textrm{R}_{1} = \hspace{0.2em} ^{c_2}\textrm{R}_{o} \cdot \hspace{0.1em} ^{c_1}\textrm{R}_{o}^{T} \) is the rotation matrix that represents the rotation between the two camera frames and \( ^{2}\textrm{t}_{1} = \hspace{0.2em} ^{c_2}\textrm{R}_{o} \cdot \left( - \hspace{0.1em} ^{c_1}\textrm{R}_{o}^{T} \cdot \hspace{0.1em} ^{c_1}\textrm{t}_{o} \right ) + \hspace{0.1em} ^{c_2}\textrm{t}_{o} \) the translation vector between the two camera frames.
Here the normal vector
n is the plane normal expressed in the camera frame 1 and can be computed as the cross product of 2 vectors (using 3 non collinear points that lie on the plane) or in our case directly with:
The distance
d can be computed as the dot product between the plane normal and a point on the plane or by computing the plane equation and using the D coefficient:
The projective homography matrix \( \textbf{G} \) can be computed from the Euclidean homography \( \textbf{H} \) using the intrinsic matrix \( \textbf{K} \) (see [144]), here assuming the same camera between the two plane views:
\[ \textbf{G} = \gamma \textbf{K} \textbf{H} \textbf{K}^{-1} \]
In our case, the Z-axis of the chessboard goes inside the object whereas in the homography figure it goes outside. This is just a matter of sign:
\[ ^{2}\textrm{H}_{1} = \hspace{0.2em} ^{2}\textrm{R}_{1} + \hspace{0.1em} \frac{^{2}\textrm{t}_{1} \cdot n^T}{d} \]
We will now compare the projective homography computed from the camera displacement with the one estimated with cv::findHomography
The homography matrices are similar. If we compare the image 1 warped using both homography matrices:
Visually, it is hard to distinguish a difference between the result image from the homography computed from the camera displacement and the one estimated with cv::findHomography function.
OpenCV 3 contains the function cv::decomposeHomographyMat which allows to decompose the homography matrix to a set of rotations, translations and plane normals. First we will decompose the homography matrix computed from the camera displacement:
The results of cv::decomposeHomographyMat are:
The result of the decomposition of the homography matrix can only be recovered up to a scale factor that corresponds in fact to the distance
d as the normal is unit length. As you can see, there is one solution that matches almost perfectly with the computed camera displacement. As stated in the documentation:
As the result of the decomposition is a camera displacement, if we have the initial camera pose \( ^{c_1}\textrm{M}_{o} \), we can compute the current camera pose \( ^{c_2}\textrm{M}_{o} = \hspace{0.2em} ^{c_2}\textrm{M}_{c_1} \cdot \hspace{0.1em} ^{c_1}\textrm{M}_{o} \) and test if the 3D object points that belong to the plane are projected in front of the camera or not. Another solution could be to retain the solution with the closest normal if we know the plane normal expressed at the camera 1 pose.
The same thing but with the homography matrix estimated with cv::findHomography
Again, there is also a solution that matches with the computed camera displacement.
The homography transformation applies only for planar structure. But in the case of a rotating camera (pure rotation around the camera axis of projection, no translation), an arbitrary world can be considered (see previously).
The homography can then be computed using the rotation transformation and the camera intrinsic parameters as (see for instance 8):
\[ s \begin{bmatrix} x^{'} \\ y^{'} \\ 1 \end{bmatrix} = \bf{K} \hspace{0.1em} \bf{R} \hspace{0.1em} \bf{K}^{-1} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \]
To illustrate, we used Blender, a free and open-source 3D computer graphics software, to generate two camera views with only a rotation transformation between each other. More information about how to retrieve the camera intrinsic parameters and the
3x4 extrinsic matrix with respect to the world can be found in 9 (an additional transformation is needed to get the transformation between the camera and the object frames) with Blender.
The figure below shows the two generated views of the Suzanne model, with only a rotation transformation:
With the known associated camera poses and the intrinsic parameters, the relative rotation between the two views can be computed:
Here, the second image will be stitched with respect to the first image. The homography can be calculated using the formula above:
The stitching is made simply with:
The resulting image is: | https://docs.opencv.org/master/d9/dab/tutorial_homography.html | 2019-12-06T04:30:21 | CC-MAIN-2019-51 | 1575540484477.5 | [] | docs.opencv.org |
Let's take a simple HelloWorld scenario where there is a BPMN process that prints out a 'Hello World!' message when a process instance is initiated. In this tutorial, you create a BPMN process using the WSO2 Enterprise Integrator (WSO2 EI) Tooling.
Before you begin,
-:
Select and download the relevant WSO2 EI tooling ZIP file based on your operating system from here and then extract the ZIP file.
The path to this folder is referred to as
<EI_TOOLING>throughout this tutorial.
Getting an error message? See the troubleshooting tips given under Installing Enterprise Integrator Tooling.
Creating the artifacts
Follow the steps below to create the requires artifacts.
Creating the BPMN project
- Create a new BPMN project named HelloWorldBPMN. For instructions, see Creating a BPMN Project.
- Create a BPMN Diagram named HelloWorld.bpmn. For instructions, see Creating the BPMN Diagram.
Add a Start Event, Service Task, and End Event and connect them as shown below to create a basic process.
You view the Create connection option when you hover the mouse pointer on an artifact. Click on the arrow, drag it and drop it on the artifact to which you want to connect it.
- Click anywhere on the canvas, go to the Properties tab, and fill in the following details:
Id :
helloworld
Name :
Hello World Process
Namespace:
Creating the Maven project
The BPMN Project includes only the model of the Service Task. You need to create the implementation of it separately in a Maven Project. Follow the steps below to create the Maven Project for the Service Task.
- Create a Maven project for the HelloWorld Service Task by navigating to File > New > Other and searching for the Maven Project Wizard. Click Next.
- Select Create a simple project (skip archetype section) option and click Next.
- Enter the following details and click Finish.
Group Id:
org.wso2.bpmn
Artifact Id:
HelloWorldServiceTask
Click Open Perspective on the below message, which pops up.
You will not get this message if you are already in the Activiti perspective. You can view the current perspective from the below tab.
- Adding external JARs to the Service Task:
- In the Project Explorer, right click on the project HelloWorldServiceTask and select Properties.
- In the window that opens up click, Java Build Path, go to the Libraries tab and click on Add External JARs.
- Select the
activiti-all_5.21.0.wso2v1.jarfile from the
<EI_Home>/wso2/components/pluginsdirectory.
- Click Open and then click Apply and Close.
Creating the Java Package for the Maven Project
- Navigate to File -> New -> Other and search for the Package wizard to create a Java package and click Next.
- Create a package named
org.wso2.bpmn.helloworld.v1, and click Finish.
Creating the Java Class for the Maven Project
Navigate to File -> New -> Class to create a Java Class for HelloWorld Service task implementation.
Create a class named HelloWorldServiceTaskV1 and add
org.activiti.engine.delegate.JavaDelegateinterface to your class.
Click Finish.
Implement the business logic of the HelloWorld Service Task in the
HelloWorldServiceTaskV1.javafile as shown below. { public void execute(DelegateExecution arg0) throws Exception { System.out.println("Hello World ...!!!"); } }
Configure HelloWorld Service Task Class name.
To do this go to your HelloWorld BPMN diagram and select the Hello World Service Task box in the diagram.
- Access the Properties tab and select the Main Config tab.
For the Class name field, select
HelloWorldServiceTaskV1and save all changes.
Best PracticeClick here for best practices...
When you create a Java Service Task, ensure that you version your java package or classes by adding a version number in the Java Package path or Class name. This is useful when you have multiple versions of the same workflow, and when you want to change Service task business logic in each process version. Having versions avoids business logic changes in service tasks from affecting new or running process instances that are created from old process versions.
The following example demonstrates why it is important to version your java package:Version 1 { @Override public void execute(DelegateExecution arg0) throws Exception { System.out.println("Hello World ...!!!"); } }Version 2
package org.wso2.bpmn.helloworld.v2; import org.activiti.engine.delegate.DelegateExecution; import org.activiti.engine.delegate.JavaDelegate; /** * Hello World Service Task - Version 2. */ public class HelloWorldServiceTaskV2 implements JavaDelegate { @Override public void execute(DelegateExecution arg0) throws Exception { // In version 2, Hello World string is improved. System.out.println("Hello World ...!!! This is Second version of HelloWorld BPMN process."); } }
Note
If you want to use business rules in a BPMN process, you can create a sequence with the Rule Mediator via the ESB Profile, expose it as a service, and then use the BPMN REST task or BPMN SOAP task to invoke the service.Alternatively, you can use a BPMN service task to perform business rule validations.
Press Ctrl+S to save all your artifacts.
Deploying the artifacts
Follow the steps below to deploy the artifacts.
Deploying artifacts of the BPMN Project
- For instructions on creating the deployable artifacts, see Creating the deployable archive.
- For instructions on deploying them, see Deploying BPMN artifacts.
Deploying artifacts of the Maven Project
Add the following dependency to the
pom.xmlfile of the Service Task as shown below.
<packaging>jar</packaging> <dependencies> <dependency> <groupId>org.activiti</groupId> <artifactId>activiti-engine</artifactId> <version>5.17.0</version> </dependency> </dependencies>
In the Package Explorer, right click on the HelloWorldServiceTask, and click Run as → 7 Maven install.
This builds the
<ECLIPSE-WORKSPACE>/HelloWorldServiceTask directory and creates a compressed JAR file. The
HelloWorldServiceTask-1.0.0.jarfile is created in the
<eclipse-workspace>/HelloWorldServiceTask/targetdirectory.
If you are unsure of the path, right-click
HelloWorldServiceTask, and click Properties. The path is listed under Location.
You can view is the build is successful in the logs printed on the
pom.xmltab.
- Copy the
HelloWorldServiceTask-1.0.0.jarfile to the
<EI_HOME>/libdirectory.
- Restart the Business Process profile of WSO2 EI.
Testing the output
Follow the steps below to test the output.
- Log into the BPMN-explorer at using
adminfor both the username and password.
- Click Start to start the Hello World Process.
- In the terminal, the
"Hello World ...!!!"string is printed out. | https://docs.wso2.com/display/EI630/Working+with+Service+Tasks | 2019-12-06T02:57:15 | CC-MAIN-2019-51 | 1575540484477.5 | [] | docs.wso2.com |
Message-ID: <1932854544.96499.1575607236529.JavaMail.j2ee-conf@bmc1-rhel-confprod1> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_96498_1041382218.1575607236528" ------=_Part_96498_1041382218.1575607236528 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location:
This monitoring profile helps you mo= nitor the following areas related to the Top-N Process Monitor. Each of the= se areas is a monitor type that is enabled for default monitoring. Click th= e listed link for information about the monitor type and the attributes ass= ociated with it. Click Add to directly apply thi= s profile and enable default monitoring.
Some of these monitor types need to be manua= lly configured and require user input; for information about the configurat= ion steps, click the link next to the monitor type. | https://docs.bmc.com/docs/exportword?pageId=511346378 | 2019-12-06T04:40:36 | CC-MAIN-2019-51 | 1575540484477.5 | [] | docs.bmc.com |
meta data for this page
/api/administrators/ID
This feature is scheduled for release by the end of November 2018
HEAD, GET
List the details of the administrator in question.
Syntax
GET /api/administrators/ID Host: apply.example.edu Authorization: DREAM apikey="..."
Response headers
Content-Type: application/json Content-Length: 1234
Response example
{ "name": "Joe Smith", "email": "[email protected]", "phone": "123456789", "function": "Head of Admissions" } | https://docs.dreamapply.com/doku.php?id=api:api_administrators_id&rev=1542392284 | 2019-12-06T03:02:37 | CC-MAIN-2019-51 | 1575540484477.5 | [] | docs.dreamapply.com |
Join Public Testnet
Instructions on how to join an existing public testnet.
As soon as we start the public testnet event you would be able to see the genesis file and other required configurations and seed-nodes here.
Join public testnet¶
Step 1: Get Heimdall genesis config¶
$ git clone //NOTE: Do make sure to join the relevant folder $ cd public-testnets/<testnet version> // Example: $ cd public-testnets/CS-1001 // copy genesis file to config directory $ cp heimdall-genesis.json ~/.heimdalld/config/genesis.json // copy config file to config directory $ cp heimdall-config.toml ~/.heimdalld/config/heimdall-config.toml // Generate ropsten api key if you don't have one. // Generate API key using: // NOTE: Add your api key in ~/.heimdalld/config/heimdall-config.toml under the key "eth_RPC_URL"
Do check the checksums of the files from here:
Step 2: Configure peers for Heimdall¶
Peers are the other nodes you want to sync to in order to maintain your full node. You can add peers separated by commas in file at
~/.heimdalld/config/config.toml under
persistent_peers with the format
NodeID@IP:PORT or
NodeID@DOMAIN:PORT
Refer to
heimdall-seeds.txt for peer info in your testnet folder.
Step 3: Start & sync Heimdall¶
You can start heimdall and other associated services like rest-server now using the link below!
Click here to understand how you can Run Heimdall. NOTE: If you are starting heimdall after a crash or simply changed genesis files you need to reset heimdall before moving forward.
Step 4: Initialise genesis block for Bor¶
// go to bor-config directory $ cd bor-config // Using genesis file of validator bor node $ cp ../<testnet version>/bor-genesis.json genesis.json // initialize Genesis Block $ $GOPATH/src/github.com/maticnetwork/bor/build/bin/bor --datadir dataDir init genesis.json
Step 5: Configure peers for Bor¶
To sync blocks on the testnet, you need to add peers. The file
static-nodes.json contains information for all the availalble seed nodes. Let's copy this file to your datadir so that when you start your nodes you already have peers!
$ cp static-nodes.json ../bor-config/dataDir/bor/
Step 6: Start Bor¶
$ bash start.sh
Your
bor-node should be syncing now! Checkout
logs/bor.log to get to the logs 🤩
Step 7: Query data¶
To see examples on how to query your full node and get network status, please refer here: | https://docs.matic.network/staking/join-public-testnet/ | 2019-12-06T03:29:17 | CC-MAIN-2019-51 | 1575540484477.5 | [] | docs.matic.network |
AppointmentItem.RTFBody property (Outlook)
Returns or sets a Byte array that represents the body of the Microsoft Outlook item in Rich Text Format. Read/write.
Syntax
expression.
RTFBody
expression A variable that represents an 'AppointmentItem' object.
Remarks
You can use the StrConv function in Microsoft Visual Basic for Applications (VBA), or the System.Text.Encoding.AsciiEncoding.GetString() method in C# or Visual Basic to convert an array of bytes to a string.
Example
The following code samples in Microsoft Visual Basic for Applications (VBA) and C# displays the Rich Text Format body of the appointment in the active inspector. An AppointmentItem must be the active inspector for this code to work.
Sub GetRTFBodyForMeeting() Dim oAppt As Outlook.AppointmentItem Dim strRTF As String If Application.ActiveInspector.CurrentItem.Class = olAppointment Then Set oAppt = Application.ActiveInspector.CurrentItem strRTF = StrConv(oAppt.RTFBody, vbUnicode) Debug.Print strRTF End If End Sub
private void GetRTFBodyForAppt() { if (Application.ActiveInspector().CurrentItem is Outlook.AppointmentItem) { Outlook.AppointmentItem appt = Application.ActiveInspector().CurrentItem as Outlook.AppointmentItem; byte[] byteArray = appt.RTFBody as byte[]; System.Text.Encoding encoding = new System.Text.ASCIIEncoding(); string RTF = encoding.GetString(byteArray); Debug.WriteLine(RTF); } }
See also
Support and feedback
Have questions or feedback about Office VBA or this documentation? Please see Office VBA support and feedback for guidance about the ways you can receive support and provide feedback. | https://docs.microsoft.com/en-us/office/vba/api/outlook.appointmentitem.rtfbody | 2019-12-06T03:41:59 | CC-MAIN-2019-51 | 1575540484477.5 | [] | docs.microsoft.com |
Framework
Element.
Framework Tag Element.
Framework Tag Element.
Framework Tag Element.
Property
Tag
Definition
public : Platform::Object Tag { get; set; }
winrt::Windows::Foundation::IInspectable Tag(); void Tag(winrt::Windows::Foundation::IInspectable tag);
public object Tag { get; set; }
Public ReadWrite Property Tag As object
<frameworkElement> <frameworkElement.Tag> object* </frameworkElement.Tag> </frameworkElement>.
Feedback | https://docs.microsoft.com/en-us/uwp/api/windows.ui.xaml.frameworkelement.tag | 2019-12-06T03:35:20 | CC-MAIN-2019-51 | 1575540484477.5 | [] | docs.microsoft.com |
Table of Contents
The Barcode Completion feature gives users the ability to only enter the unique part of patron and item barcodes. This can significantly reduce the amount of typing required for manual barcode input. This feature was also known as Lazy Circ at one point.
This feature can also be used if there is a difference between what the barcode scanner outputs and what is stored in the database, as long as the barcode that is stored has more characters then what the scanner is outputting. Barcode Completion is additive only, you cannot use it match a stored barcode that has less characters than what is entered. For example, if your barcode scanners previously output a123123b and now exclude the prefix and suffix, you could match both formats using Barcode Completion rules.
Because this feature adds an extra database search for each enabled rule to the process of looking up a barcode, it can add extra delays to the check-out process. Please test in your environment before using in production.
Released: 2.2 - June 2012
Local Administrator permission is needed to access the admin interface of the Barcode Completion feature.
Each rule requires an owner org unit, which is how scoping of the rules is handled. Rules are applied for staff users with the same org unit or descendants of that org unit. | http://docs-testing.evergreen-ils.org/docs/2.10/_barcode_completion.html | 2017-08-16T15:20:01 | CC-MAIN-2017-34 | 1502886102307.32 | [] | docs-testing.evergreen-ils.org |
Table of Contents
Evergreen includes a self check interface designed for libraries that simply want to record item circulation without worrying about security mechanisms like magnetic strips or RFID tags.
The self check interface runs in a web browser. Before patrons can use the self check station, a staff member must initialize the interface by logging in.
https://[hostname]/eg/circ/selfcheck/main, where [hostname] represents the host name of your Evergreen web server.
When the self check prints a receipt, the default template includes the library’s hours of operation in the receipt. If the library has no configured hours of operation, the attempt to print a receipt fails and the browser hangs.
Several library settings control the behavior of the self check:
Audio Alerts: Plays sounds when events occur in the self check. These
events are defined in the
templates/circ/selfcheck/audio_config.tt2
template. To use the default sounds, you could run the following command
from your Evergreen server as the root user (assuming that
/openils/ is your install prefix):
cp -r /openils/var/web/xul/server/skin/media/audio /openils/var/web/.
config.copy_statusdatabase table.
?ws=[workstation]parameter, where [workstation] is the name of a registered Evergreen workstation, or the staff member must register a new workstation when they login. The workstation parameter ensures that check outs are recorded as occurring at the correct library. | http://docs-testing.evergreen-ils.org/docs/2.10/_self_checkout.html | 2017-08-16T15:14:34 | CC-MAIN-2017-34 | 1502886102307.32 | [] | docs-testing.evergreen-ils.org |
Display prices without decimal values
Sometimes, conversions result in strange prices values that are not typically seen on retail stores (for example, $3.00 --> $4.39). If you'd rather display the converted prices without decimal values, follow the steps below.
1. Starting from your Shopify admin dashboard, click on Online Store, then click Themes.
2. Find the theme you want to edit, click the Actions button, then click Edit HTML/CSS.
3. On the left side, under the Layout heading, click on the theme.liquid file.
4. Copy the snippet below.
<script type="text/javascript"> var Shoppad = { apps: { coin: { config: { moneyFormat: 'amount_no_decimals' } } } }; </script>
5. Paste it directly after the <head> tag.
4. Save your changes.
Note: This will not affect your base currency, as it's formatting is controlled by Shopify. Check out this Shopify article for information on how to update your base currency's formatting settings. | http://docs.theshoppad.com/article/118-display-prices-without-decimal-values | 2017-08-16T15:11:23 | CC-MAIN-2017-34 | 1502886102307.32 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/555e25e4e4b027e1978e1c9a/images/596d6518042863033a1b2d6c/file-LfkopedQkW.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/555e25e4e4b027e1978e1c9a/images/56dfb16d9033601b7c7dd1b1/file-VL3aCibXJp.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/555e25e4e4b027e1978e1c9a/images/5863205490336009736bf83f/file-hTo7PAFz7d.png',
None], dtype=object) ] | docs.theshoppad.com |
Argo CD Image Updater¶
A tool to automatically update the container images of Kubernetes workloads that are managed by Argo CD.
A note on the current status
Argo CD Image Updater is under active development.
You are welcome to test it out on non-critical environments, and to contribute by sending bug reports, enhancement requests and - most appreciated - pull requests.
There will be (probably a lot of) breaking changes from release to release as development progresses until version 1.0. We will do our best to indicate any breaking change and how to un-break it in the respective release notes
Overview¶
The Argo CD Image Updater can check for new versions of the container images
that are deployed with your Kubernetes workloads and automatically update them
to their latest allowed version using Argo CD. It works by setting appropriate
application parameters for Argo CD applications, i.e. similar to
argocd app set --helm-set image.tag=v1.0.1 - but in a fully automated
manner.
Usage is simple: You annotate your Argo CD
Application resources with a list
of images to be considered for update, along with a version constraint to
restrict the maximum allowed new version for each image. Argo CD Image Updater
then regularly polls the configured applications from Argo CD and queries the
corresponding container registry for possible new versions. If a new version of
the image is found in the registry, and the version constraint is met, Argo CD
Image Updater instructs Argo CD to update the application with the new image.
Depending on your Automatic Sync Policy for the Application, Argo CD will either automatically deploy the new image version or mark the Application as Out Of Sync, and you can trigger the image update manually by syncing the Application. Due to the tight integration with Argo CD, advanced features like Sync Windows, RBAC authorization on Application resources etc. are fully supported.
Features¶
- Updates images of apps that are managed by Argo CD and are either generated from Helm or Kustomize tooling
- Update app images according to different update strategies
semver: update to highest allowed version according to given image constraint,
latest: update to the most recently created image tag,
name: update to the last tag in an alphabetically sorted list
- Default support for public images on widely used container registries:
- Docker Hub (docker.io)
- Google Container Registry (gcr.io)
- Red Hat Quay (quay.io)
- GitHub Container Registry (docker.pkg.github.com)
- Support for private container registries via configuration
- Ability to filter list of tags returned by a registry using matcher functions
- Support for custom, per-image pull secrets (using generic K8s secrets, K8s pull secrets or environment variables)
- Runs in a Kubernetes cluster or can be used stand-alone from the command line
- Ability to perform parallel update of applications
Limitations¶
The two most important limitations first. These will most likely not change anywhere in the near future, because they are limitations by design.
Please make sure to understand these limitations, and do not send enhancement requests or bug reports related to the following:
The applications you want container images to be updated must be managed using Argo CD. There is no support for workloads not managed using Argo CD.
Argo CD Image Updater can only update container images for applications whose manifests are rendered using either Kustomize or Helm and - especially in the case of Helm - the templates need to support specifying the image's tag (and possibly name) using a parameter (i.e.
image.tag).
Otherwise, current known limitations are:
- Image pull secrets must exist in the same Kubernetes cluster where Argo CD Image Updater is running in (or has access to). It is currently not possible to fetch those secrets from other clusters.
Questions, help and support¶
If you have any questions, need some help in setting things up or just want to discuss something, feel free to
open an issue on our GitHub issue tracker or
join us in the
#argo-cd-image-updaterchannel on the CNCF slack | https://argocd-image-updater.readthedocs.io/en/release-0.11/ | 2021-11-27T07:43:03 | CC-MAIN-2021-49 | 1637964358153.33 | [] | argocd-image-updater.readthedocs.io |
Numbered List in Text Document
In this article, we will use GroupDocs.Assembly to generate a Numbered List report in Text Document format.
The code uses some of the objects defined in The Business Layer.
This feature is supported by version 17.03 or greater.
Numbered List in Text Document
Reporting Requirement
As a report developer, you are required to describe the services you are providing with the following key requirements:
- The report must show the products in the numbered list.
- The report must be generated in the Text Document.
Adding Syntax to be evaluated by GroupDocs.Assembly Engine
We provide support for the following products: <<foreach [in products]>><<[NumberOf()]>>. <<[ProductName]>> <</foreach>>
For detailed technical information about syntax, expressions and report generation by the engine, please visit: Working with GroupDocs.Assembly Engine.
Download Numbered List Template
Please download the sample Numbered List document we created in this article: | https://docs.groupdocs.com/assembly/net/numbered-list-in-text-document/ | 2021-11-27T08:18:37 | CC-MAIN-2021-49 | 1637964358153.33 | [] | docs.groupdocs.com |
Welcome to the Mobiscroll API Reference. You can find the most up to date information about usage and customization options here.
Having trouble? Ask for help.
Getting Started with Mobiscroll for Angular 2 or Ionic 3 please follow this guide.
Using the Mobiscroll CLI and NPM
Step 1: Create an app
If you don't have an app at hand, create a starter with the Angular CLI (sudo access might be required).
$ ng new my-starter-app $ cd my-starter-app --version=5
--versionflag refers to the Mobiscroll version.
The command will ask for your credentials (email and password). If you already have a mobiscroll account but you are unsure about the details, you find them on your account page here.. If you don't have an account yet, you can start a trial in no time following these steps.
Step 3: Let's see if Mobiscroll was installed correctly
To test it let's add a simple input to one of your pages, like
src/app/app.component.html
<mbsc-form> <mbsc-input [(ngModel)]="myName">Name</mbsc-input> <mbsc-date [(ngModel)]="myBirthday">Birthdate</mbsc-date> </mbsc-form>
To build the app just run the serve command in the CLI:
$ ng serve --open
If you are using multiple @NgModules in your app, the MbscModule should be imported in all modules you want to use the components in.
See how to import Mobiscroll to other modules.
Setting up a downloaded Mobiscroll package in your Angular app
If you don't have access to the full framework of Mobiscroll components, or you don't want to include the whole Mobiscroll library from NPM to your project, your other option is to use the Download Builder.
The Mobiscroll Download builder let's you customize packages by cherry picking the components, themes, custom themes and icon packs you actually need. This also helps to reduce the package size, thus speeding up the loading times of your apps.
Step 1. Download a custom package from the download page
Go ahead to the download page, select the components, themes and font icon packs you need, and hit the download button.
NOTE: If you have access to more frameworks (depending on your licenses) you should also select the Angular framework there.
After dowloading the package, unzip it and copy the
lib folder from the package to the
src folder of your Angular app.
Step 2. Configure your project using the CLI
$ npm install -g @mobiscroll/cli
The Mobiscroll CLI will do the rest of the work. It will add the Mobiscroll Module, to your
app.module.ts and it will also set up the stylesheets depending on which format you choose.
You can choose between
CSS or
SCSS depending on how much customization you need. If these terms don't tell you much, don't worry, just stick with the
CSS.
To start the setup, run the config command in your project's root folder with the
--no-npm flag:
$ mobiscroll config angular --no-npm
Step 3. Importing Mobiscroll to other modules (Optional)
When you have only one module (the
app.module.ts), the previous configuration process will add an import for the Mobiscroll module there, otherwise it will ask you to choose the Modules you want to use Mobiscroll in.
If you add more modules later, or decide that you need the mobiscroll components in a module you didn't add in the configuration phase, you can add the imports manually like this:
import { NgModule } from '@angular/core'; import { CommonModule } from '@angular/common'; import { MbscModule } from '@mobiscroll/angular'; @NgModule({ declarations: [], imports: [ CommonModule, MbscModule // <-- added the Mobiscroll Module to the imports ] }) export class NewModuleModule { }
At this point you should be able to use the Mobiscroll Components and Directives in your app.
Step 4. Test the setup with a simple example
You can add components to your templates, for example using the Date & Time component should look like this:
<mbsc-datepicker [(ngModel)]="birthday"></mbsc-datepicker>
@Component({ selector: 'my-app' }) export class AppComponent { birthday: Date = new Date(1988, 1, 28); }
You can check out our demos section for more usage examples.
Passing options
Every component can be tuned to your needs, by giving certain options to them. You can pass these options with the
[options] attribute.
Also, you can pass each option inidividually as an attribute to any component. Here's an example:
<div> <input [(ngModel)]="birthday" mbsc-datepicker [options]="birthdayOptions" /> <mbsc-datepicker [(ngModel)]="birthday" [options]="birthdayOptions">My Birthday</mbsc-datepicker> <mbsc-datepicker [(ngModel)]="birthday" [display]="birthDisplay" [theme]="birthTheme">My Birthday</mbsc-datepicker> </div>
@Component({ selector: 'my-app' }) export class AppComponent { birthday: Date = new Date(1988, 1, 28); birthdayOptions = { display: 'bottom', theme: 'ios' }; birthDisplay: 'bottom'; birthTheme: 'ios'; }
Calling instance methods
Sometimes you may want to access the component instances, to call their methods. For example you could show a component programmatically by calling the
.open() method of its instance.
Mobiscroll directives are exported as
"mobiscroll", so you can use them as template variables.
@Component({ selector: 'my-app', template: `<div> <-- using a directive --> <input [(ngModel)]="birthday" mbsc-datepicker # <button (click)="myPickerDirective.open()">Open Picker of Directive</button> <-- using a component --> <mbsc-datepicker [(ngModel)]="birthday" #myPickerComponent></mbsc-datepicker> <button (click)="myPickerComponent.open()">Open Picker of Component</button> </div> `, }) export class AppComponent { birthday: Date = new Date(1988, 5, 24); }
import { Component, ViewChild } from '@angular/core'; import { MbscDatepicker } from '@mobiscroll/angular'; @Component({ selector: 'my-app', template: `<div> <mbsc-datepicker [(ngModel)]="birthday" #myPickerComponent></mbsc-datepicker> <button (click)="myOpen()">Open</button> </div> `, }) export class AppComponent { @ViewChild('myPickerComponent', { static: false }) datepickerInstance: MbscDatepicker; open() { this.datepickerInstance.open(); } birthday: Date = new Date(1988, 5, 24); }. | https://docs.mobiscroll.com/5-12-1/angular/getting-started | 2021-11-27T09:24:20 | CC-MAIN-2021-49 | 1637964358153.33 | [] | docs.mobiscroll.com |
Optionally, each of your StorageGRID system's data center sites can be deployed with an Archive Node, which allows you to connect to a targeted external archival storage system, such as Tivoli Storage Manager (TSM).
After configuring connections to the external target, you can configure the Archive Node to optimize TSM performance, take an Archive Node offline when a TSM server is nearing capacity or unavailable, and configure replication and retrieve settings. You can also set Custom alarms for the Archive Node. | https://docs.netapp.com/sgws-114/topic/com.netapp.doc.sg-admin/GUID-EB99030A-F22A-4D13-8422-8006F4893738.html | 2021-11-27T09:46:37 | CC-MAIN-2021-49 | 1637964358153.33 | [] | docs.netapp.com |
$ oadm <action> <option>
This topic provides information on the administrator CLI operations and their syntax. You must setup and login with the CLI before you can perform these operations.
The
oadm command is used for administrator CLI operations, which is a symlink
that can be used on hosts that have the
openshift binary, such as a master. If
you are on a workstation that does not have the
openshift binary, you can also
use
oc adm in place of
oadm, if
oc is available.
The administrator CLI differs from the normal set of commands under the
developer CLI, which uses the
oc command, and
is used more for project-level operations.
The administrator CLI allows interaction with the various objects that are
managed by OpenShift Enterprise. Many common
oadm operations are invoked using the
following syntax:
$ oadm <action> <option>
This specifies:
An
<action> to perform, such as
new-project or
router.
An available
<option> to perform the action on as well as a value for the
option. Options include
--output.
Creates a bootstrap project template:
$ oadm create-bootstrap-project-template
Creates the default bootstrap policy:
$ oadm create-bootstrap-policy-file | https://docs.openshift.com/enterprise/3.2/cli_reference/admin_cli_operations.html | 2021-11-27T08:17:26 | CC-MAIN-2021-49 | 1637964358153.33 | [] | docs.openshift.com |
Introduction to Reports
Reports are a powerful tool to understand how users are interacting with your platform. You can quickly view Student progress in all of your Courses. This can help you spot patterns or potential bottlenecks, and redesign Course curriculum if needed. You can also get a clear view of your revenue, and use that information to drive your earnings forward.
The following LMS Reports are available for use: Overview, Course Insight, and Student Insight.
And the following eCommerce Reports are available: Yearly Sales, Monthly Sales, Product Breakdown, and Export Orders.
Each of these reports can help you quickly organize and visualize the data for both your Courses and your platform.
Understanding platform reports will require the following steps: | https://docs.academyofmine.com/article/183-introduction-to-reports | 2021-11-27T08:54:42 | CC-MAIN-2021-49 | 1637964358153.33 | [] | docs.academyofmine.com |
Worker System
Workers are created and executed in a parent-child hierarchy. A
WorkerSystem instance is always at the root of the hierarchy,
with an arbitrary number of
descendant workers and levels. The
WorkerSystem
instance also handles configuration settings,
logging, and the running of the worker system.
To create and run a worker system, at a minimum your code:
- Creates an instance of the WorkerSystem class, which is always at the root of the worker hierarchy
- Specifies what work the worker system should perform when it runs, which usually includes creating child workers
- Starts the worker system, in the foreground or the background
- When the worker system has completed, acts on the success or failure outcome
The following sections covers these steps in detail.
1. Create
WorkerSystem Instance
Creating the
WorkerSystem instance loads configuration settings, starts logging etc.,
but does not 'start' the worker system.
Most actionETL applications will run a single
WorkerSystem instance, but running
multiple instances, including in parallel, is also supported.
2. Specify Work
There are four ways to specify what the worker system should do, with the first one being the most common one:
a. Pass a callback to
Root()
- Delays the execution of the callback (and the creation of any workers) until the worker system runs
b. Add child worker(s) to the
WorkerSystem instance
- Creates workers before the worker system runs
c. Add callbacks to the
WorkerSystem instance
- Execute logic before any
Root()callback runs, or after any
Root()callback and children run
d. Create a custom reusable worker system type
- Inherit from WorkerSystemBase<TDerived> or WorkerSystem, and override RunAsync()
It's possible to combine approaches with the same worker system instance.
2a. Pass Callback to
Root()
Passing a lambda callback (which the system will later run, see Getting Started) to one of the Root(Action<WorkerSystem>) overloads is often the best choice, since it requires the least amount of code, avoids or delays creating workers as much as possible, and the visual structure of the code (i.e. indentation and blocks within curly braces) automatically reflects the structure of the worker hierarchy.
The callback receives a reference to its parent
WorkerSystem instance,
which e.g. allows access to configuration settings, and is needed to create child workers.
The callback code is regular .NET code, and will run without any particular restrictions.
This example checks if a file exists, and exits with a success or failure code. It uses the (optional) fluent coding style of chaining method calls:(); } } }
Alternatively, you can pass in an existing method by casting to the appropriate overload:
//using actionETL; //using System; private static void CreateWorkers(WorkerSystem workerSystem) { _ = new FileExistsWorker(workerSystem, "File exists", workerSystem.Config["TriggerFile"]); } public static void RunExample() { new WorkerSystem() .Root((Action<WorkerSystem>)CreateWorkers) .Start() .ThrowOnFailure(); }
The available overload casts are:
2b. Add Child Workers to
WorkerSystem Instance
When workers are created they are always added to a specific parent, which can be the
WorkerSystem instance or another worker:
public static void RunExample() { var workerSystem = new WorkerSystem(); _ = new FileExistsWorker(workerSystem, "File exists" , workerSystem.Config["TriggerFile"]); workerSystem .Start() .ThrowOnFailure(); }
While creating workers ahead of time is sometimes preferable, it does use up at least some resources earlier than otherwise. The difference in timing of the resource need is however usually negligible, since best practice when implementing a worker is to delay acquiring large resources until it actually runs.
2c. Add Callbacks to the
WorkerSystem Instance
It is sometimes convenient to add initialization and cleanup logic as separate callbacks, rather than embed it with child workers and start constraints:
var workerSystem = new WorkerSystem() .Root(ws => { _ = new FileExistsWorker(ws, "File exists", ws.Config["TriggerFile"]); // Create other child workers... }); bool skipProcessing = false; // Other logic... workerSystem .AddStartingCallback(ws => skipProcessing ? ProgressStatus.SucceededTask : ProgressStatus.NotCompletedTask) .AddCompletedCallback((ws, os) => { // Perform cleanup tasks... return OutcomeStatus.SucceededTask; }) .Start().ThrowOnFailure();
This can be particularly useful when adding callbacks from outside the worker system.
See AddStartingCallback(Func<TDerived, Task<ProgressStatus>>) and AddCompletedCallback(Func<TDerived, OutcomeStatus, Task<OutcomeStatus>>) for details.
2d. Create a Custom Reusable Worker System Type
Creating a custom worker system type can be useful to e.g.:
- Run the same complete worker system logic in different places, including across different applications
- Add properties and methods to the worker system, e.g. to surface aspects of the contained user logic
Inherit from WorkerSystemBase<TDerived> or (if you want to retain the
Root() overloads)
WorkerSystem. You must also implement RunAsync()
with your custom logic.
3. Start the Worker System
When started, the worker system:
- Runs the callback passed to
Root(), if any, which optionally creates workers
- Runs the children of
WorkerSystem, if any and their start constraints allow it
- For any worker that runs, recursively also attempt to run its children
The worker system completes when:
Root()and any other callbacks has completed, and
- No workers are running, and no new worker can be started
Worker Execution describes the execution phase in detail.
Foreground
Worker systems in console programs (or more exactly, if
SynchronizationContext.Current == null) can be started in the foreground using
Start(). When the method returns, the worker system
is completed, and the returned SystemOutcomeStatus value describes
the success or failure of the worker system:
var workerSystem = new WorkerSystem() .Root(ws => { // ... } ); workerSystem.Start().Exit();
Background
All types of programs (including console programs), irrespective of
SynchronizationContext.Current, can start the worker system in the
background by calling StartAsync(). It returns a
Task<SystemOutcomeStatus> (see Task<TResult>), and when this task
completes, the worker system is completed, with the task result as an SystemOutcomeStatus
that describes the success or failure of the worker system.
In an
async method, the worker system task is normally awaited with
await
(see Asynchronous programming with async and await
for background information).
Note
In C#7.1 and later,
the
Main() method of a console program can also be asynchronous and use
async.
In non-async methods, options include:
- Returning the
Taskfrom the method as in this example:
public static Task<SystemOutcomeStatus> RunExampleAsync() { return new WorkerSystem() .Root(ws => { _ = new FileExistsWorker(ws, "File exists", ws.Config["TriggerFile"]); }) .StartAsync(); }
Warning
The next two options should only be used at the top level of a synchronous program, otherwise dead-locks can occur. Async/Await - Best Practices in Asynchronous Programming covers this in depth.
- Calling
Task.Wait()and checking the
WorkerSystemStatus property
- Retrieving
Task<SystemOutcomeStatus>.Result, which automatically waits for the task to complete
4. Act on Success or Failure
The worker system catches and logs most exceptions, which means it's imperative to always check and act on the success or failure of the worker system after it has completed. Do this by retrieving and acting on the worker system status from one of the following:
- SystemOutcomeStatus from:
await myWorkerSystem.StartAsync().ConfigureAwait(false)
myWorkerSystem.Start()(in console programs only)
- WorkerParentStatus from:
myWorkerSystem.Statusproperty (also see Worker Execution)
OutcomeStatus and
WorkerParentStatus have similar members, e.g.:
IsSucceeded,
IsErroretc.
- Exit() to immediately exit your program with a success or failure exit code
- ThrowOnFailure() to throw an exception on any failure status | https://docs.envobi.com/articles/worker-system.html | 2021-11-27T09:36:35 | CC-MAIN-2021-49 | 1637964358153.33 | [] | docs.envobi.com |
Task Master uses a 7-day workweek. Saturdays and Sundays are not excluded from task schedules by default. In Task Master R3.0 and higher, you can define the workdays for your project. To configure workdays, edit the Task Master web part settings to select the days of the week your project should include:
The example below shows how a task is scheduled to exclude weekends.
Task Master is configured with Workdays of Monday through Friday. Task Duration is calculated based on Start Date and Due Date.
Create a new task with a Start Date and Due Date as shown below. Notice that the Due Date falls on a day that is not a workday.
- Start Date and Time: September 26, 2011 at 9 AM (Monday)
- Due Date and Time: October 1, 2011 at 5 PM (Saturday)
Click Recalculate. The Start and Due dates and times of the task become:
- Start Date and Time: September 27, 2011 at 9 AM (Monday)
- Due Date and Time: September 30, 2011 at 5 PM (Friday)
Since the workdays are Monday through Friday, Task Master ensures that your tasks are scheduled only on these days. In this example, the task Due Date is moved to the previous workday.
Return to Task Master Working Hours Settings | https://docs.bamboosolutions.com/document/configure_task_master_workdays/ | 2019-06-16T05:51:10 | CC-MAIN-2019-26 | 1560627997731.69 | [array(['/wp-content/uploads/2017/06/hw45-2010-workhours9-5.jpg',
'hw45-2010-workhours9-5.jpg'], dtype=object)
array(['/wp-content/uploads/2017/06/hw45-2010-exampleworkday2.jpg',
'hw45-2010-exampleworkday2.jpg'], dtype=object) ] | docs.bamboosolutions.com |
Create a choropleth map
This is a modified version of the Basic template style.
A new
filllayer called
state-datahas been added.
The source data for the
state-datalayer comes from a custom tileset that has been uploaded to Mapbox Studio.
States are styled using property expressions to style features across a data range.
About this style
- Tileset from custom data: The data that is used as the
sourcefor the
state-datalayer comes from a custom tileset that was created by uploading a GeoJSON file to Mapbox Studio. This data is borrowed from the Leaflet choropleth tutorial and contains data on population density across US states. The tileset itself contains the geometry for each state and two properties:
name(a string) and
density(a number). Read more about uploading data to Mapbox Studio in the Overview section.
- Styling with expressions: The
state-datalayer is styled using property expressions. In this case, property expressions are being used on the Color property to style features across a data range. The color of each feature will be determined based on its
density.
Related resources
Looking for more guidance? Read our Make a choropleth map, part 1: create a style tutorial. | https://docs.mapbox.com/studio-manual/examples/choropleth-map/ | 2019-06-16T05:52:17 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.mapbox.com |
This document describes the properties of a published OData resource. If you want a general overview of OData services, you can check the Published OData Services documentation.
Adding or editing a resource
The Add resource or Edit button in the Published OData Service document will open an editor window for the published resource. The editor window is separated in two parts: Resource and Uniform Resource Identifier. The former lets you specify which entity and entity attributes to expose, whereas the latter lets you customize the location where the resource will be published.
Another way to add a resource is by right-clicking on an entity in the domain model and clicking Expose as OData resource. Doing so will prompt you to select or create a published OData service document to add the new resource to. After the document has been selected, the published resource editor will be displayed.
Resource
Press the Select… button to open a dialog window that allows you to select an entity from the domain model to publish. Click on an entity in the displayed tree and press Select to confirm.
Selecting exposed attributes and associations
When an entity to publish has been selected, press the Select… button to open a dialog that allows you to select individual attributes to expose.
The System.ID attribute is used as a key in OData services and is therefore not allowed to be unchecked.
Attributes of published entities are nillable by default, meaning that if their value is empty then they will be encoded as explicit nulls in the OData content. If you deselect the nillable column, the attribute cannot be empty (otherwise a runtime error would occur).
Attributes of type binary are not allowed to be exported through OData services, other than the Contents field of System.FileDocument.
Mapping from internal names to exposed names.
Use the Exposed entity name field to customize the name of your resource that is exposed to the outside world. By default, the name is the same as the name of the exposed entity from your domain model. You can change this to any name which starts with a letter followed by letters or digits with a maximum length of 480 characters. Be aware however that the location URIs must be unique. Exposing two different resources on the same location will result in a consistency error.
This also applies to attributes and associations. In the ‘Exposed attributes and associations’ screen, you can also override the exposed name here.
When these names have been overriden, the name of the entity, attribute or association as defined in your domain model will not be exposed to the outside world, for all OData communication the exposed name will be used.
These features make it easier to refactor your domain model without affecting external APIs.
Use paging
If you enable this option, you can set a maximum number of objects per response, with a link included to the next set of objects. A client like Tableau can use this to show progress and will automatically keep following the links until all data is retrieved. Memory usage of clients can be improved if you use paging with a reasonable page size.
Note that enabling this does mean that retrieved data can be inconsistent, because you’re no longer retrieving data within a single transaction. For example, you are sorting on an Age attribute in an entity called Customer and retrieving customers with 1000 objects per page. Now a customer gets deleted in between two calls, it means that the customer with Age 23 at position 1001 now moves to position 1000, meaning that this object that you would have gotten on the next page now moves to the first page and is not retrieved anymore. The other way around with data inserts in between calls can cause you to see duplicates. So only use this option when this kind of inconsistency is acceptable.
Default value: No
Page size
When Use paging is set to Yes, you can set the amount of objects per page here.
Default value: 10000 | https://docs.mendix.com/refguide5/published-odata-resource | 2019-06-16T04:29:01 | CC-MAIN-2019-26 | 1560627997731.69 | [array(['attachments/12879490/13402542.png', None], dtype=object)
array(['attachments/12879490/13402466.png', None], dtype=object)] | docs.mendix.com |
Call Rest Action and JSON support in Mapping Documents are available since Mendix 6.6.0.
REST
Representational State Transfer (REST) is an approach to consume or expose resources. Over recent years it has gained popularity because of it’s simplicity, because no extensive schemas or contracts are needed to transfer data between endpoints. It uses
- HTTP URLs to locate resources,
- HTTP headers to authenticate and specify content types (such as XML or JSON)
- HTTP methods to identify operations on resources, such as GET (retrieve data) or POST (send data).
Lack of contracts and schemas give you an easy start using REST. Many REST endpoints return complex data however. The JSON Structure document helps with giving structure to JSON data: from an example JSON snippet, a lightweight schema is extracted that is used in Mapping Documents. An Import Mapping document converts JSON (or XML) to Mendix objects, and an Export Mapping document serializes Mendix objects to JSON (or XML).
JSON
JavaScript Object Notation (JSON) is a lightweight representation of data.
{ "name": "John Smith", "age": 23, "address": { "street": "Dopeylane 14", "city": "Worchestire" } }
Above the object ‘person’ is described with the corresponding values for the attributes ‘name’, ‘age’ and the referred object ‘address’.
Limitations
It is not possible to specify a timeout value.
Examples
How to consume REST natively with Mendix | https://docs.mendix.com/refguide6/consumed-rest-services | 2019-06-16T04:31:04 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.mendix.com |
contrib/geosites post_settings_root).
Activate geosites app¶
In order to use geonode in Multi-tenancy mode, follow these steps: 1. check in settings.py if ‘geonode.contrib.geosites’ in GEONODE_CONTRIB_APPS is rightly uncommented 2. add in settings.py ‘geonode.contrib.geosites’ in INSTALLED_APPS 3. run python manage.py syncdb
Adding New Sites¶
To add a new site follow the following steps:
- copy the directory site_template in your geonode-project folder and give it a name
- from geonode administration pannel add ‘new site’
- create a virtualhost in the webserver related to the new created site. Remember to setup the WSGIDeamonProcess with the name you gave to the folder created at point 1. and the path to the geosites directory. WSGIProcessGroup have to be pointed to the name you choose for the folder you created at point 1. Eventually, WSGIScriptAlias have to be set to the wsgi.py you have in your site folder.
- check the configuration files: local_settings.py, pre_settings.py, post_settings.py in /geonode-project as well as local_settings.py and settings.py in your site folder:
- in /geonode-project/local_settings.py set the variable SERVE_PATH. It has to point geosites folder.
- in the local_setting of the site folder insert the values to the following variables:
- SITE_ID
- SITE_NAME
- SITE_URL
- create static_root directory where you usually let the webserver serve webpages (e.g., /var/www ) and give it grants to be accessed by the user www-data
- create an uploaded/layers and uploaded/thumbs folder in your geonode-project folder and give them grants as follow:
- sudo mkdir -p geonode-project/uploaded/thumbs sudo mkdir -p geonode-project/uploaded/layers sudo chmod -Rf 755 geonode-project/uploaded/thumbs sudo chmod -Rf 755 geonode-project/uploaded/layers
- run python manage.py collectstatics - Pay attention on the folder where you are running the command and the folder structure of your geonode/geosites project, in case pass to the path the settings file by using –settings to the python command
- you can customize the look and feel of your site working on the css and html file you find in your template site directory. After a change, run again collectstatics command. | http://docs.geonode.org/en/stable/tutorials/advanced/geonode_production/geosites.html | 2019-06-16T04:49:40 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.geonode.org |
-
evaluating HTTP and TCP payload
The payload of an HTTP request or response consists of HTTP protocol information such as headers, a URL, body content, and version and status information. When you configure a default syntax expression to evaluate HTTP payload, you use a default syntax expression prefix and, if necessary, an operator.
For example, you use the following expression, which includes the
http.req.header(“<header_name>“) prefix and the exists operator, if you want to determine whether an HTTP connection includes a custom header named “myHeader”:
http.req.header("myHeader").exists
You can also combine multiple Advanced policy expressions with Boolean and arithmetic operators. For example, the following compound expression could be useful with various Citrix ADC features, such as Integrated Caching, Rewrite, and Responder. This expression first uses the && Boolean operator to determine whether an HTTP connection includes the Content-Type header with a value of “text/html.” If that operation returns a value of FALSE, the expression determines whether the HTTP connection includes a “Transfer-Encoding” or “Content-Length” header.
(http.req.header("Content-Type").exists && http.req.header("Content-Type").eq("text/html")) || (http.req.header("Transfer-Encoding").exists) || (http.req.header("Content-Length").exists)
The payload of a TCP or UDP packet is the data portion of the packet. You can configure Advanced policy expressions to examine features of a TCP or UDP packet, including the following:
- Source and destination domains
- Source and destination ports
- The text in the payload
- Record types
The following expression prefixes extract text from the body of the payload:
HTTP.REQ.BODY(integer). Returns the body of an HTTP request as a multiline text object, up to the character position designated in the integer argument. If there are fewer characters in the body than is specified in the argument, the entire body is returned.
HTTP.RES.BODY(integer). Returns a portion of the HTTP response body. The length of the returned text is equal to the number in the integer argument. If there are fewer characters in the body than is specified in integer, the entire body is returned.
CLIENT.TCP.PAYLOAD(integer). Returns TCP payload data as a string, starting with the first character in the payload and continuing for the number of characters in the integer argument.
Following is an example that evaluates to TRUE if a response body of 1024 bytes contains the string “https”, and this string occurs after the string “start string” and before the string “end string”:
http.res.body(1024).after_str("start_string").before_str("end_string").contains("https")
Note: You can apply any text operation to the payload body. For information on operations that you can apply to text, see Advanced policy expressions: Evaluating text.
About evaluating HTTP and TCP payload | https://docs.citrix.com/en-us/citrix-adc/13/appexpert/policies-and-expressions/advanced-policy-exp-parsing-http-tcp-udp/evaluating-http-and-tcp-payload.html | 2019-06-16T06:36:36 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.citrix.com |
Tracing from the .NET SDK with Couchbase Server
Use tracing from the SDK to discover timeouts and bottlenecks across the network and cluster.
For our example, we will customize the threshold logging tracer settings in the
ClientConfiguration.
By default it will log every minute (if something is found) and only sample the top 10 slowest operations.
The default threshold for key/value operation is 500 milliseconds.
We shall set them so low that almost everything gets logged - not something you should do in production!
var tracer = new ThresholdLoggingTracer { KvThreshold = 1, // 1 microsecond SampleSize = int.MaxValue, Interval = 1000 // 1 second };s from the travel-sample bucket and, if found, write them back with an
upsert - giving us both read and write operations to log.
// Connect var config = new ClientConfiguration(); config.Tracer = tracer; var cluster = new Cluster(); // connects to cluster using localhost cluster.Authenticate("username", "password"); // Load a couple of docs and write them back var bucket = cluster.OpenBucket("travel-sample"); for (var i = 0; i < 5; i++) { var result = bucket.Get<dynamic>("airline_1" + i); if (result.Success) { bucket.Upsert(result.Id, result.Value); } } Thread.Sleep(TimeSpan.FromMinutes(1));
Run the code, and you will see something like the following in the logs:
27 11:51:09,129 [4] INFO Couchbase.Tracing.ThresholdLoggingTracer - Operations that exceeded service threshold: [ { "service":"kv", "count":5, "top":[ { "operation_name":"Set", "last_operaion_id":"0x9", "last_local_address":"10.211.55.3:58679", "last_remote_address":"10.112.181.101:11210", "last_local_id":"8fe25176f5cb5068/7a424b9f2ab5e6a8", "last_dispatch_us":31153, "total_us":39299, "encode_us":6791, "dispatch_us":31153, "server_us":17290, "decode_us":909 }, { "operation_name":"Get", "last_operaion_id":"0xa", "last_local_address":"10.211.55.3:58678", "last_remote_address":"10.112.181.101:11210", "last_local_id":"8fe25176f5cb5068/1ca98582755c6f19", "last_dispatch_us":18205, "total_us":23802, "encode_us":7, "dispatch_us":18205, "server_us":280, "decode_us":1653 }, { "operation_name":"Get", "last_operaion_id":"0xb", "last_local_address":"10.211.55.3:58679", "last_remote_address":"10.112.181.101:11210", "last_local_id":"8fe25176f5cb5068/7a424b9f2ab5e6a8", "last_dispatch_us":1657, "total_us":1830, "encode_us":3, "dispatch_us":1657, "server_us":135, "decode_us":29 }, { , "encode_us":2, "dispatch_us":1373, "server_us":22, "decode_us":12 }, { "operation_name":"Get", "last_operaion_id":"0xd", "last_local_address":"10.211.55.3:58679", "last_remote_address":"10.112.181.101:11210", "last_local_id":"8fe25176f5cb5068/7a424b9f2ab5e6a8", "last_dispatch_us":4876, "total_us":5086, "encode_us":8, "dispatch_us":4876, "server_us":14, "decode_us":8 } ] } ], "dispatch_us":1373, "server_us":22, "decode_us":12 }
This tells us the following:
operation_name: The operation type, eg for KV it is the command type 'Get'.
last_operation_id: The last unique ID for the opeation (in this case the opaque value), useful for diagnosing and troubleshooting in combination with the last_local_id.
last_local_address: The local socket used for this operation.
last_remote_address: The remote socket on the server used for this operation. Useful to figure out which node is affected.
last_local_id: With Server 5.5 and later, this id is negotiated with the server and can be used to correlate logging information on both sides in a simpler fashion.
last_dispatch_us: The time when the client sent the request and got the response took around 1 millisecond.
total_us: The total time it took to perform the full operation: here around 1.5 milliseconds.
dispatch_us: The amount of time observed between sending the request over the network to when a response was received.
server_us: The server reported that its work performed took 22 microseconds (this does not include network time or time in the buffer before picked up at the cluster).
decode_us: Decoding the response took the client 12 microseconds.
You can see that if the thresholds are set the right way based on production requirements, without much effort slow operations can be logged and pinpointed more easily than before.
Timeout Visibility.
Previously, when an operation takes longer than the timeout specified allows, a
TimeoutException is thrown.
It usually looks like this:.
2018-06-27 12:26:13,755 [Worker#STA_NP] WARN Couchbase.IO.ConnectionBase - Couchbase.IO.SendTimeoutExpiredException: The operation has timed out. {"s":"kv","i":"0x1","c":"8d063f1e0b70ebb8/cd65184a378ae5fc","b":"travel-sample","l":"10.211.55.3:58754","r":"10.112.181.101:11210","t":5000} at Couchbase.IO.MultiplexingConnection.Send(Byte[] request) at Couchbase.IO.Services.IOServiceBase.Execute[T](IOperation`1 operation, IConnection connection) at Couchbase.IO.Services.IOServiceBase.EnableServerFeatures(IConnection connection) at Couchbase.IO.Services.IOServiceBase.CheckEnabledServerFeatures(IConnection connection) at Couchbase.IO.Services.PooledIOService..ctor(IConnectionPool connectionPool) at Couchbase.IO.Services.SharedPooled)
Now the timeout itself provides you valuable information like the local and remote sockets, and the operation id, as well as the timeout set and the local ID used for troubleshooting. You can take this information and correlate it to the top slow operations in the threshold log.
The
TimeoutException now provides you more information into what went wrong and then you can go look at the log to figure out why it was slow. | https://docs.couchbase.com/dotnet-sdk/2.7/tracing-from-the-sdk.html | 2019-06-16T05:21:04 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.couchbase.com |
Create customized expressions if none of the predefined expressions meet your requirements.
Expressions are a powerful string-matching tool. Ensure that you are comfortable with expression syntax before creating expressions. Poorly written expressions can dramatically impact performance.
When creating expressions:
Refer to the predefined expressions for guidance on how to define valid expressions. For example, if you are creating an expression that includes a date, you can refer to the expressions prefixed with "Date".
Note that a Trend Micro product follows the expression formats defined in Perl Compatible Regular Expressions (PCRE). For more information on PCRE, visit the following website:
Start with simple expressions. Modify the expressions if they are causing false alarms or fine tune them to improve detections.
There are several criteria that you can choose from when creating expressions. An expression must satisfy your chosen criteria before a Trend Micro product subjects it to a DLP policy. For details about the different criteria options, see Criteria for Customized Expression. | http://docs.trendmicro.com/en-us/enterprise/control-manager-60-service-pack-3/ch_ag_policy_mgmt/dlp_abt/digital_asset_definitions/dac_expressions/dac_expressions_customized.aspx | 2019-06-16T04:46:28 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.trendmicro.com |
Topics covered in this article:
- Trouble connecting to NetSuite/Connecting to the wrong NetSuite instance.
- Switching Ownership of a NetSuite Connection
- Change the Default Payable Account for Reimbursable Expenses
- Multi-Currency
- Tax
- Exporting Negative Expenses
- Company Card Expenses Exporting to the Wrong Account
- Reports Exporting as 'Accounting Approved' Instead of 'Paid in Full'
- Reports Exporting as 'Pending Approval'
- Not Pulling in All Customers
- Not Pulling in Categories
- Why Can't I Export Without a Category Selected?
- Where Can I Find My Expenses?
- Downloading NetSuite logs to assist with troubleshooting
Trouble connecting to NetSuite/Connecting to the wrong NetSuite instance
If you keep seeing errors regarding your NetSuite credentials or if you are trying to connect to production and it keeps sending you to the sandbox, check out this video for steps on how to get connected.
Connection Troubleshooting Video
Switching Ownership of a NetSuite Connection
When a user who "owns" the NetSuite connection (in the screenshot below, the owner is [email protected]) leaves your company, please follow these steps to switch to a new user:
1. Click Configure and make sure that you have saved your sync option settings somewhere for you to reference later on. This is really important!
2. Click on the Do not connect to NetSuite radio button. You will be asked if you want to disconnect; click Disconnect.
3. Click the Connect to Netsuite radio button.
4. Enter the email for the admin who will be taking over the account as the NetSuite User ID. Enter the NetSuite Account ID (which you get in NetSuite by going to Setup > Integration > Web Services Preferences).
5. Click on the Create a new NetSuite Connection button. You will be asked if you have completed the prerequisites, which you will have at this point. Click Continue.
6. You'll be directed to the NetSuite SSO page. Enter the email from step #4 and NetSuite password for that account.
7. You'll be directed to the below page in NetSuite. Click on View all roles and make sure that you are signed in under the Administrator role. Once you have confirmed this, click Sign out.
8. You'll be redirected back to Expensify, where you will setup the sync options using the previous settings that you noted in step 1; then save the connection.
If you run into any issues with the above, here are some additional troubleshooting steps:
- In NetSuite, go into the role of the current connection "owner"
- Click Edit > Access > Select any role other than Administrator > Save
- Click Edit > Access > Select Administrator role > Save
- Then go back and repeat steps #1-8 above.
Change the Default Payable Account for Reimbursable Expenses
NetSuite is set up with a default payable account that will be credited each time reimbursable expenses are exported as Expense Reports to NetSuite (as supervisor and accounting approved). In some cases you may want to change this to credit a different account.
To do this:
Navigate to Setup > Accounting > Accounting Preferences in NetSuite
Click Time & Expenses tab
Under Expenses section, there is a field for Default Payable Account for Expense Reports, choose the preferred account.
Click Save.
Multi-Currency
While we do support multi-currency with NetSuite, there are a few limitations.
The currency of the vendor record/employee record needs to match the currency on the subsidiary selected in your configuration in Expensify.
When expenses are created in one currency, and converted to another currency in Expensify before export, we do have the option to send over both original and output currencies for expense reports. This feature, called Export foreign currency amount, is found in the Advanced tab of your configuration.
If you export the foreign currency, we send over only the amount. The actual conversion is done in NetSuite.
During the bill payment sync, the bank account currency must match the subsidiary currency or you will receive an Invalid Account error.
Tax
When using a non-US version of NetSuite that requires tax tracking, we import tax groups (not tax codes) directly from NetSuite to apply to your expenses in Expensify.
For some locations, NetSuite has a default list of tax groups available but you can add your own and we will import those as well. DO NOT inactivate the NetSuite defaults as we use specific ones to export certain types of expenses.
For instance the Canadian default tax group CA-Zero is used when we export mileage and per diem expenses that don't have tax applied to them within Expensify. Not having this group active in NetSuite will cause export errors.
Some tax nexuses in NetSuite also have a couple of settings that have to be set to work correctly with the Expensify integration.
The field Tax Code Lists Include needs to be set to either Tax Groups or Tax Groups and Tax Codes.
The Tax Rounding Method field needs to be set to Round Off. While this won't cause errors with the connection, it can cause a difference in the amounts exported to NetSuite.
For more information on our tax tracking feature click here.
If your tax groups are importing into Expensify but not exporting back to NetSuite, make sure the each tax group has the correct subsidiaries enabled.
Exporting Negative Expenses
You can export negative expenses to NetSuite successfully as long as the total of the report itself is positive. Expense Reports, Vendor Bills, and Journal Entries all require a positive report total in order to export._12<<<<
Reports Exporting as 'Accounting Approved' Instead of 'Paid in Full'
This can occur for two reasons:
Cause: This error occurs when locations, classes, or departments are required in your accounting classifications but on the preferred bill payment form but are not marked as 'Show'.
Solution: In NetSuite, please go to Customization > Forms > Transaction Forms and find your Preferred (checkmark) Bill Payment form. Please Edit or Customize this form and under the Screen Fields > Main tab check 'show' near the department, class, and location options.
Cause: This can also be caused if the incorrect settings are applied in the policy configurations in Expensify.
Solution: Ensure that the Expensify policy configuration settings for NetSuite are:
- Config > Advanced > Sync Reimbursed Reports enabled and payment account chosen
- Config > Advanced > Journal Entry Approval Level = Approved for Posting
- Config > Advanced > A/P Approval Account - Must match the current account being used for bill payment.
To make sure the A/P Approval Account matches the account in NetSuite, go into your bill/expense report that is causing the error and click ‘Make Payment’
This account needs to match the account selected in your Expensify configuration.
Please make sure that this is also the account selected on the expense report by looking at the expense report list:
Reports Exporting as 'Pending Approval'
Reports exporting as Pending Approval rather than Approved for posting: Change approval preferences in NetSuite
Journal Entries/Vendor Bills
- In NetSuite, navigate to Setup > Accounting > Accounting Preferences
- On the "General" tab, uncheck "Require Approvals on Journal Entries" and
- On the "Approval Routing" tab, uncheck Journal Entries/Vendor Bills to remove the requirement for approval for Journal Entries created in NetSuite. Please note that this will apply to all Journal Entries created, not just those that are created by Expensify.
Expense Report
- In NetSuite, navigate to Setup > Company > Enable Features
- On the "Employee" tab, uncheck "Approval Routing" to remove the requirement for approval for Expense Reports created in NetSuite. Note that this selection will apply to purchase orders as well.
Not Pulling in All Customers
If only part of your customers list is importing from NetSuite to Expensify you'll need to make sure your page size is set to 1000 for importing your customers and vendors. Go to Setup > Integration > Web Services Preferences > 'Search Page Size'
Not Pulling in Categories
If you're having trouble importing your Categories, first make sure that they are setup in NetSuite as actual Expense Categories, not just General Ledger accounts.
- Logged into NetSuite as as administrator, go to Setup > Accounting > Expense Categories. A list of Expense Categories should be available.
- If no Expense Categories are visible click on "New" to create new Expense Categories.
If you have confirmed that your categories are setup as Expense Categories in NetSuite and they still aren't importing to Expensify, make sure that the subsidiary of the Expense Category matches the subsidiary selected in your connection settings:
Why Can't I Export Without a Category Selected?
When connecting to NetSuite, the chart of accounts is pulled in to be used as categories on expense. Each expense is required to have a category selected within Expensify in order to export.
Each category also has to be imported in from NetSuite, NetSuite!
Downloading NetSuite logs to assist with troubleshooting
Connection and export issues that cannot be resolved using our error guides may require additional information which can be contained in the NetSuite logs of each connection and export attempt.
To download the logs:
1. Search for Web Services Usage Log in NetSuite's global search
2. Set the filters for the date and time range of when the connection or export occurred
3. Set the filters for Document type, Action and/or Integration
For connecting your subsidiary to Expensify select Action = ssoLogin
For exported reports select Record Type = expenseReport/vendorBill/journalEntry and Action = Add
4. When you locate the action, scroll to the right of the screen and to the Request and Response columns. Click the View link under each which will download a log file which you can share with our support team at [email protected]
Still looking for answers? Search our Community for more content on this topic! | https://docs.expensify.com/articles/615117-netsuite-faq | 2019-06-16T04:40:29 | CC-MAIN-2019-26 | 1560627997731.69 | [array(['https://downloads.intercomcdn.com/i/o/56543712/9cf433403189cad97b9285e2/File1490214442690.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/56544001/07e900405480ef4abe8cce76/File1490214505443.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/56544060/b460c5aadcb6f98112d9739b/File1490214538598.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/56544354/32b777a1fda86b4c8db4e0e4/A.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/56544389/2792909e2e9841dfa6b6805e/B.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/56544483/fb10b11930eed998e824bd9e/clipboard_720.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/56544653/1254ac1dfea3f63c0e669bb1/Expensify_-_Policy_Editor-2.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/56544670/03c323c58d72a1c90bc3f803/Vendor_-_NetSuite__Expensify_Coaches_.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/56544683/fc2183a88a3a18ba10b09361/Employee_-_NetSuite__Expensify_Coaches_.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/56544757/fcdbeddcefcc034917224ab9/Expensify_-_Policy_Editor-2.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/56544881/69f22a5a3650e0e41e655870/Tax_Groups_-_NetSuite__Expensify_Coaches_.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/56544932/4deb540515653c4eab0f1e65/Set_Up_Taxes_-_NetSuite__Expensify_Coaches_.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/57122298/8841c83e4b279f420a9f7c1c/image.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/56545037/aba111e360877c05b811b69e/image.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/56545103/30ac413c85c16e383d25dd8b/image.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/65396573/eb2cbf85dd3a4dd7018fcd7d/image.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/56545268/6a3d684934f18e676bfd3a69/File1490216514805.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/56545323/d7d3809a8306c547fe4d98fa/File1490216623943.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/56545361/1c75f5aefabb20be5d6fb10e/File1490216685984.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/56545571/adbe4f2252b524af91eec66e/29140667-468a2c88-7d19-11e7-86ae-2a41dab05a20-1.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/78198433/631aedc2a82025a0a9f26f77/image.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/56545914/34f7b624c9f9bcf3ce80bfc5/Cortney_Expenses_-_Report.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/66966595/ea144bdcf06e78c2d483301c/2018-07-09_14-36-40.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/56546083/58a72f3892b44f06f3782743/Expensify_-_Policy_Editor.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/59784330/4f5231681a2807313459595a/image.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/59784502/871489cf4c833e1c75d80626/image.png',
None], dtype=object) ] | docs.expensify.com |
[−][src]Crate obofoundry
Structures to deserialize OBO Foundry listings into.
This library provides structs that can make use of
serde_yaml and
serde_json to deserialize the table of ontologies provided by the
OBO Foundry. It provides a safe and efficient alternative to manually
parsing the obtained JSON/YAML, which is actually quite tricky since
there is no actual scheme available. Use the
Foundry
type as an entry point for deserialization.
Example
Download the
ontologies.yml table from the OBO Foundry and use it to
extract all PURLs to ontologies in the OBO format:
extern crate obofoundry; extern crate reqwest; extern crate serde_yaml; use std::io::Read; const URL: &'static str = ""; fn main() { let mut res = reqwest::get(URL).expect("could not get YAML file"); let mut yml = String::new(); res.read_to_string(&mut yml).expect("could not read response"); let foundry: obofoundry::Foundry = serde_yaml::from_str(&yml).unwrap(); for ontology in &foundry.ontologies { for product in &ontology.products { if product.id.ends_with(".obo") { println!("{} - {}", product.id, product.ontology_purl) } } } }
Changelog
This project adheres to Semantic Versioning and provides a changelog in the Keep a Changelog format. | https://docs.rs/obofoundry/0.2.0/obofoundry/ | 2019-06-16T05:15:45 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.rs |
Educloud Alliance Technical Documentation
Educloud Alliance is creating and maintaining a standard which defines how it is possible to build a ecosystem of users, developers and service providers, which brings together learning management systems, content management systems, content producers, administrative systems and all other services to make it possible for everybody to freely discover, choose, buy, compare and use everything.
Principles
Principles are the key guidance when contributing and defining the standard. By following the principles one can help the standard to meet the goals set.
- User point of view.
- The main principle of the standard is to recognize and appreciate the goals of users. By understanding the motivations and rationales behind users’ actions it is possible to build environments that fulfill users’ needs and bring joy of use to their daily life.
- Everything is open and free.
- Everything is open and free for everyone to use everywhere.
- Interoperability.
- Not only people, but also services can discuss with each other, change information and thus build knowledge. Make it possible for every other service in the ecosystem to connect with your service. And bring more value by connecting your service to other services.
- Be committed.
- Ecosystem includes various actors with freedom to build and maintain their own services for their own purposes. With this freedom comes the responsibility to be committed to follow the ECA standard.
- Excellence.
- Create and contribute to a believable and viable standard which is easy to understand, implement and it convinces as many people as possible.
- Use existing.
- Use as much as possible existing documentation, interfaces, and field tested technologies. Make your own contribution reusable.
- Offer reference implementation.
- Reference implementation with written documentation is the ultimate proof that the idea works and is possible to implement.
RFC 2119 is used to define common vocabulary for requirements in the documentation.
Structure of documentation
Documentation consists of four levels presented in the image: Stories, Services, Interfaces and Infrastructure.
Stories are the highest level of documentation. Stories do not describe any technical solution, but focus on explaining the rationale. Why something needs to be done, and how it should work from the users’ perspective. A story does not describe a service as such, but the user, the usage environment, and the usage flow.
Services documentation tells what functionality is expected from a service, and how it relates to the stories. Services must work together to form functionality described in the stories. Services should adhere to common user experience guidelines which ensure that the user feels safe and knows the path between services.
A reference implementation should be included for testing the interfaces and related services built according to the documentation. This reference implementation is part of standard’s documentation, and available for all parties.
Interfaces are there for creating pleasing user experiences when several services are needed to meet the goals of the user. Interfaces enable service-to-service communication and data sharing, which helps creating seamless experiences to the user.
Infrastructure documentation is the lowest level of documentation describing the whole service architecture of ECA standard. It also contains the best practices guidance for hosting the services and the system.
Stories
A story shows a snapshot of user’s life with the system. It gives insight into what kind of people will use the system e.g. what kind of teacher, student or other person from the school world. And why they are using it. The key is to know your users!
The story tells the user’s problem or the goal, which the user wants to achieve. From the story, reader gets an idea of how the system can help the user and what kind of features there are. But it does not specifically tell what technology is used.
The story takes place in the right context i.e. for example in an environment such as school class, home office, or the bus, and describes the tools and capabilities e.g. tablet, mobile phone, limited network access. If relevant to the system, story may also tell about social context i.e. situations and communication of people with each other, or with some other systems.
The context of use, characteristics of the user and the goals help to identify design requirements for the service. From the story you can see, what other services you may need in order to fulfill user’s needs properly.
Stories do not tell you what kind of user interface you should build, or what kind of technical solutions should be used, as it leaves them open. Only interest is on whether the story is met with the final product or not, regardless of means.
For the user the whole system should look and feel coherent and there should not be inconsistencies or places where the actor does not know where in the overall system she currently is.
Following stories are identified to be the core stories of ECA’s standard.
- Single Sign-on
- Authentication and identification of the user should happen only once when she begins the session. Moving between services should be seamless.
- Using learning materials
- User must be able to use any material the way she wants.
- Procurement and license management
- Procurement of material and services should be easy and fair to everybody.
- Curriculum
- User should always know that what she is doing is in line with the curriculum.
- Analytics and feedback loops
- User should see her progress in real time and it must be possible to build feedback loops in all levels of the system.
Services
The standard is based on a service oriented architecture where functionality is split to services. The services defined in the standard are implementing the stories defined in the standard.
The standard must be accompanied by reference implementation which shows in practice how the standard is meant to be working. The reference implementation is not meant to be production system and it is not designed as such.
It must be possible to have multiple instances of all services when in production. It is up to the production system to define how many instances of different services are available to the users.
User authentication, identification and profile data
Authentication is considered separate from other services. All services need to know something about the user. Different services need different data about the user, but all of them need to authenticate and/or identify the user in some way.
These are the services which together form the basis of authentication and identification of users.
- Auth Proxy
- Common interface for services to use different Auth Sources. Provides single sign-on for services.
- Auth Source
- Authenticates the user when the user wants to open a session in one of the services. Auth Sources are handled by the Auth Proxy.
- Connector
- Connects user authentication source and user identity together. This makes it possible for the user to identify with multiple authentication sources and still have only one identity. Only the authentication source knows the credentials for the user.
- Data
- Common source of user data to all other services. Mainly used by the connector to query users and store the connection between authentication source and user identity.
Learning material
Handling learning material is focused in three key service types. Learning material is produced by the CMS and used in the LMS. Bazaar is mediating between them and allowing many-to-many connections freely between them.
- Bazaar
- Service which lets the user to browse and buy material from CMS to LMS.
- Recipes
- Service which builds collections of learning materials.
- Learning management system
- Service which consumes the content produced by the Content management systems.
- Content management system
- Service which produces content in some form.
Interfaces
The interfaces can be thought of as a highway of information which is flowing between services.
Services are publishing interfaces for other services to use. Interface definitions must be open to everybody, but the use and authorization of interface access is defined runtime by the services.
Services may have more interfaces than what is defined in the standard. These interfaces are not bound by the standard.
Infrastructure
The standard would not be complete without defining how the system as a whole is working and how the reference implementation is built. The production system can be different.
Contributions
If you want to contribute to ECA put your contributions in the open and begin the discussion how your contribution could benefit ECA and everybody else.
Read more about contributions. | http://docs.educloudalliance.org/en/latest/ | 2019-06-16T04:42:54 | CC-MAIN-2019-26 | 1560627997731.69 | [array(['_images/standard.png', '_images/standard.png'], dtype=object)
array(['_images/services.png', '_images/services.png'], dtype=object)
array(['_images/bus.png', '_images/bus.png'], dtype=object)] | docs.educloudalliance.org |
In Task Master R3.0 and higher, holidays can be excluded from task scheduling. To exclude holidays, first configure a SharePoint Calendar list with the holidays observed by your project team. Holidays must be configured as Yearly events; Task Master will ignore any other events on the Calendar.
After your holiday list is created, reference the list in Task Master:
Task Master is configured with a holiday list that includes a 2-day Bank Holiday on July 22nd and July 23rd. Task Duration is calculated based on Start Date and Due Date.
Create a new task with a Start Date and Due Date as shown below. Notice that the Due Date falls on the holiday.
- Start Date and Time: July 8, 2013 at 8 AM
- Due Date and Time: July 22, 2013 at 5 PM
Click the Recalculate button on the toolbar. The Start and Due dates and times of the task become:
- Start Date and Time: July 8, 2013 at 9 AM
- Due Date and Time: July 19, 2013 at 5 PM
Since Monday, July 22nd and Tuesday, July 23rd are holidays in this example, Task Master ensures that your tasks are not scheduled on those days. The task Due Date is moved to the previous workday which is not a holiday.
Return to: Task Master Working Hours Settings | https://docs.bamboosolutions.com/document/configure_task_master_holiday_list/ | 2019-06-16T05:55:05 | CC-MAIN-2019-26 | 1560627997731.69 | [array(['/wp-content/uploads/2017/06/HolidayTask.jpg', 'HolidayTask.jpg'],
dtype=object)
array(['/wp-content/uploads/2017/06/HW45_Config_WorkHours_008.png',
'HW45_Config_WorkHours_008.png'], dtype=object) ] | docs.bamboosolutions.com |
When we export a flow, credentials of any service integration that is used in the flow are also exported by default. For example, if we have a flow that uses Salesforce for authentication, our Salesforce credentials will also get exported as part of the flow.
If we want to export the flow, but not the service credentials, here is how we can do it:
- Publish the flow.
- Note the Flow ID. If we want to export a specific version of the flow, we note the Flow Version ID as well. (If the Version ID parameter is not provided, the latest version of the flow will get exported).
- Click Home and select API.
This opens the Flow API editor.
- In the URL field, copy-paste api/package. This pulls up a list of API endpoints.
- Select the api/package/1/flow/{id}?nullPasswords={null_passwords} endpoint.
If we want to export a specific version of the flow, we select api/package/1/flow/{id}/{version_id}?nullPasswords={null_passwords}.
- Replace {id} with the Flow ID that we copied. (We will replace {version_id} with the Version ID of the flow if we are exporting a version of the flow.)
- Replace {null_passwords} with true
- Click GET. This populates the Response column with a response body containing a string with the flow and all its dependencies as a package. We also get an alert message that says GET to api/package/1/flow/c4b273fd-231b-49f2-b1f2-843f3a768dc6?nullPasswords=true completed successfully.
- Copy the text from the Response column, paste it to a text editor, and save it.
We can import this file to a tenant, and populate it with new service credentials. | https://docs.manywho.com/exporting-a-flow-without-service-password/ | 2019-06-16T05:24:56 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.manywho.com |
By default, the connectionless User Datagram Protocol (UDP) is used to send Communicator Heartbeat from managed product to the Control Manager server.
Making incorrect changes to the configuration file can cause serious system problem. Back up TMI.cfg to restore your original settings.
Set all TMI.cfg in your Control Manager network (server and agents) to the same security level value (AllowUDP). Otherwise, the server and agent communication will not work. | http://docs.trendmicro.com/en-us/enterprise/control-manager-60/ch_ag_agent_mgmt/tmcm_ext_com_port_modify_about/communicator_heartbeat_modify.aspx | 2019-06-16T04:33:39 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.trendmicro.com |
The OfficeScan client can build the digital signature and on-demand scan cache files to improve its scan performance. When an on-demand scan runs, the OfficeScan client first checks the digital signature cache file and then the on-demand scan cache file for files to exclude from the scan. Scanning time is reduced if a large number of files are excluded from the scan. | http://docs.trendmicro.com/en-us/enterprise/control-manager-60/ch_policy_templates/osce_client/client_priv_sett_all/sc_pv_cache.aspx | 2019-06-16T04:55:00 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.trendmicro.com |
Install plugins
To activate a plugin, follow these steps:
- Log in to the application as an administrator.
- Select the “Editing and Plugins” button.
- Select the “Plugins” tab.
- On the resulting page, find the plugin(s) you wish to activate and click the “Enable” checkbox.
Click the “Apply” button to save your changes. | https://docs.bitnami.com/oci/apps/tiki-wiki-cms-groupware/configuration/install-plugins/ | 2019-06-16T06:25:45 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.bitnami.com |
Contents Now Platform Capabilities Previous Topic Next Topic Survey scorecard average ratings Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Survey scorecard average ratings The Average Ratings view displays the weighted average rating for each survey question in a category. scorecard as an imageRelated ReferenceSurvey scorecard category resultsSurvey scorecard question resultsSurvey scorecard history On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/istanbul-servicenow-platform/page/administer/survey-administration/reference/r_SurveyScorecardAverageRatings.html | 2019-06-16T05:21:48 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.servicenow.com |
Start Outbreak Prevention Mode to apply the policy that corresponds to the virus outbreak. After Control Manager has entered Outbreak Prevention Mode, you can evaluate product-setting recommendations from Trend Micro and modify them to suit your network. Policies implement product settings that block known virus-entry points.
When TrendLabs deploys an Outbreak Prevention Policy, it is very likely that they are still testing the appropriate virus pattern. The Outbreak Prevention Policy settings, therefore allow you to protect your network during the critical period before TrendLabs releases a new pattern.
Before you start Outbreak Prevention Mode, set outbreak recipients and the notification method in the Event Center. | http://docs.trendmicro.com/en-us/enterprise/control-manager-60-service-pack-1/ch_ag_tm_services/opm_about/opm_step3_start.aspx | 2019-06-16T05:30:49 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.trendmicro.com |
Topics covered in this article:
- Customers Not Showing as Tags
- Credit Card Expenses Say 'Credit Card Misc.' Instead of the Merchant
- Categories Aren't Importing/Showing in Expensify
- Company Card Expenses Exporting to the Wrong Account
- Multi-Currency
- Tax
- Exporting Negative Expenses
- Why Can't I Export Without a Category Selected?
- Where Can I Find My Expenses?
Customers Not Showing as Tags
Xero only pulls in customers from the Xero contact list, not just contacts. In order for a contact to become a "customer" in Xero, it will need to have an invoice associated with it. If this is the first time a customer has been used in Xero, you can create a fake invoice for that customer, void it out, and then re-sync your connection to Xero in Expensify in order to then pull in this new "customer".
To do this, in Xero, click into the contact, then select New > Sales Invoice:
Fill out the few required pieces of information, save the invoice, and then go ahead and void it. Click Invoice Options > Void.
Your customer will now show up as a customer. Re-sync your Xero connection in Expensify and you should then see the customers appear in your tag list.
Credit Card Expenses Say 'Credit Card Misc,' Instead of the Merchant
Where the merchant in Expensify is an exact match to a contact you have set up in Xero then exported credit card expenses will show the vendor name. If not we use the the default name Expensify Credit Card Misc. This is done to prevent multiple variations of the same contact (e.g. Starbucks and Starbucks #1234 as is often seen in credit card statements) being created in Xero.
To change merchant names to match your vendor list in Xero, we recommend using our Expense Rules feature. More information on this can be found here.
Categories Aren't Importing/Showing in Expensify
First, if your categories are showing up correctly in the policy editor in Expensify, then you will need to make sure that they are not disabled. If they are disabled, your employees will not be able to use these categories in Expensify. So, just enable all of the categories that you want your employees to use.
Second, make sure your expense categories have actually been imported from Xero, and have not been manually created in Expensify. In the example below, the category with the Xero icon has been imported from Xero, while the category without the Xero icon has not been imported from Xero, and should be disabled (as using it will prevent a report from exporting).
Third, to display a Xero category in Expensify, it will need to be an Expense accounttype, or have Show in Expense Claims enabled prior to syncing your Xero connection in Expensify. individual GL accounts in Domain Control, you still need to select the make sure you have selected an account for all other Non-Reimbursable transactions, as well as to define the "default" from Settings > Policies > [Policy Name]> Connections > Configure.
- Settings > Domain Control > [Domain Name] > Company Cards:
> Configure must be a domain admin as well.
If the report exporter is not a domain admin, all company card expenses will export to the default account selected in the Non-Reimbursable section of your Export configuration settings under Settings > Policies > .
Multi-Currency
When using multi-currency in Xero and exporting reimbursable expenses, the bill will use the output currency in your Expensify policy, as long as it is enabled in Xero.
Your general ledger reports will convert to the home currency in Xero using currency exchange rates, as set in Xero.
For non-reimbursable expenses, the bank transactions will use the currency on the bank account in Xero, regardless of the currency in Expensify.
If these currencies do not match, we use a 1:1 exchange rate, so make sure the output currency in Expensify matches the currency on the bank account in Xero. > [Policy Name] > Tax.
The defaults selected on the tax page will be applied to all Expenses but the user will have the option to select a tax rate on each expense.
To learn more about our tax tracking feature and how it is applied to your reports, click here.
Exporting Negative Expenses
With reimbursable expense reports, the total of the report needs to be positive in order to export to Xero successfully. Individual reimbursable expenses can be negative as long as the total of the report itself is positive. Negative non-reimbursableexpenses will export without issue, even if the total of the report is negative.
Why Can't I Export Without a Category Selected?
When exporting to Xero, each expense has to have a category selected.
The selected category has to be imported in from Xero Xero based upon the settings you have configured in the Connections area of the policy settings. You can learn more about how to configure this connection here.
Still looking for answers? Search our Community for more content on this topic! | https://docs.expensify.com/articles/741195-xero-faq | 2019-06-16T05:18:36 | CC-MAIN-2019-26 | 1560627997731.69 | [array(['https://downloads.intercomcdn.com/i/o/103520539/cc77d81cfa7991bcb332f812/image.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/103517967/9ca8528c8a8986fd3b271556/image.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/103517838/d50261e255e42185b4931110/image.png',
None], dtype=object) ] | docs.expensify.com |
Call Chain Visualization
Call Chain Visualization provides you an interactive view of the entire script call chain ... the scripts it calls, and those that call it, as far as we can reach. You can reach this visualization by navigating to the script you want to visualize look list of queries specific to that script for "Call Chain Diagram". Select it.
Once you do 2 trees will be drawn below:
- one for upstream scripts from the selected script, and
- one for downstream scripts from the selected script
- Click-and-drag to reposition the chart in the window.
- Scroll/pinch to zoom in and out
- Click on a shaded dot to collapse/expand the child nodes
- Click on the name of a script to drill down into that script in the FMPerception hierarchy. You can drill back up the hierarchy to return to the chart.
- Mouse over a script to see the full name, source file, and the number of times it appears on this chart.
- Scripts that appear more than once will be displayed with a thicker circle, and if you mouse over one of these scripts, every occurrence of that script on the diagram will light up.
- A star will appear next to scripts that run with full access privileges.
- A folder will appear next to scripts that exist in a different file from the file containing the original script.
Because of the possibility of scripts calling themselves (or calling another script that calls the original script), we can't always display all of a script's children, every time it appears on the diagram. So, if a script appears on the diagram more than once, one of those dots will display all of its children. Scripts that appear more than once will be displayed with a thicker circle, and if you mouse over one of these scripts, every occurrence of that script on the diagram will light up. Check it out. It makes it pretty easy to follow the thread of a looping/recursive script sequence without filling the screen with many copies of the same tree.
Here is what it looks like.
| https://docs.fmperception.com/article/557-call-chain-visualization | 2019-06-16T04:39:57 | CC-MAIN-2019-26 | 1560627997731.69 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/56f819f89033601eb6736795/images/580284d9c697915a23d79ac0/file-LKpYq2C8xI.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/56f819f89033601eb6736795/images/5802859bc697915a23d79ac4/file-Vfu0QgtwfR.png',
None], dtype=object) ] | docs.fmperception.com |
Splunk Enterprise version 5.0 reached its End of Life on December 1, 2017. Please see the migration information.
This documentation does not apply to the most recent version of Splunk. Click here for the latest version.
app.conf
The following are the spec and example files for app.conf.
app.conf.spec
# Version 5.0.18 # #. Icons are recommended, although not required. # Screenshots are optional. # # There is no setting in app.conf for these images. Instead, icon and # screenshot images should be placed in the appserver/static dir of # your app. They will automatically be detected by Launcher and Splunkbase. # # For example: # # <app_directory>/appserver/static/appIcon.png (the capital "I" is required!) # <app_directory>/appserver/static/screenshot.png # # An icon image must be a 36px by 36px PNG file. # An app screenshot must be 623px by 350px PNG file. # # is_manageable = true | false * Indicates if Splunk Manager should be used to manage this app * Defaults to true * This setting is deprecated. label = <string> * Defines the name of the app shown in the Splunk GUI and Launcher * Recommended length between 5 and 80 characters. * Must not include "Splunk For" prefix. * Label is required. * Examples of good labels: IMAP Monitor
# Version 5.0.18 # #: 5.0.18
Feedback submitted, thanks! | https://docs.splunk.com/Documentation/Splunk/5.0.18/Admin/Appconf | 2019-06-16T05:15:44 | CC-MAIN-2019-26 | 1560627997731.69 | [array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)] | docs.splunk.com |
5. The Rename Stage¶
The rename stage maps the ISA (or logical) register specifiers of each instruction to physical register specifiers.
5.1. The Purpose of Renaming¶
Renaming is a technique to rename the ISA (or logical) register specifiers in an instruction by mapping them to a new space of physical registers. The goal to register renaming is to break the output- (WAW) and anti-dependences (WAR) between instructions, leaving only the true dependences (RAW). Said again, but in architectural terminology, register renaming eliminates write-after-write (WAW) and write-after-read (WAR) hazards, which are artifacts introduced by a) only having a limited number of ISA registers to use as specifiers and b) loops, which by their very nature will use the same register specifiers on every loop iteration.
5.2. The Explicit Renaming Design¶
Fig. 5.1 A PRF design (left) and a data-in-ROB design (right)
BOOM is an “explicit renaming” or “physical register file” out-of-order core design. A physical register file, containing many more registers than the ISA dictates, holds both the committed architectural register state and speculative register state. The Rename Map Tables contain the information needed to recover the committed state. As instructions are renamed, their register specifiers are explicitly updated to point to physical registers located in the physical register file. [1]
This is in contrast to an “implicit renaming” or “data-in-ROB” out-of-order core design. The Architectural Register File (ARF) only holds the committed register state, while the ROB holds the speculative write-back data. On commit, the ROB transfers the speculative data to the ARF. [2]
5.3. The Rename Map Table¶
Fig. 5.2 The Rename Stage. Logical register specifiers read the Map Table to get their physical specifier. For superscalar rename, any changes to the Map Tables must be bypassed to dependent instructions. The physical source specifiers can then read the Busy Table. The Stale specifier is used to track which physical register will be freed when the instruction later commits. P0 in the Physical Register File is always 0.
The Rename Map Table holds the speculative mappings from ISA registers to physical registers.
Each branch gets its own copy of the rename Map Table. [3] On a branch mispredict, the Map Table can be reset instantly from the mispredicting branch’s copy of the Map Table.
As the RV64G ISA uses fixed locations of the register specifiers (and no implicit register specifiers), the Map Table can be read before the instruction is decoded! And hence the Decode and Rename stages can be combined.
5.3.1. Resets on Exceptions and Flushes¶
An additional, optional “Committed Map Table” holds the rename map for the committed architectural state. If enabled, this allows single-cycle reset of the pipeline during flushes and exceptions (the current map table is reset to the committed Map Table). Otherwise, pipeline flushes require multiple cycles to “unwind” the ROB to write back in the rename state at the commit point, one ROB row per cycle.
5.4. The Busy Table¶
The Busy Table tracks the readiness status of each physical register. If all physical operands are ready, the instruction will be ready to be issued.
5.5. The Free List¶
The Free List tracks the physical registers that are currently un-used and is used to allocate new physical registers to instructions passing through the Rename stage.
The Free List is implemented as a bit-vector. A priority decoder can then be used to find the first free register. BOOM uses a cascading priority decoder to allocate multiple registers per cycle. [4]
On every branch (or jalr), the rename Map Tables are snapshotted to allow single-cycle recovery on a branch misprediction. Likewise, the Free List also sets aside a new “Allocation List”, initialized to zero. As new physical registers are allocated, the Allocation List for each branch is updated to track all of the physical registers that have been allocated after the branch. If a misspeculation occurs, its Allocation List is added back to the Free List by OR’ing the branch’s Allocation List with the Free List. [5] | https://docs.boom-core.org/en/latest/sections/rename-stage.html | 2019-06-16T04:30:39 | CC-MAIN-2019-26 | 1560627997731.69 | [array(['../_images/prf-and-arf.png', 'PRF vs Data-in-ROB design'],
dtype=object)
array(['../_images/rename-pipeline.png', 'The Rename Stage'], dtype=object)] | docs.boom-core.org |
Adding Security
In this lesson you’ll learn how to add security to your Couchbase Mobile application. You’ll implement authentication and define access control, data validation, and access grant policies.
User Authentication
Install Sync Gateway
Users are created with a name/password on Sync Gateway which can then be used on the Couchbase Lite replicator to authenticate as a given user. You can create users by hardcoding the user’s name/password in the configuration file. Create a new file called sync-gateway-config.json with the following.
{
  "log": ["HTTP", "Auth"],
  "databases": {
    "todo": {
      "server": "walrus:",
      "users": {
        "user1": {"password": "pass", "admin_channels": ["user1"]},
        "user2": {"password": "pass", "admin_channels": ["user2"]},
        "mod": {"password": "pass", "admin_roles": ["moderator"]},
        "admin": {"password": "pass", "admin_roles": ["admin"]}
      },
      "roles": {
        "moderator": {},
        "admin": {}
      }
    }
  }
}
Try it out
Download Sync Gateway from the Couchbase downloads page.
Unzip the file and locate the executable at ~/Downloads/couchbase-sync-gateway/bin/sync_gateway.
Start it from the command-line with the config file.
$ /path/to/sync_gateway sync-gateway-config.json
PS 'C:\Program Files (x86)\Couchbase\sync_gateway.exe' sync-gateway-config.json
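Once started, Sync Gateway listens on port 4984 (the public REST API) and port 4985 (the admin REST API). As a quick sanity check (not part of the original steps), you can query the root of the public port, which returns a short JSON greeting with the version, and list the users created from the config file through the admin port:
$ curl http://localhost:4984/
$ curl http://localhost:4985/todo/_user/
The second command should return the user names defined in sync-gateway-config.json (user1, user2, mod, admin).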
Replications with Authentication
With Sync Gateway users defined you can now enable authentication on the Couchbase Lite replicator. The code below creates two replications with authentication.
// This code can be found in AppDelegate.swift
// in the startReplication(withUsername:andPassword:) method
pusher = database.createPushReplication(kSyncGatewayUrl)
pusher.continuous = true
NotificationCenter.default.addObserver(self, selector: #selector(replicationProgress(notification:)),
                                       name: NSNotification.Name.cblReplicationChange, object: pusher)

puller = database.createPullReplication(kSyncGatewayUrl)
puller.continuous = true
NotificationCenter.default.addObserver(self, selector: #selector(replicationProgress(notification:)),
                                       name: NSNotification.Name.cblReplicationChange, object: puller)

// Authenticate the replications as the user that logged in (explained below)
let authenticator = CBLAuthenticator.basicAuthenticator(withName: username, password: password)
pusher.authenticator = authenticator
puller.authenticator = authenticator

pusher.start()
puller.start()
The
CBLAuthenticator class has static methods for each authentication method supported by Couchbase Lite.
Here, you’re passing the name/password to the
basicAuthenticatorWithName method.
The object returned by this method can be set on the replication’s
authenticator property.
Try it out
Set kSyncEnabled and kLoginFlowEnabled to true in AppDelegate.swift.
let kSyncEnabled = true
let kLoginFlowEnabled = true
Build and run.
Now log in with the credentials saved in the config file previously (user1/pass) and create a new list. Open the Sync Gateway Admin UI at http://localhost:4985/_admin/ and you will see that the list document was successfully replicated to Sync Gateway as an authenticated user.
Access Control
In order to give different users access to different documents, you must write a sync function. The sync function lives in the configuration file of Sync Gateway. It’s a JavaScript function and every time a new document, revision or deletion is added to a database, the sync function is called and given a chance to examine the document.
You can use different API methods to route documents to channels, grant users access to channels and even assign roles to users. Access rules generally follow this order: write permissions, validation, routing, read permissions.
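For reference, these operations correspond to the built-in sync function calls channel(), access() and role(). The fragment below is only an illustration of their signatures; the channel name and fields are made-up examples, not part of the todo app's sync function:
function (doc, oldDoc) {
  channel("lists");                  // route the document to the "lists" channel
  access(doc.owner, "lists");        // grant the document owner read access to that channel
  role(doc.owner, "role:moderator"); // assign the document owner the "moderator" role
}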
Document Types
The Sync Function takes two arguments:
doc: The current revision being processed.
oldDoc: The parent revision if it's an update operation and null if it's a create operation.
Each document type will have different access control rules associated with it. So the first operation is to ensure the document has a type property. Additionally, once a document is created, its type cannot change. The code below implements those 2 validation rules.
function(doc, oldDoc){
  /* Type validation */
  if (isCreate()) {
    // Don't allow creating a document without a type.
    validateNotEmpty("type", doc.type);
  } else if (isUpdate()) {
    // Don't allow changing the type of any document.
    validateReadOnly("type", doc.type, oldDoc.type);
  }

  if (getType() == "task-list") {
    /* Write access */
    /* Validation */
    /* Routing */
    /* Read Access */
  }

  function getType() {
    return (isDelete() ? oldDoc.type : doc.type);
  }

  function isCreate() {
    // Checking false for the Admin UI to work
    return ((oldDoc == false) || (oldDoc == null || oldDoc._deleted) && !isDelete());
  }

  function isUpdate() {
    return (!isCreate() && !isDelete());
  }

  function isDelete() {
    return (doc._deleted == true);
  }

  function validateNotEmpty(key, value) {
    if (!value) {
      throw({forbidden: key + " is not provided."});
    }
  }

  function validateReadOnly(name, value, oldValue) {
    if (value != oldValue) {
      throw({forbidden: name + " is read-only."});
    }
  }

  // Checks whether the provided value starts with the specified prefix
  function hasPrefix(value, prefix) {
    if (value && prefix) {
      return value.substring(0, prefix.length) == prefix
    } else {
      return false
    }
  }
}
As shown above, you can define inner functions to encapsulate logic used throughout the sync function. This makes your code more readable and follows the DRY principle (Don’t Repeat Yourself).
Try it out
Open the Sync menu on the Admin UI.
Copy the code snippet above in the Sync Function text area.
Click the Deploy To Server button. It will update Sync Gateway with the new config but it doesn’t persist the changes to the filesystem.
Add two documents through the REST API. One with the
typeproperty and the second document without it. Notice that the user credentials (user1/pass) are passed in the URL.
curl -vX POST '' \ -H 'Content-Type: application/json' \ -d '{"docs": [{"type": "task-list", "name": "Groceries"}, {"names": "Today"}]}'
The output should be the following:
[ { "id": "e498cad0380e30a86ed5572140c94831", "rev": "1-e4ac377fc9bd3345ddf5892b509c4d79" }, {error:forbidden,reason:type is not provided.,status:403} ]
Write Permissions
Once you know the type of a document, the next step is to check the write permissions.
The following code ensures the user creating the list document matches with the
owner property or is a moderator.
/* Write Access */ var owner = doc._deleted ? oldDoc.owner : doc.owner; try { // Moderators can create/update lists for other users. requireRole(moderator); } catch (e) { // Users can create/update lists for themselves. requireUser(owner); }
When a document is deleted the user properties are removed and the
\_deleted: true property is added as metadata.
In this case, the sync function must retrieve the type from oldDoc.
In the code above, the
getType inner function encapsulates this logic.
Similarly, the owner field is taken from oldDoc if doc is a deletion revision.
The
requireUser and
requireRole functions are functionalities built in Sync Gateway.
Try it out
Open the Sync menu on the Admin UI.
Copy the changes above in the Sync Function text area to replace the
/* Write access */block.
Click the Deploy To Server button. It will update Sync Gateway with the new config but it doesn’t persist the changes to the filesystem.
Add two documents through the REST API. The request is sent as a user (user1/pass). One document is a list for user1 and another is a list for user2.
curl -vX POST '' \ -H 'Content-Type: application/json' \ -d '{docs: [{type: task-list, owner: user1}, {type: task-list, owner: user2}]}'
The response should be the following:
[ {id:8339356c8bb6d8b32477e931ce04c5c9,rev:1-39539a8ec6ddd252d6aafe1f7e3efd9a}, {error:forbidden,reason:wrong user,status:403} ]
The list with user2 as the owner is rejected.
Validation
After write permissions, you must ensure the document has the expected schema. There are different types of validation such as checking for the presence of a field or enforcing read-only permission on parts of a document. The code below performs various schema validation operations.
/* Validation */ if (!isDelete()) { // Validate required fields. validateNotEmpty(name, doc.name); validateNotEmpty(owner, doc.owner); if (isCreate()) { // Validate that the _id is prefixed by owner. if (!hasPrefix(doc._id, doc.owner + .)) { throw({forbidden: task-list id must be prefixed by list owner}); } } else { // Don’t allow task-list ownership to be changed. validateReadOnly(owner, doc.owner, oldDoc.owner); } }
validateNotEmpty and
validateReadOnly are inner functions to encapsulate common validation operations.
Try it out
Open the Sync menu on the Admin UI.
Copy the changes above in the Sync Function text area to replace the
/* Validation */block.
Click the Deploy To Server button. It will update Sync Gateway with the new config but it doesn’t persist the changes to the filesystem.
Challenge: Persist documents using curl until it gets persisted and Sync Gateway returns a 201 Created status code.
Routing
Once you have determined that the schema is valid you can route the document to channels. A channel is a namespace for documents specifically designed for access control. The code below routes the document to its own list channel.
/* Routing */ // Add doc to task-list's channel. channel(task-list. + doc._id); channel(moderators);
Try it out
Open the Sync menu on the Admin UI.
Copy the changes above in the Sync Function text area to replace the
/* Routing *.
Both documents are saved and mapped to the corresponding channels in the Admin UI.
Read Access
The last step in writing access control rules for a document type is to allow read access to channels. The following code grants the owner and users that are moderators access to the list’s channel.
/* Read Access */ // Grant task-list owner access to the task-list, its tasks, and its users. access(owner, task-list. + doc._id); access(owner, task-list. + doc._id + .users); access(role:moderator, task-list. + doc._id);
Try it out
Open the Sync menu on the Admin UI.
Copy the changes above in the Sync Function text area to replace the
/* Read access *. | https://docs.couchbase.com/tutorials/todo-app/develop/swift/adding-security.html | 2019-06-16T05:33:25 | CC-MAIN-2019-26 | 1560627997731.69 | [array(['../../_images/image15.png', 'image15'], dtype=object)] | docs.couchbase.com |
Symptoms
If you are not using Apple Filing Protocol (AFP) for shares with Apple clients, you can uninstall the packages related to AFP.
Purpose
This:
yum -y remove netatalk
You will get a line similar to the following:
Removed:
netatalk.x86_64 4:3.1.7-0.1.el
The AFP packages are now uninstalled from your SoftNAS.
Outage required: (if applicable)
No
Length of Outage:(if applicable)
Five minutes | https://docs.softnas.com/display/KBS/Removing+AFP+from+SoftNAS | 2019-06-16T04:31:27 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.softnas.com |
Variable Bindings and Name Resolution
In this Appendix, we’ll look at how variables are bound and how names are resolved.
Names can appear in every clause of a query.
Sometimes a name consists of just a single identifier, e.g.,
region or
revenue.
More often a name will consist of two identifiers separated by a dot, e.g.,
customer.address.
Occasionally a name may have more than two identifiers, e.g.,
policy.owner.address.zipcode.
Resolving a name means determining exactly what the (possibly multi-part) name refers to.
It is necessary to have well-defined rules for how to resolve a name in cases of ambiguity.
(In the absence of schemas, such cases arise more commonly, and also differently, than they do in SQL.)
The basic job of each clause in a query block is to bind variables. Each clause sees the variables bound by previous clauses and may bind additional variables. Names are always resolved with respect to the variables that are bound ("in scope") at the place where the name use in question occurs. It is possible that the name resolution process will fail, which may lead to an empty result or an error message.
One important bit of background: When the system is reading a query and resolving its names, it has a list of all the available dataverses and datasets.
As a result, it knows whether
a.b is a valid name for dataset
b in dataverse
a.
However, the system does not in general have knowledge of the schemas of the data inside the datasets; remember that this is a much more open world.
As a result, in general the system cannot know whether any object in a particular dataset will have a field named
c.
These assumptions affect how errors are handled.
If you try to access dataset
a.b and no dataset by that name exists, you will get an error and your query will not run.
However, if you try to access a field
c in a collection of objects, your query will run and return
missing for each object that doesn’t have a field named
c — this is because it’s possible that some object (someday) could have such a field.
Binding Variables
Variables can be bound in the following ways:
WITH and LET clauses bind a variable to the result of an expression in a straightforward way
Examples:
WITH cheap_parts AS (SELECT partno FROM parts WHERE price < 100)binds the variable
cheap_partsto the result of the subquery.
LET pay = salary + bonusbinds the variable
payto the result of evaluating the expression
salary + bonus.
FROM, GROUP BY, and SELECT clauses have optional AS subclauses that contain an expression and a name (called an iteration variable in a FROM clause, or an alias in GROUP BY or SELECT.)
Examples:
FROM customer AS c, order AS o
GROUP BY salary + bonus AS total_pay
SELECT MAX(price) AS highest_price
An AS subclause always binds the name (as a variable) to the result of the expression (or, in the case of a FROM clause, to the individual members of the collection identified by the expression.)
It’s always a good practice to use the keyword AS when defining an alias or iteration variable. However, as in SQL, the syntax allows the keyword AS to be omitted. For example, the FROM clause above could have been written like this:
FROM customer c, order o
Omitting the keyword AS does not affect the binding of variables. The FROM clause in this example binds variables c and o whether the keyword AS is used or not.
In certain cases, a variable is automatically bound even if no alias or variable-name is specified. Whenever an expression could have been followed by an AS subclause, if the expression consists of a simple name or a path expression, that expression binds a variable whose name is the same as the simple name or the last step in the path expression. Here are some examples:
FROM customer, orderbinds iteration variables named
customerand
order
GROUP BY address.zipcodebinds a variable named
zipcode
SELECT item[0].pricebinds a variable named
price
Note that a FROM clause iterates over a collection (usually a dataset), binding a variable to each member of the collection in turn. The name of the collection remains in scope, but it is not a variable. For example, consider this FROM clause used in a self-join:
FROM customer AS c1, customer AS c2
This FROM clause joins the customer dataset to itself, binding the iteration variables c1 and c2 to objects in the left-hand-side and right-hand-side of the join, respectively. After the FROM clause, c1 and c2 are in scope as variables, and customer remains accessible as a dataset name but not as a variable.
Special rules for GROUP BY:
If a GROUP BY clause specifies an expression that has no explicit alias, it binds a pseudo-variable that is lexicographically identical to the expression itself. For example:
GROUP BY salary + bonusbinds a pseudo-variable named
salary + bonus.
This rule allows subsequent clauses to refer to the grouping expression (salary + bonus) even though its constituent variables (salary and bonus) are no longer in scope. For example, the following query is valid:
FROM employee GROUP BY salary + bonus HAVING salary + bonus > 1000 SELECT salary + bonus, COUNT(*) AS how_many
While it might have been more elegant to explicitly require an alias in cases like this, the pseudo-variable rule is retained for SQL compatibility. Note that the expression
salary + bonusis not actually evaluated in the HAVING and SELECT clauses (and could not be since
salaryand
bonusare no longer individually in scope). Instead, the expression
salary + bonusis treated as a reference to the pseudo-variable defined in the GROUP BY clause.
A GROUP BY clause may be followed by a GROUP AS clause that binds a variable to the group. The purpose of this variable is to make the individual objects inside the group visible to subqueries that may need to iterate over them.
The GROUP AS variable is bound to a multiset of objects. Each object represents one of the members of the group. Since the group may have been formed from a join, each of the member-objects contains a nested object for each variable bound by the nearest FROM clause (and its LET subclause, if any). These nested objects, in turn, contain the actual fields of the group-member. To understand this process, consider the following query fragment:
FROM parts AS p, suppliers AS s WHERE p.suppno = s.suppno GROUP BY p.color GROUP AS g
Suppose that the objects in
partshave fields
partno,
color, and
suppno. Suppose that the objects in suppliers have fields
suppnoand
location.
Then, for each group formed by the GROUP BY, the variable g will be bound to a multiset with the following structure:
[ { "p": { "partno": "p1", "color": "red", "suppno": "s1" }, "s": { "suppno": "s1", "location": "Denver" } }, { "p": { "partno": "p2", "color": "red", "suppno": "s2" }, "s": { "suppno": "s2", "location": "Atlanta" } }, ... ]
Scoping
In general, the variables that are in scope at a particular position are those variables that were bound earlier in the current query block, in outer (enclosing) query blocks, or in a WITH clause at the beginning of the query. More specific rules follow.
The clauses in a query block are conceptually processed in the following order:
FROM (followed by LET subclause, if any)
WHERE
GROUP BY (followed by LET subclause, if any)
HAVING
SELECT or SELECT VALUE
ORDER BY
OFFSET
LIMIT
During processing of each clause, the variables that are in scope are those variables that are bound in the following places:
In earlier clauses of the same query block (as defined by the ordering given above).
Example:
FROM orders AS o SELECT o.dateThe variable
oin the SELECT clause is bound, in turn, to each object in the dataset
orders.
In outer query blocks in which the current query block is nested. In case of duplication, the innermost binding wins.
In the WITH clause (if any) at the beginning of the query.
However, in a query block where a GROUP BY clause is present:
In clauses processed before GROUP BY, scoping rules are the same as though no GROUP BY were present.
In clauses processed after GROUP BY, the variables bound in the nearest FROM-clause (and its LET subclause, if any) are removed from scope and replaced by the variables bound in the GROUP BY clause (and its LET subclause, if any). However, this replacement does not apply inside the arguments of the five SQL special aggregating functions (MIN, MAX, AVG, SUM, and COUNT). These functions still need to see the individual data items over which they are computing an aggregation. For example, after
FROM employee AS e GROUP BY deptno, it would not be valid to reference
e.salary, but
AVG(e.salary)would be valid.
Special case: In an expression inside a FROM clause, a variable is in scope if it was bound in an earlier expression in the same FROM clause. Example:
FROM orders AS o, o.items AS i
The reason for this special case is to support iteration over nested collections.
Note that, since the SELECT clause comes after the WHERE and GROUP BY clauses in conceptual processing order, any variables defined in SELECT are not visible in WHERE or GROUP BY.
Therefore the following query will not return what might be the expected result (since in the WHERE clause,
pay will be interpreted as a field in the
emp object rather than as the computed value
salary + bonus):
SELECT name, salary + bonus AS pay FROM emp WHERE pay > 1000 ORDER BY pay
The likely intent of the query above can be accomplished as follows:
FROM emp AS e LET pay = e.salary + e.bonus WHERE pay > 1000 SELECT e.name, pay ORDER BY pay
Note that variables defined by
JOIN subclauses are not visible to other subclauses in the same
FROM clause.
This also applies to the
FROM variable that starts the
JOIN subclause.
Resolving Names
The process of name resolution begins with the leftmost identifier in the name. The rules for resolving the leftmost identifier are:
In a FROM clause: Names in a FROM clause identify the collections over which the query block will iterate. These collections may be stored datasets or may be the results of nested query blocks. A stored dataset may be in a named dataverse or in the default dataverse. Thus, if the two-part name
a.bis in a FROM clause, a might represent a dataverse and
bmight represent a dataset in that dataverse. Another example of a two-part name in a FROM clause is
FROM orders AS o, o.items AS i. In
o.items,
orepresents an order object bound earlier in the FROM clause, and items represents the items object inside that order.
The rules for resolving the leftmost identifier in a FROM clause (including a JOIN subclause), or in the expression following IN in a quantified predicate, are as follows:
If the identifier matches a variable-name that is in scope, it resolves to the binding of that variable. (Note that in the case of a subquery, an in-scope variable might have been bound in an outer query block; this is called a correlated subquery.)
Otherwise, if the identifier is the first part of a two-part name like
a.b, the name is treated as dataverse.dataset. If the identifier stands alone as a one-part name, it is treated as the name of a dataset in the default dataverse. An error will result if the designated dataverse or dataset does not exist.
Elsewhere in a query block: In clauses other than FROM, a name typically identifies a field of some object. For example, if the expression
a.bis in a SELECT or WHERE clause, it’s likely that
arepresents an object and
brepresents a field in that object.
The rules for resolving the leftmost identifier in clauses other than the ones listed in Rule 1 are:
If the identifier matches a variable-name that is in scope, it resolves to the binding of that variable. (In the case of a correlated subquery, the in-scope variable might have been bound in an outer query block.)
(The "Single Variable Rule"): Otherwise, if the FROM clause (or a LET clause if there is no FROM clause) in the current query block binds exactly one variable, the identifier is treated as a field access on the object bound to that variable. For example, in the query
FROM customer SELECT address, the identifier address is treated as a field in the object bound to the variable customer. At runtime, if the object bound to customer has no
addressfield, the
addressexpression will return
missing. If the FROM clause (and its LET subclause, if any) in the current query block binds multiple variables, name resolution fails with an "ambiguous name" error. Note that the Single Variable Rule searches for bound variables only in the current query block, not in outer (containing) blocks. The purpose of this rule is to permit the compiler to resolve field-references unambiguously without relying on any schema information.
Exception: In a query that has a GROUP BY clause, the Single Variable Rule does not apply in any clauses that occur after the GROUP BY because, in these clauses, the variables bound by the FROM clause are no longer in scope. In clauses after GROUP BY, only Rule 2.1 applies.
In an ORDER BY clause following a UNION ALL expression:
The leftmost identifier is treated as a field-access on the objects that are generated by the UNION ALL. For example:
query-block-1 UNION ALL query-block-2 ORDER BY salary
In the result of this query, objects that have a foo field will be ordered by the value of this field; objects that have no foo field will appear at at the beginning of the query result (in ascending order) or at the end (in descending order.)
Once the leftmost identifier has been resolved, the following dots and identifiers in the name (if any) are treated as a path expression that navigates to a field nested inside that object. The name resolves to the field at the end of the path. If this field does not exist, the value
missingis returned. | https://docs.couchbase.com/server/6.0/analytics/appendix_3_resolution.html | 2019-06-16T05:30:07 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.couchbase.com |
Reindexing Content Without Disrupting Service in Production¶
In some scenarios it’s not possible/appropriate to delete a live index and wait for the index to rebuild in production. Perhaps the index is driving dynamic features on the site that will break while the index is empty or being rebuilt. In these scenarios you need a process for building the index off line and swapping it in.
Step 1: Prepare a re-index core¶
The first step is to prepare an additional empty core on Solr where you can index the content:
- Go to(
SOLR_PORTin Authoring is normally 8694, while in Delivery it’s 8695).
- Click on
Core Adminon the left menu.
- Click on
Add Core. A popup will appear with the core properties you need to fill. Name the new core however you want, making sure it’s not the same name as the current core (e.g.
editorial-tmp),
instanceDirshould be the path to the
crafter_configsconfigset in Solr , which should be under
CRAFTER/bin/solr/server/solr/configsets/crafter_configs) and
dataDirshould be the path of the core’s data directory under Crafter’s
data/indexesdirectory (e.g.
CRAFTER/data/indexes/editorial-tmp/data/). Leave
configand
schemawith their default values, and click on
Add Core.
Step 2: Content freeze¶
Once you are about to start a re-index you need to freeze your authoring/editing activity. If content is being updated in the live environment while you are rebuilding your indexes, you may miss updates. Ask the authors not to publish during your re-index process.
Step 2: Set up a new temporary target¶
The next step is to create a temporary deployment target that is basically a copy of the production target, but with a different ID. The easiest way to do this is to:
- Go to the
CRAFTER/deployer/targetsfolder.
- Copy and paste the target’s YAML to somewhere temporary outside the
targetsfolder (to avoid the Deployer from picking the new target while you’re modifying it).
- Replace the original site name from the YAML file name with the name of the Solr core you just created (e.g.
editorial-tmp-prod.yaml).
- Change the
siteNameproperty value inside the YAML to the name of the Solr core (e.g.
editorial-tmp).
- Copy the the YAML file back to the
targetsfolder.
Step 4: Re-index¶
On a live environment, the Deployer will execute the deployment of a target on schedule every minute by default, so after creating the new temporary target
the Deployer should pick it up automatically and start re-indexing. If the Deployer is not working on a schedule, you can follow the process in
Reindexing Content for Search and Queries, starting in
Step 2: Invoke the reprocessing and using the
siteName (or Solr core name) you set in the temporary target
YAML.
Step 5: Wait-tmp-prod finished in 2.359 secs 2017-07-25 16:52:03.763 INFO 21896 --- [pool-2-thread-1] org.craftercms.deployer.impl.TargetImpl : ------------------------------------------------------------
Step 6: Swap indexes¶
Now that indexing is complete you need to load the re-indexed content. Follow these steps:
- In the Solr console (from Step 1), under the
Core Admin, click
Swap Coresto swap from the production core to the temporary core.
- Backup the original core folder under
CRAFTER/data/indexes(should have the same name as the site, e.g.
editorial).
- Consider creating a copy of the re-indexed core with the original name and swapping again to preserve file/folder names:
- Go to the
CRAFTER/data/indexesand delete the original core folder.
- Rename the swapped core folder (
editorial-tmp) to the original core folder name (
editorial).
- Swap the cores again.
- Unload the temporary core.
Step 7: Unfreeze Content¶
Now that you are certain everything is working as it should, notify your authors that they may start editing and publishing activity. | https://docs.craftercms.org/en/3.0/system-administrators/activities/reindexing-content-in-prod.html | 2019-06-16T05:51:51 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.craftercms.org |
Profields custom fields
eazyBI add-on for Jira
Profields versions 4.10 are supported (including Profield custom field import) by eazyBI version 4.2.3 or later.
Profields versions 5.0 and 5.1 are not supported.
Profields versions 5.2 or later are supported by eazyBI version 4.4.0 or later.
Profields is a Jira add-on for defining custom fields for Jira projects (and not for individual issues).
The following Profields custom field types are supported:
All available custom field names are shown with a Profields prefix (to ensure that names do not overlap with the standard and custom Jira issue fields).
Imported measures and dimensions are shown in a separate Profields group in the Analyze tab.
All selected Profields custom field values are updated during each Jira import (also during the incremental import all fields are updated).
See also an interactive Profields and eazyBI integration guide on the Deiser blog.
Private eazyBI
Profields versions 5.5.5 or later are supported by Private eazyBI version 4.5.1 or later.
You can import Profields custom fields also when using Private eazyBI. Profields support is disabled by default (to avoid unnecessary REST API requests). Add the following settings in the
config/eazybi.toml configuration file to enable Profields support.
[source_application.jira.profields] enable = true | https://docs.eazybi.com/eazybijira/data-import/data-from-jira-and-apps/profields-custom-fields | 2019-06-16T04:39:19 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.eazybi.com |
Tool
Strip.
Tool Renderer Changed Strip.
Tool Renderer Changed Strip.
Tool Renderer Changed Strip.
Event
Renderer Changed
Definition
public: event EventHandler ^ RendererChanged;
public event EventHandler RendererChanged;
member this.RendererChanged : EventHandler
Public Custom Event RendererChanged As EventHandler
Examples
The following code example demonstrates the use of this member. In the example, an event handler reports on the occurrence of the Renderer RendererChanged event.
private void ToolStrip1_RendererChanged(Object sender, EventArgs e) { MessageBox.Show("You are in the ToolStrip.RendererChanged event."); }
Private Sub ToolStrip1_RendererChanged(sender as Object, e as EventArgs) _ Handles ToolStrip1.RendererChanged MessageBox.Show("You are in the ToolStrip.RendererChanged event.") End Sub
Remarks
For more information about handling events, see Handling and Raising Events. | https://docs.microsoft.com/en-us/dotnet/api/system.windows.forms.toolstrip.rendererchanged?view=netframework-4.8 | 2019-06-16T06:01:12 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.microsoft.com |
Using the tools
[ This article is for Windows 8.x and Windows Phone 8.x developers writing Windows Runtime apps. If you’re developing for Windows 10, see the latest documentation ]
Learn how to use the development and design tools to create Windows apps that can run on multiple Windows devices.
Dig deeper into the rich features of Visual Studio to learn how it can increase your productivity when you write, debug, and test code:
- Learn more about the Visual Studio IDE.
- Debug and test your apps.
- Design and build user interfaces with Blend for Visual Studio. | https://docs.microsoft.com/en-us/previous-versions/windows/apps/bg124286%28v%3Dwin.10%29 | 2019-06-16T05:53:35 | CC-MAIN-2019-26 | 1560627997731.69 | [array(['images/bg124286.uap_vs%28en-us%2cwin.10%29.png', None],
dtype=object) ] | docs.microsoft.com |
WSFC Disaster Recovery through Forced Quorum (SQL Server)
SQL Server
Azure SQL Database
Azure SQL Data Warehouse
Parallel Data Warehouse WSFC Disaster Recovery through the Forced Quorum Procedure
-
-
Before You Start
Prerequisites
The Forced Quorum Procedure assumes that a healthy quorum existed before the quorum failure.
Warning
The user should be well-informed on the concepts and interactions of Windows Server Failover Clustering, WSFC Quorum Models, SQL Server, and the environment's specific deployment configuration.
For more information, see: Windows Server Failover Clustering (WSFC) with SQL Server, WSFC Quorum Modes and Voting Configuration (SQL Server)
Security
The user must be a domain account that is member of the local Administrators group on each node of the WSFC cluster.
WSFC Disaster Recovery through the Forced Quorum Procedure
Remember that quorum failure will cause all clustered services, SQL Server instances, and Always.
Tip
On a responsive instance of SQL Server 2017, you can obtain information about the health of availability groups that possess an availability replica on the local server instance by querying the sys.dm_hadr_availability_group_states dynamic management view (DMV).
Note
The forced quorum setting has a cluster-wide affect to block quorum checks until the logical WSFC cluster achieves a majority of votes and automatically transitions to a regular quorum mode of operation..
Warning
Ensure that each node that you start can communicate with the other newly online nodes. Consider disabling the WSFC service on the other nodes. Otherwise, you run the risk of creating more than one quorum node set; that is a split-brain scenario. If your findings in step 1 were accurate, this should not occur..
Tip
At this point, the nodes and SQL Server instances in the cluster may appear to be restored back to regular operation. However, a healthy quorum may still not exist. Using the Failover Cluster Manager, or the Always On Dashboard within SQL Server Management Studio, or the appropriate DMVs, verify that a quorum has been restored..
Note
If you run the WSFC Validate a Configuration Wizard when an availability group listener exists on the WSFC cluster, the wizard generates the following incorrect warning message:
"The RegisterAllProviderIP property for network name 'Name:<network_name>' is set to 1 For the current cluster configuration this value should be set to 0."
Please ignore this message..
Related Tasks
Force a WSFC Cluster to Start Without a Quorum
Perform a Forced Manual Failover of an Availability Group (SQL Server)
View Cluster Quorum NodeWeight Settings
Configure Cluster Quorum NodeWeight Settings
Use the AlwaysOn Dashboard (SQL Server Management Studio)
Related Content
See Also
Windows Server Failover Clustering (WSFC) with SQL Server
Feedback
Send feedback about: | https://docs.microsoft.com/en-us/sql/sql-server/failover-clusters/windows/wsfc-disaster-recovery-through-forced-quorum-sql-server?view=sql-server-2017 | 2019-06-16T05:06:54 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.microsoft.com |
This section describes how to perform basic debugging tasks using the WinDbg debugger.
Details are given in the following topics:
Debugging a User-Mode Process Using WinDbg
Debugging a UWP app using WinDbg
Opening a Dump File Using WinDbg
Live Kernel-Mode Debugging Using WinDbg
Ending a Debugging Session in WinDbg
Setting Symbol and Executable Image Paths in WinDbg
Remote Debugging Using WinDbg
Entering Debugger Commands in WinDbg
Using the Command Browser Window in WinDbg
Setting Breakpoints in WinDbg
Viewing the Call Stack in WinDbg
Assembly Code Debugging in WinDbg
Source Code Debugging in WinDbg
Viewing and Editing Memory in WinDbg
Viewing and Editing Global Variables in WinDbg
Viewing and Editing Local Variables in WinDbg
Viewing and Editing Registers in WinDbg
Controlling Processes and Threads in WinDbg
Configuring Exceptions and Events in WinDbg
Keeping a Log File in WinDbg
Send comments about this topic to Microsoft | https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/debugging-using-windbg | 2017-08-17T00:09:04 | CC-MAIN-2017-34 | 1502886102757.45 | [] | docs.microsoft.com |
Prompts and Messages
From PhpCOIN Documentation
Almost every prompt or message that phpCOIN displays to users is contained within language files. The directory /coin_lang/lang_xxx contains all the files for a particular language, where xxx is the language name, such as lang_english, lang_french, etc..
To change any of the prompts or messages that phpCOIN displays or uses, open the appropriate language file and make your desired changes. lang_config.php contains mostly configuration arrays, lang_base.php contains mostly system-wide text, and lang_yyyyy.php contains the strings for the yyyyy module.
When you upgrade phpCOIN your changes will be overwritten by the new phpCOIN files, unless you setup over-ride files.
Text for email messages is accessible once you are logged in as an admin. Admin -> eMail Templates for the email messages, and for the "nag" emails Admin -> Reminder Templates.
Text for pages such as the index page, FAQ, articles/news and more is also accessible to a logged-in admin. | http://docs.phpcoin.com/index.php?title=Prompts_and_Messages | 2017-08-16T23:38:01 | CC-MAIN-2017-34 | 1502886102757.45 | [] | docs.phpcoin.com |
Using Action Templates
Reusing animation and assets is an important aspect of cut-out animation. This is why Harmony includes a library for storing all the reusable information.
When you created the character rig, you most likely created a master template along with some action templates
To save time, you can take an action that you've already animated, such as a walk-cycle or jump, and reuse it. You can store the initial animation's keyframes in the Library view, then drag it into a master template of a new scene.
You can import a master template into the Timeline view
To insert an action template into a master template, the layer ordering has to be exactly the same. If it's inconsistent, the templates cannot be combined.
You can create a single keyframe action template of the different views (front, three-quarter or side view). Then import and insert them into the animation to turn the character. The same pattern can be created for a head, arm, full upper body, etc.
If the master template you are importing was created in the Node view, make sure to import it first and drop it into the Node view or the left side of the Timeline view. Failure to do this may break some node system connections.
If you created templates for different body parts, you can reuse them in your scene by dragging them to your layers. You can also open a template as a folder and select a particular drawing from it and drag it onto a drawing layer. | http://docs.toonboom.com/help/harmony-12/premium/Content/_CORE/_Workflow/022_Cut-out_Animation/060_H1_Using_Action_Templates.html | 2017-08-16T23:48:07 | CC-MAIN-2017-34 | 1502886102757.45 | [array(['../../../Resources/Images/_ICONS/Home_Icon.png', None],
dtype=object)
array(['../../../Resources/Images/HAR/_Skins/stagePremium.png', None],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/stageAdvanced.png',
'Toon Boom Harmony 12 Stage Advanced Online Documentation'],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/stageEssentials.png', None],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/controlcenter.png',
'Installation and Control Center Online Documentation Installation and Control Center Online Documentation'],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/scan.png', None],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/stagePaint.png', None],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/stagePlay.png', None],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/Activation.png', None],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/_ICONS/download.png', None],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/Stage/Cut-out/expression.png', None],
dtype=object)
array(['../../../Resources/Images/HAR/Stage/Cut-out/HAR11/HAR11_cutout_reuseExtraDrawings.png',
None], dtype=object) ] | docs.toonboom.com |
Anypoint Connector Development
This document quickly walks you through the decisions to make and the steps to take to get an Anypoint Connector development project started, get the connector working, and enhance it with the functionality you need. You can use this as a roadmap for your connector development effort. JDK 7 for DevKit 3.6 or 3.7.
-
Install Anypoint Studio.
Install the Anypoint DevKit Plugin.
Refer to Setting Up Your Development Environment for detailed instructions.
Step 2 - Setting Up Your API Access
Prepare a test environment for your connector. Details vary greatly based on the target system.
SaaS – API Access for detailed instructions.
Step 3 - Creating an Anypoint Connector Project
Follow the process in Creating a Java SDK-Based Connector to start a connector development project in your development environment.
Add any required Java libraries for your target system, such as client libraries or class libraries provided for the target system.
When finished, you should have a scaffold project, containing a Java class with the most basic functionality for a connector in place.
Step 4 - Authentication
If your API requires authentication, understand the authentication methods your API uses and select those that your connector supports. Based on your use case, select an authentication method from the table below. For more details, read Authentication which introduces all of the supported methods.
Step 5 - Defining Attributes, Operations, and a Data Model
Most of the Java code you write for your Connector Attributes and Operations. Anypoint Exchange.
Refer to Packaging Your Connector for Release for full details.
See Also
Understand Setting Up Your Development Environment. | https://docs.mulesoft.com/anypoint-connector-devkit/v/3.7/anypoint-connector-development | 2017-08-16T23:41:53 | CC-MAIN-2017-34 | 1502886102757.45 | [] | docs.mulesoft.com |
Domains
From PhpCOIN Documentation
About Domains
The Domains module is a series of tables that allow a web-host to track domain name registration and expiry information, along with the server that the domain is hosted on (if the web-host has more than one server), and a click able link to the "Control Panel", if any, for the domain.
Domain records can be created manually by an admin, or automatically when a surfer places an order for a product and the product has a domain name component.
phpCOIN has a built-in WHOIS module to allow customers to search for available domains, as well as a link to click for available domains that can take them back to the order form or to an external registration site, whatever the web-host needs.
The "Summary" page automatically lists any domains dues to expire within 60 days, giving you plenty of time to renew.
The admin parameter to disable domains (and the associated WHOIS lookups) will remove all menu links and buttons that reference domains and/or WHOIS, and if a surfer tries to access domains or WHOIS data directly via URL parameters they will receive a "disabled" notice.
Add/Remove Domain TLD's
To add or remove Domain TLD's, just go to: Admin -> WHOIS Lookups and set up whatever extensions you need. Don't forget to set them to active, otherwise they will not show in the extension list. | http://docs.phpcoin.com/index.php?title=Domains | 2017-08-16T23:44:25 | CC-MAIN-2017-34 | 1502886102757.45 | [] | docs.phpcoin.com |
What is phpCOIN
From PhpCOIN Documentation
phpCOIN is a WebWare application, especially suited for anyone that needs to track clients and/or support requests and/or orders and/or invoices. The general label of billing manager or client manager does not really describe phpCOIN because it performs various other tasks not found in these common packages., so the majority of our users are web-hosts. Nevertheless, phpCOIN is so versatile that we also have accountants, lawyers, and a school band using it.
The main functionality of phpCOIN is:
- Present products and/or services to surfers for information and purchase.
- During the ordering process, collect required client information, and direct them to the appropriate third party billing vendor by means of PayLinks.
- Send manual and/or automatic billing invoices to the client.
- Generate account activation email for the client, to convey pertinent setup information.
- Allow the site owner to track client information, orders history, billing history (invoices), support request history (helpdesk), email history, domains history, and more.
- Allow the client to track information regarding orders history, billing history (invoices), support request history (helpdesk), email history, domains history.
- Provide a means for the site owner to contact clients (all or individual) via email.
- Provide additional modules to permit the site admin to present additional information to the potential / existing client.
- Provide theme support for customization through basic function editing and CSS modifications.
- Provide support for language files.
- "Domain" and/or WHOIS Lookup functionality can be turned off (as can everything else except clients), which means that phpCOIN can be used to handle any type of business.
phpCOIN is webware, which means that it is a set of scripts written in php that interfaces with a MySQL database and runs on a web-server. | http://docs.phpcoin.com/index.php?title=What_is_phpCOIN | 2017-08-16T23:33:41 | CC-MAIN-2017-34 | 1502886102757.45 | [] | docs.phpcoin.com |
Source]')
A priority queue is common use for a heap, and it presents several implementation challenges: | http://docs.python.org/3.3/library/heapq.html | 2014-03-07T12:25:01 | CC-MAIN-2014-10 | 1393999642517 | [] | docs.python.org |
User Guide
Local Navigation
About tabbed browsing
With tabbed browsing, you can open multiple webpages on your BlackBerry smartphone smartphone uses.
Related tasks
Related reference
Next topic: Open, close, or switch between tabs
Previous topic: Zoom in to or out from a webpage
Was this information helpful? Send us your comments. | http://docs.blackberry.com/en/smartphone_users/deliverables/36023/About_tabbed_browsing_61_1573136_11.jsp | 2014-03-07T12:32:29 | CC-MAIN-2014-10 | 1393999642517 | [] | docs.blackberry.com |
Message-ID: <694795211.21539.1394195247795.JavaMail.haus-conf@codehaus02.managed.contegix.com> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_21538_464496808.1394195247795" ------=_Part_21538_464496808.1394195247795 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location:
Sorting of people --> if we get the most influential people sorted ab= ove then this motivates to do something. Maybe people should have a k= arma when they get a lot of MTEs (My Thought Exactly)=20
Meeting case --> "You're meeting is 60% prepared" similar t= o linked saying "your profile is 60% complete" to encourage peopl= e to have well prepared meeting and signal this to participants.=20
We should not assume long attentionspan to learn the concepts and how to= use it. First interactions need to be rewarding.=20 | http://docs.codehaus.org/exportword?pageId=209651033 | 2014-03-07T12:27:27 | CC-MAIN-2014-10 | 1393999642517 | [] | docs.codehaus.org |
Catkin is a collection of CMake macros and associated code used to build packages used in ROS.
It was initially introduced as part of the Fuerte release where it was used for a small set of base packages. For Groovy it was significantly modified, and used by many more packages.
This version of the documentation covers the Groovy version and also Hydro, which is coming soon. | http://docs.ros.org/groovy/api/catkin/html/ | 2014-03-07T12:27:15 | CC-MAIN-2014-10 | 1393999642517 | [] | docs.ros.org |
Help Center
Local Navigation
Add a contact that uses text messaging
If you add a person who uses text messaging as a contact to BlackBerry® Messenger, you receive text messages from the person and you can send text messages to the person.
- On the Home screen or in the Instant Messaging folder, click the BlackBerry Messenger icon.
- On the contact list screen, press the Menu key.
- Click Invite Contact.
- Click Add a text messaging contact.
- Type part or all of the contact information.
- Click the contact information.
- If necessary, change the Category field.
- If necessary, type the name that you want to appear in your BlackBerry Messenger contact list.
- Click Add Contact.
Next topic: Display your profile barcode on your device
Previous topic: Add a contact by typing an email address or PIN
Was this information helpful? Send us your comments. | http://docs.blackberry.com/en/smartphone_users/deliverables/18720/Add_a_contact_that_uses_text_messaging_865698_11.jsp | 2014-03-07T12:30:41 | CC-MAIN-2014-10 | 1393999642517 | [] | docs.blackberry.com |
Using setSize before reading an image file tells ImageMagick to resize the image immediately on load - this can give a substantial increase in performance time and save memory and disk resources for large images:
<?php
$image = new Imagick();
$image->setSize(800,600);
$image->readImage($file);
?>
This might also save you having to call thumbnailImage to resize the image.
On my server, this only made a difference with jpgs - pngs and gifs were loaded at full size which took much longer (30s or more compared to 6s for a similar sized jpg). | http://docs.php.net/manual/de/imagick.setsize.php | 2014-03-07T12:28:08 | CC-MAIN-2014-10 | 1393999642517 | [] | docs.php.net |
In the final release of Groovy-Eclipse 2.1.1, just 6 weeks after our previous release of 2.1.0, there are many exciting new features to look forward to.
You can use the following update site to install this release. To install, copy and paste this link into your Eclipse update manager:
And a zipped version of the update site is available at:
You can install from the zip by pointing your Eclipse update manager to the downloaded zip file and following the installation instructions provided by the update manager.
Outline
- Inferencing inside of closures
- Quickfixes
- Inferencing of statically imported fields and methods
- Content assist for Constructors
- Improved semantic highlighting
- Organize imports improvements
- Create Groovy Script
- Outline view for binary Groovy types
- Bookmarks and Tasks
- Groovy Event Console
- Compatibility
- Bug fixes
- What's 60 issues have been addressed for this release.
What's next?
The 2.1.1 final release is scheduled for a week from now to coincide with the 2.5.2 release of the SpringSource Tool Suite. would like to see), please raise them in our issue tracker. | http://docs.codehaus.org/pages/viewpage.action?pageId=188613155 | 2014-03-07T12:25:51 | CC-MAIN-2014-10 | 1393999642517 | [array(['/download/attachments/188612823/closure_inferencing.png?version=1&modificationDate=1292264351221&api=v2',
None], dtype=object)
array(['/download/attachments/188612823/closure_content_assist.png?version=2&modificationDate=1292264409722&api=v2',
None], dtype=object)
array(['/download/attachments/188612823/quickfix_imports.png?version=1&modificationDate=1292264937615&api=v2',
None], dtype=object)
array(['/download/attachments/188612823/quickfix_imports2.png?version=1&modificationDate=1292264983253&api=v2',
None], dtype=object)
array(['/download/attachments/188612823/quickfix_imports3.png?version=1&modificationDate=1292265160107&api=v2',
None], dtype=object)
array(['/download/attachments/188612823/quickfix_convert.png?version=1&modificationDate=1292265499417&api=v2',
None], dtype=object)
array(['/download/attachments/188612823/static_imports.png?version=1&modificationDate=1292265616943&api=v2',
None], dtype=object)
array(['/download/attachments/188612823/constructor_content_assist.png?version=1&modificationDate=1292266447694&api=v2',
None], dtype=object)
array(['/download/attachments/188612823/semantic_highlighting.png?version=1&modificationDate=1292269240610&api=v2',
None], dtype=object)
array(['/download/attachments/188612823/organize_imports_before.png?version=1&modificationDate=1292269424464&api=v2',
None], dtype=object)
array(['/download/attachments/188612823/organize_imports_after.png?version=1&modificationDate=1292269457933&api=v2',
None], dtype=object)
array(['/download/attachments/188612823/new_groovy_script.png?version=1&modificationDate=1292269678461&api=v2',
None], dtype=object)
array(['/download/attachments/188612823/add_bookmark_task.png?version=1&modificationDate=1292269785478&api=v2',
None], dtype=object)
array(['/download/attachments/188612823/console.png?version=2&modificationDate=1292287428748&api=v2',
None], dtype=object) ] | docs.codehaus.org |
Introduction
As of MySQL 4.1, it is possible to use prepared statements with Connector/NET. Use of prepared statements can provide significant performance improvements on queries that are executed more than once..
Visual Basic
C#); } | http://doc.docs.sk/mysql-refman-5.5/connector-net-programming-prepared.html | 2022-08-08T04:37:33 | CC-MAIN-2022-33 | 1659882570765.6 | [] | doc.docs.sk |
9.0.003.11
Advisors Genesys Adapter Release Notes
Advisors Genesys Adapter is platform-independent software. You can deploy the installation package on any operating system that Advisors Genesys Adapter supports.
Helpful Links
Releases Info
Product Documentation
Genesys Products
What's New
This release contains the following new features and enhancements:
- Compatibility–This release is compatible with the following Advisors components:
- Advisors Platform release 9.0.003.11
- Contact Center Advisor/Workforce Advisor release 9.0.003.11
- Frontline Advisor release 9.0.003.11
Resolved Issues
This release contains the following resolved issues:
This release includes security fixes related to potential Apache Log4j vulnerabilities (CVE-2021-410, CVE-2019-17571). In particular, Log4j 1.2.17 has been replaced with reload4j 1.2.20. (PLT-8376)
Upgrade Notes
No special procedure is required to upgrade to release 9.0.003.11.
This page was last edited on June 1, 2022, at 04:54. | https://docs.genesys.com/Documentation/RN/9.0.x/pma-aga90rn/pma-aga9000311 | 2022-08-08T04:10:29 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs.genesys.com |
This section is related to building a DAO on the Solana network and covers:
- What is a DAO?
- Creating a DAO
- Adding members to a DAO after the DAO is created
- Treasury Accounts
- Treasury Domain Names
tip
After this read, you'll be able to create and manage your own DAO.
You'll need to use a Solana wallet (we recommend Phantom) and own SOL. For the convenience, it is possible to use the
devnet to test if your DAO is performing as it's supposed to. | https://docs.realms.today/ | 2022-08-08T03:43:22 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs.realms.today |
Device Operations
Methods to setup the device, get device information and change device options.
Initialization
begin()
This method is called to initialize the TMF882X library and connect to the TMF882X device. This method should be called before accessing the device.
The I2C address of the is an optional argument. If not provided, the default address is used.
During the initialization process, the device is opened and the runtime firmware loaded onto the device. The TMF882X device is them placed into "APP" mode.
loadFirmware()
To operate the TMF882X device, runtime firmware must be loaded. At startup, this library loads a default firmware version on library initialization.
This method allows the library user to set the firmware version on the device if a newer version is available from AMS.
isConnected()
Called to determine if a TMF882X device, at the provided i2c address is connected.
setI2CAddress()
Called to change the I2C address of the connected device.
getApplicationVersion()
Returns the version of the "Application" software running on the connected TMF882X device. See the TMF882X data sheet for more information regarding application software
getDeviceUniqueID()
Returns the unique ID of the connected TMF882X.
Note
This method uses an ID structure as defined by the AMS TMF882X SDK to store the ID value.
Debug and Development
setDebug()
Set the debug state fo the SDK. To use the full debug capabilities of the SDK, debug should be enabled before calling init/begin() on the library
getDebug()
Returns the current debug setting of the library
setInfoMessages()
Enable/Disable the output of info messages from the AMS SDK.
setMessageLevel()
Used to set the message level of the system.
The value passed in should be one, or a combination of the following flags.
getMessageLevel()
Return the current message settings. See setMessageLevel() description for possible values
setOutputDevice()
This method is called to provide an output Serial device that the is used to output messages from the underlying AMS SDK.
This is needed when debug or info messages are enabled in the library
getTMF882XContext()
Returns the context structure used by this library when accessing the underlying TMF882X SDK.
With this structure, users of this library can make direct calls to the interface functions of the TMF882X SDK.
Warning
Calling the TMF882X SDK functions directly could impact the operation of this library. Use this option with caution. | https://docs.sparkfun.com/SparkFun_Qwiic_TMF882X_Arduino_Library/api_device/ | 2022-08-08T05:21:55 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs.sparkfun.com |
event query syntax
The
event operator in the Splunk Infrastructure Monitoring Add-on retrieves Splunk Infrastructure Monitoring events generated by detectors. It uses the following syntax:
| sim event query=<string> limit=<integer> offset=<integer> org_id=<string>
The
POST /v2/event API endpoint returns Splunk Infrastructure Monitoring-generated events. The events have a name prefixed by sf_ to indicate that Splunk Infrastructure Monitoring owns them.
Search parameters
Query parameters
Usage examples
The following search gets incoming event data where the event category is
ALERT. These events occur when a detector triggers or clears an alert.
| sim event query="sf_eventCategory:ALERT" limit = 10 offset = 1
The following search gets events that have been created by the rule named
ITSI_Rule_1. The
is field must be
anomalous which means a detector created the event.
| sim event query="sf_eventType:*ITSI_Rule_1* AND is:anomalous"
The following search gets all events generated by the detector with the specified detector ID. The
sf_eventType field is the detector ID concatenated with the rule name.
| sim event query="sf_eventType:*EVqZqZvA0AA__EUkDNBvA0AA*"
The following search fetches events created by a rule with a name containing
Rule_1 from a specific Infrastructure Monitoring organization:
| sim event query="sf_eventType:*Rule_1* OR sf_resolutionMs:1000" org_id=EUdM8ESA4AA
The following search gets all events generated by the detector with the specified detector ID. The
sf_eventType field is the detector ID concatenated with the rule name:
| sim event query="NOT sf_eventType:*EVqZqZvA0AA__EUkDNBvA0AA* AND was:ok"
The following search gets events where the condition in parentheses is not true:
| sim event query="NOT (sf_eventCategory:*ALERT* AND was:ok)"
event query response
The response to an
event query request is a list of all the events with various fields matching the query and time range. All events have the following fields:
id
is
sf_eventCategory
sf_eventCreatedOnMs
sf_eventType
sf_incidentId
sf_notificationString
sf_resolutionMs
sf_schema
timestamp
tsId
was
In addition, any fields with the prefix
signal_resource correspond to the resource related to the rule specified in Splunk Infrastructure Monitoring. For example:
signal_resource_sf_metric- The metric on which the rule is based, which is the basis for generating events.
signal_resource_value- The value of the metric which caused an event to be generated.
All fields with the prefix
signal_threshold correspond to the thresholds set by users who created rules in Splunk Infrastructure Monitoring. For aggregate events, the resource value appears with the prefix
signal_threshold.
This documentation applies to the following versions of Splunk® Infrastructure Monitoring Add-on: 1.1.0, 1.2.0, 1.2.1, 1.2.2
Feedback submitted, thanks! | https://docs.splunk.com/Documentation/SIMAddon/1.2.2/Install/event | 2022-08-08T04:43:50 | CC-MAIN-2022-33 | 1659882570765.6 | [array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'],
dtype=object) ] | docs.splunk.com |
Analyze
Surface Analysis
Analyze
Surface >
UV Coordinates of a Point
The EvaluateUVPt command reports the u and v coordinates of a selected location on a surface.
CreatePoint
Creates a point object on the surface.
Normalized
The u and v parameter ranges are scaled so that the output values are between zero and one (rather than using the real parameter value).
This is useful when you want to know the percentage along parameter space of the point you pick without having to calculate it based on the domain of the surface.
The unscaled u and v parameter values are given.
See: Domain.
Analyze objects
Rhinoceros 7 © 2010-2022 Robert McNeel & Associates. 28-Jul-2022 | http://docs.mcneel.com/rhino/7/help/en-us/commands/evaluateuvpt.htm | 2022-08-08T04:00:09 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs.mcneel.com |
This guide is written and tested based on the Video Calls 1.3 which are compatible with eXo Platform 4.3.
Video Calls delivers a real-time video chat experience that allows your company to facilitate the direct communication among individuals and groups, and boost the employee engagement and productivity. Here is what Video Calls brings to you:
1 to 1 and Group Calls: Participate in face-to-face meetings with high-resolution video calls.
Platform-wide integration: Initiate video calls from anywhere you can see activities of co-workers, for example, activity streams, profile page or connection page.
Screen sharing: Share your desktop to your co-workers for the easier collaboration.
Fine-grained permissions: Easily assign the access permissions for video calls to users and groups, and the call types (1 to 1 Calls, or Group Calls, or both).
The Video Calls feature is currently available for the Enterprise edition and works for Mac and Windows users. Since Video Calls 1.2.0, it also supports Chrome on Linux.
In this chapter:
eXo Platform server setup
This topic is for administrators only that gives steps for installing Video Calls and configuring the SightCall keys.
This topic is for users who need to set up SightCall Plugin Installer on their clients before being able to use Video Calls.
Where to launch Video Calls on eXo Platform, and options you can perform when placing video calls, as well as common troubleshooting that you may meet.
Placing a call button in your application
Instructions to re-use the utilities committed by eXo's Weemo (SightCall) extension project. | https://docs-old.exoplatform.org/public/topic/PLF44/eXoAddonsGuide.VideoCalls.html | 2022-08-08T04:56:34 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs-old.exoplatform.org |
EOQ & ROP Calculation
Ranking Table
- Product Ranking Table for Corporate and by Branch including required Turns and Safety Stock Factor based on Annual Sales Dollars or Units.
- Use of dollars or units determined by billing parameter setting.
Purchasing Parameters
- Default Timer for new product added
- Weeks Active
- Number of Weeks with Activity
- Weighting Weeks
- EOQ and ROP rounding
- ROP and EOQ corporate or store
- Use Demand Smoothing
- Internal Review
- ROP/EOQ Use – DC or Corporate
- Branch Replenishment from DC – B/S/N
Ranking Calculation
- Ranking calculated based on Unit Sales or Dollar Sales according to Billing Parameter setting screen 4 – Sales Rank Basis (U/$)
- Run once for all locations
- Calculate Active Weeks from first receipt date to current date
- Ignore items without a first receipt date
- Calculate sales value by week for weighting weeks using invoice costing based on billing parameter setting, screen 2 invoice costing Average or Last (P6)
- Sum these values and the rank will set when the total sales value is less than the next value in the rank parameters.
- The sum of the sales value for all branches is used to calculate the corporate sales rank
Calculate EOQ Values
- This is done for each branch separately
- Age Week Sales
- If Reorder Allowed = N, S or K do nothing
- If Reorder Lock not 0 do nothing
- Calculate the Active Weeks
- Only done for products that have a first receipt date
- Active weeks from First Receipt to current
- If the Active Weeks is > Weeks Active parameter use the parameter value
- If the Active Weeks is 0 do nothing
- Calculate Annual Unit Sales = previous 52 weeks sales.
- If Active weeks is < 52 calculate annual units as sum of previous 52 weeks * 52 / Active Weeks.
- Weeks with Activity = Weeks in Active Weeks that have sales not = 0
- If Weeks with Activity < Number of Weeks with Activity move zero to Vendor and Transfer EOQ
- Assign the EOQ
- Based on the ranking parameter for the part divide the annual unit sales by the turns parameter giving the EOQ Vendor and Transfer.
Calculate Lead Time by Part
- Use the number of receiving’s set on the supplier record. If blank or 0 then default of 8 is used
- Calculate the average lead time for the most recent number receiving’s of the part.
- Compare the average lead time to the highest number of days and add the lower of the difference or 25% of the average lead time.
Calculate ROP Values
- Use DC lead time from branch record.
- If Lock value = 99 do nothing
- Does not use RANK-CUTOFF value
- Reduce the lock value by 1
- Calculate active weeks from first receipt date to current date
- Sum the sales units for the Weighting weeks.
- Compute the daily sales = total sales by week / (weighting weeks * 6). It was decided early on that only 6 business days per week should be used for the daily sales calculation.
- Using the sales ranking and lead time for the part calculate the Vendor ROP. Daily Sales * (Lead Time + Internal Review Time). Round this value based on Purchasing Paramenter settings. Add safety stock percentage and round again.
- Calculate the DC ROP value as Daily Sales * DC Lead Time, then round based on purchasing parameter setting. Add safety stock percentage and round again.
Calculate Corporate Values
- Add the Vendor EOQ for all locations and put the total as the Corporate EOQ
- Add the Vendor ROP for all locations and put the total as the Corporate ROP
Daily Suggested Order
- The daily suggested order calculation can be based the DC or Corporate ROP/EOQ
- Recalculates for all parts each day with the exception of parts on outstanding suggested orders where the order quantity has been modified and not ordered yet.
- Suggested order will buy direct to branch based on supplier setting
- Do not order product set as N for reorder allowed unless there is a customer backorder.
- DC available quantity for replenishment to branches calculated based parameter Branch Replenishment from DC.
- Transfers it the quantity available is equal to or less than the ROP DC order the DC EOQ – Qty Available + ROP DC base on DC Available Quantity. Use branch purchase quantity for transfers
- Corporate Vendor Orders compare the DC available quantity to the Corporate ROP and order when the available is less that the corporate ROP | https://docs.amscomp.com/books/counterpoint/page/eoq-rop-calculation | 2022-08-08T03:35:51 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs.amscomp.com |
You're viewing Apigee Edge documentation.
View Apigee X documentation.
On Wednesday, August 14, we will begin releasing a new version of Apigee Edge integrated portal.
New features
The following section summarizes the new feature in this release.
Configure a content security policy
Configure. For more information, see Configure a content security policy.
Bugs fixed
The following bugs are fixed in this release. This list is primarily for users checking to see if their support tickets have been fixed. It's not designed to provide detailed information for all users. | https://docs.apigee.com/release/notes/190814-apigee-edge-public-cloud-release-notes-integrated-portal?hl=es-AR | 2022-08-08T03:50:21 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs.apigee.com |
Server service configuration and tuning
This article describes how to configure and tuning Windows Server service.
Applies to: Windows Server 2012 R2
Original KB number: 128167
Summary
Although the Windows Server service is self-tuning, it can also be configured manually through Control Panel Service. Normally, the server configuration parameters are auto-configured (calculated and set) each time you boot Windows. However, if you run NET CONFIG SERVER in conjunction with the
/AUTODISCONNECT,
/SERVCOMMENT OR
/HIDDEN switches the current values for the automatically tuned parameters are displayed and written to the registry. Once these parameters are written to the registry, you can't tune the Server service using Control Panel Networks.
If you add or remove system memory, or change the server size setting minimize/balance/maximize), Windows doesn't automatically tune the Server service for your new configuration. For example, if you run
NET CONFIG SRV /SRVCOMMENT, and then add more memory to the computer, Windows doesn't increase the calculated value of autotuned entries.
Typing NET CONFIG SERVER at the cmd prompt without additional parameters leaves auto tuning intact while displaying useful configuration information about the server.
More information
The Server service supports information levels that let you set each parameter individually. For example, the command NET CONFIG SRV /HIDDEN uses information level 1016 to set just the hidden parameter. However, NET.EXE queries and sets information levels 102 (hidden, comment, users, and disc parameters) and 502. As a result, all parameters in the information level get permanently set in the Registry. SRVMGR.EXE and the Control Panel Server query and set only level 102 (not level 502) when you change the server comment.
Administrators wishing to hide Windows computers from the browse list or change the autodisconnect value should make those specific changes using REGEDT32.EXE instead of the command-line equivalents discussed above. The server comment can be edited using the description field of the Control Panel Server applet or Server Manager. restore the LAN Manager Server parameters to the defaults, or to reconfigure Windows so that it auto-configures the Server service:
Run Registry Editor (REGEDT32.EXE).
From the HKEY_LOCAL_MACHINE subtree, go to the following key:
\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters
Remove all entries except the following:
EnableSharedNetDrives
Lmannounce
NullSessionPipes
NullSessionShares
Size
Note
You may have other entries here that are statically coded. Do not remove these entries.
Quit Registry Editor and restart Windows. | https://docs.microsoft.com/en-US/troubleshoot/windows-server/performance/server-service-configuration-tuning | 2022-08-08T05:44:19 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs.microsoft.com |
Windows Server 2012 R2 Update.
Obtaining the update).
Release notes, system requirements, deprecated features, and related release documentation for Windows Server 2012 R2 also apply to Windows Server 2012 R2 Update. See Install and Deploy Windows Server 2012 R2 and Windows Server 2012 for those topics.
Note
To confirm the exact version of Windows Server 2012 R2 that is installed on a computer, run Msinfo32.exe. If Windows Server 2012 R2 Update is installed, the value reported for Hardware Abstraction Layer will be 6.3.9600.17031.
Changes included in the update (see Desktop Office 365 services by using an email address instead of a UPN. This change does not affect the Active Directory schema. For more information, see Configuring Alternate Login ID.
The update includes all other updates released since Windows Server 2012 R2 was released.
See also
What's new in Windows 8.1 Update and Windows RT 8.1 Update
Install and Deploy Windows Server 2012 R2 and Windows Server 2012
Desktop Experience Overview | https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/dn645472(v=ws.11)?redirectedfrom=MSDN | 2022-08-08T05:42:58 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs.microsoft.com |
Tutorial: Display a YouTube Channel
In this tutorial, you will learn how web entities work by creating one displaying a YouTube channel. You can watch videos in Vircadia using this web entity.
On This Page:
Prerequisites
Consider getting familiar with the following concepts before starting this tutorial:
Create a Web Entity
A web entity is a flat object on which you can view any website of your choosing. A web entity lets you access the internet from inside your domain.
To create a web entity:
In Interface, pull up your HUD or Tablet and go to Create.
Click the 'Web' icon to create a web entity. By default, a web entity always displays Vircadia's home page.
Note
Currently, only 20 web entities can run at the same time in a domain to avoid performance issues.
Display Vircadia's YouTube Channel
You can make the web entity display Vircadia's YouTube channel.
In Interface, pull up your HUD or Tablet and go to Create.
Select your web entity and go to the 'Properties' tab.
Scroll down until you see the 'Source URL' option. Enter the Vircadia YouTube channel URL:. You should see the new page as soon as you hit 'Enter'.
See Also | https://docs.vircadia.com/create/entities/display-youtube.html | 2022-08-08T03:19:28 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs.vircadia.com |
The URL can be used to link to this page
Your browser does not support the video tag.
Item 3 - Reso to Continue the Existence of a Local Emergency within the City of Poway Due to the Novel Coronavirus (COVID-19) Global Pandemic
November 2, 2021, Item #3""6F Pc 5if (l '2) AGENDA RE PO RT City of Poway ~ ""': _,., -'''! ,~ 11!l ''~ CITY cou NCI L DATE: TO: FROM: CONTACT: SUBJECT: summary: November 2, 2021 Honorable Mayor and Memberswf the Cl Council Scott Post, Interim Fire Chief Scott Post, Interim Fire Chief (858)668-4462 or [email protected] Resolution to Continue the Existence of a Local Emergency Within the City of Poway Due to the Novel Coronavirus (COVID-19) Global Pandemic The Novel Coronavirus (COVID-19) global pandemic continues to cause unprecedented impacts on all economic and social segments of the United States. Federal, state, and county directives, mandates and orders to prevent, control and manage the spread of COVID-19 have, and continue to, impact Poway residents, businesses and visitors. On March 18, 2020, the City Council approved a resolution proclaiming a local emergency. The adopted resolution requires the City Council to either continue the emergency action or declare the emergency ended at each regular meeting. Recommended Action: It is recommended that the City Council approve a resolution continuing the Proclamation of Local Emergency authorizing the City Manager to take necessary actions to protect the public and welfare of the City from the serious and imminent threat of COVI D-19. This action requires a four-fifths (4/5) vote of the City Council. Discussion: The COVID-19 global pandemic continues to impact business, education, healthcare, military, and social segments of the United States. COVID-19 has resulted in a swift economic slowdown and high unemployment rates. Federal, state, and county directives, mandates, orders and guidelines have been issued to prevent, control and manage the spread of COVID-19. While efforts are focused on re-opening the economy and relaxing restrictions, impact on Poway residents, businesses and visitors continues. In response to the COVID-19 global pandemic, and its impact on Poway, the City Manager, serving as Director of Emergency Services for the City of Poway, proclaimed a local emergency on March 13, 2020. On March 18, 2020, the City Council approved Resolution No. 20-013 ratifying the City Manager's Proclamation of Local Emergency. The City Council approved to continue the emergency action in 2020 on April 7, April 21, May 5, May 19, June 2, June 16, July 7, July 21, August 4, September 1, September 15, October 6, October 20, November 17, December 1, and December 15. And in 2021, the City Council approved to continue the emergency action on January 19, February 2, February 16, March 2, March 16, April 6, April 20, May 4, May 18, June 1, June 1 S, July 20, August 3, 1 of 5 November 2, 2021, Item #3August 17, September 7, October 5, and October 19. The adopted resolution requires the City Council to either continue the emergency action or declare the emergency ended at each regular meeting. Environmental Review: This action is not subject to ·review under the California Environmental Quality Act (CEQA). Fiscal Impact: As of October 18, 2021, City costs to respond to COVID-19 are estimated at $1,038,744. These costs specifically relate to FEMA eligible expenditures. The total fiscal impact is unknown at this time. 
Per the City Council adopted General Fund Reserve policy, the City maintains a General Fund Reserve of 45 percent of the budgeted annual General Fund operating expenditures, or $19,233,004 as of June 30, 2020, net of the $2,000,000 used from the Extreme Events/Public Safety reserve the City Council approved on April 16, 2020 to fund the Poway Emergency Assistance Recovery Loan (PEARL) program. The PEARL program is discussed in more detail below. Within that 45 percent, $12,142,455 is set aside for Extreme Events/Public Safety. Based upon the City's reserve policy, there are adequate reserves to cover the costs to respond to this health emergency. Further, staff believes some of the costs are recoverable under State and Federal Disaster programs. Amounts recovered under these programs will be used to replenish the General Fund reserve. Pursuant to the reserve policy, staff will return with a plan to replenish any General Fund reserves not replenished under a State or Federal Disaster program. Staff will recommend applicable budget adjustments prior to the completion of the current fiscal year. In addition to using reserves to respond to COVID-19, as mentioned above, on April 20, 2020, the City Council approved $2,000,000 to fund the PEARL program for small businesses. The PEARL program's goal is to offer financial assistance to small businesses located in Poway by complementing existing state and federal loan programs and to provide a financial bridge to businesses to survive the current emergency. The PEARL program provides loans of up to $50,000 to eligible businesses. As of October 18, 2021, staff has received 80 applications requesting $2,912,165 in loans. Based on staffs review, 35 loans totaling $1,385,326 have been approved and three loans totaling $120,000 have been repaid. The reserve fund will be replenished from the repayment of PEARL loans over a three-year period following the end of the local COVID-19 emergency. This period falls within the General Fund Reserve Policy's direction to fully replenish reserves within five years of use. Public Notification: None. Attachments: A. Resolution B. Proclamation of Local Emergency Reviewed/Approved By: Wendy serman Assistant City Manager 2 of 5 Reviewed By: Alan Fenstermacher City Attorney November 2, 2021, Item #3RESOLUTION NO. 
21-A RESOLUTION OF THE CITY COUNCIL OF THE CITY OF POWAY, CALIFORNIA, FINDING AND DECLARING THE CONTINUED EXISTENCE OF AN EMERGENCY WITHIN THE CITY DUE TO THE NOVEL CORONAVIRUS (COVID-19) GLOBAL PANDEMIC WHEREAS, the Novel Coronavirus (COVID-19) global pandemic in the City of Poway, commencing on or about January 24, 2020 that creates a threat to public health and safety; WHEREAS, Government Code section 8630 and Poway Municipal Code (PMC) Section 2.12.060 empower the City Manager, acting as the Director of Emergency Services, to proclaim the existence of a local emergency when the City is affected by a public calamity, and the City Council is not in session; WHEREAS, on March 13, 2020, the City Manager, acting pursuant to Government Code section 8630 and PMC section 2.12.060, proclaimed the existence of a local emergency based on conditions of extreme peril to the health and safety of persons caused by the Novel Coronavirus (COVID-19) global pandemic; WHEREAS, on March 18, 2020, the City Council, acting pursuant to Government Code section 8630 and PMC section 2.12.065, ratified the existence of a local emergency within seven (7) days of a Proclamation of Local Emergency by the City Manager; WHEREAS, the City Council, acting pursuant to PMC section 2.12.065, approved extending the emergency declaration during regularly scheduled meetings in 2020 on April 7, April 21, May 5, May 19, June 2, June 16, July 7, July 21, August 4, September 1, September 15, October 6, October 20, November 17, December 1, December 15, and in 2021 on January 19, February 2, February 16, March 2, March 16, April 6, April 20, May 4, May 18, June 1, June 15, July 20, August 3, August 17, September 7, October 5 and October 19; WHEREAS, Public Contract Code Section 20168 provides that the City Council may pass by four-fifths (4/5) vote, a resolution declaring that the public interest and necessity demand the immediate expenditure of public money to safeguard life, health, or property; WHEREAS, upon adoption of such resolution, the City Manager may expend any sum required in the emergency and report the same to the City Council in accordance with Public Contract Code Section 22050; WHEREAS, if such expenditure is ordered, the City Council shall review the emergency action at each regular meeting, to determine if there is a need to continue the action or if the Proclamation of Local Emergency may be terminated; and WHEREAS, such the Novel Coronavirus (COVID-19) global pandemic constitute an emergency within the terms of Public Contract Code Sections 20168 and 22050 which requires that the City Manager be able to act quickly and without complying with the notice and bidding procedures of the Public Contract Code to safeguard life, health, or property. NOW, THEREFORE, BE IT RESOLVED by the City Council of the City of Poway hereby finds and declares: 3 of s ATTACHMENT A November 2, 2021, Item #3Resolution No. 
21-Page 2 SECTION 1: An emergency continues to exist within the City as the result of the Novel Coronavirus (COVID-19) global pandemic; and (a) The continuing threat of the Novel Coronavirus (COVID-19) global pandemic requires that the City be able to expend public money in order to safeguard life, health, or property; (b} The City Manager, as the City's Personnel Officer, is authorized to take actions necessary to alter employee leave policies and ensure a safe and healthy workforce; (c) The City Manager is authorized to safeguard life, health, or property without complying with notice or bidding procedures; and (d) Once such expenditure is made, the City Manager shall report the conditions to the City Council at each regular meeting, at which time the City Council shall either continue the emergency action or declare the emergency ended. SECTION 2: This Proclamation of Local Emergency and all subsequent resolutions in connection herewith shall require a four-fifths (4/5) vote of the City Council. 4of 5 November 2, 2021, Item #3PROCLAMATION OF LOCAL EMERGENCY WHEREAS, section 2.12.060 of the Poway Municipal Code empowers the Director of Emergency Services to proclaim the existence or threatened existence of a local emergency when the City is affected or likely to be affected by a public calamity and the City Council is not in session; WHEREAS, the City Manager, as Director of Emergency Services of the City of Poway, does hereby find that conditions of extreme peril to the safety of persons and property have arisen within the City of Poway, caused by the Novel Coronavirus (COVID-19) commencing on January 24, 2020; WHEREAS, that the City Council of the City of Poway is not in session and cannot immediately be called into session; and WHEREAS, this Proclamation of Local Emergency will be ratified by the City Council within seven days of being issued. NOW, THEREFORE, IT IS HEREBY PROCLAIMED by the Director of Emergency Services for the City of Poway, that a local emergency now exists throughout the City and that said local emergency shall be deemed to continue to exist until its termination is proclaimed by the City Council; IT IS FURTHER PROCLAIMED AND ORDERED that during the existence of said local emergency the powers, functions, and duties of the emergency organization of this City shall be those prescribed by state law, ordinances, and resolutions of this City, and by the City of Poway Emergency Plan; and IT IS FURTHER PROCLAIMED AND ORDERED that a copy of this Proclamation of Local Emergency be forwarded to the State Director of the Governor's Office of Emergency Services with a request that; 1. The State Director find the Proclamation of Local Emergency acceptable in accordance with provisions of the Natural Disaster Assistance Act; and 2. The State Director forward this Proclamation, and request for a State Proclamation and Presidential Declaration of Emergency, to the Governor of California for consideration and action. PASSED AND ADOPTED by the Director of Emergency Services for the City of Poway this 13th day of March 2020. Director of Emergency Services 5 ofS ATTACHMENT B | https://docs.poway.org/WebLink/DocView.aspx?id=164345&dbid=0&repo=CityofPoway | 2022-08-08T04:36:11 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs.poway.org |
LoadBase¶
- class geojson_modelica_translator.model_connectors.load_connectors.load_base.LoadBase(system_parameters, geojson_load)¶
Base class of the load connectors.
- __init__(system_parameters, geojson_load)¶
Base class for load connectors.
- Parameters
system_parameters – SystemParameter object, the entire system parameter file which will be used to generate this load.
geojson_load – dict, the GeoJSON portion of the load to be added (a single feature). This is now a required field.
Methods
Attributes
- add_building(urbanopt_building, mapper=None)¶
Add building to the load to be translated. This is simply a helper method.
- Parameters
urbanopt_building – an urbanopt_building (also known as a geojson_load)
mapper – placeholder object for mapping between urbanopt_building and load_connector building.
- | https://docs.urbanopt.net/geojson-modelica-translator/_autosummary/geojson_modelica_translator.model_connectors.load_connectors.load_base.LoadBase.html | 2022-08-08T05:10:41 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs.urbanopt.net |
]
Removes the association between a specified Resolver rule and a specified VPC.
Warning
If you disassociate a Resolver rule from a VPC, Resolver stops forwarding DNS queries for the domain name that you specified in the Resolver rule.
See also: AWS API Documentation
See 'aws help' for descriptions of global parameters.
disassociate-resolver-rule --vpc-id <value> --resolver-rule-id <value> [--cli-input-json <value>] [--generate-cli-skeleton <value>]
--vpc-id (string)
The ID of the VPC that you want to disassociate the Resolver rule from.
--resolver-rule-id (string)
The ID of the Resolver rule that you want to disassociate from the specified VPC.
- disassociate a Resolver rule from an Amazon VPC
The following
disassociate-resolver-rule example removes the association between the specified Resolver rule and the specified VPC. You can disassociate a rule from a VPC in the following circumstances:
For DNS queries that originate in this VPC, you want Resolver to stop forwarding queries to your network for the domain name that is specified in the rule.
You want to delete the forwarding rule. If a rule is currently associated with one or more VPCs, you must disassociate the rule from all VPCs before you can delete it.
aws route53resolver disassociate-resolver-rule \ --resolver-rule-id rslvr-rr-4955cb98ceexample \ --vpc-id vpc-304bexam
Output:
{ "ResolverRuleAssociation": { "Id": "rslvr-rrassoc-322f4e8b9cexample", "ResolverRuleId": "rslvr-rr-4955cb98ceexample", "Name": "my-resolver-rule-association", "VPCId": "vpc-304bexam", "Status": "DELETING", "StatusMessage": "[Trace id: 1-5dc5ffa2-a26c38004c1f94006example] Deleting Association" } }
ResolverRuleAssociation -> (structure)
Information about the
DisassociateResolverRulerequest, including the status of the request.
Id -> (string)The ID of the association between a Resolver rule and a VPC. Resolver assigns this value when you submit an AssociateResolverRule request.
ResolverRuleId -> (string)The ID of the Resolver rule that you associated with the VPC that is specified by
VPCId.
Name -> (string)The name of an association between a Resolver rule and a VPC.
VPCId -> (string)The ID of the VPC that you associated the Resolver rule with.
Status -> (string)A code that specifies the current status of the association between a Resolver rule and a VPC.
StatusMessage -> (string)A detailed description of the status of the association between a Resolver rule and a VPC. | https://docs.aws.amazon.com/cli/latest/reference/route53resolver/disassociate-resolver-rule.html | 2022-08-08T05:54:05 | CC-MAIN-2022-33 | 1659882570765.6 | [] | docs.aws.amazon.com |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.