Managing Application Versions
Elastic Beanstalk creates an application version whenever you upload source code. This usually occurs when you create a new environment or upload and deploy code using the environment management console or EB CLI. You can also upload a source bundle without deploying it from the application management console.
To create a new application version
Open the Elastic Beanstalk console.
Choose an application.
In the navigation pane, choose Application Versions.
Choose Upload.
Enter a label for this version in the Version label field.
(Optional) Enter a brief description for this version in the Description field.
Choose Browse to specify the location of the source bundle.
Note
The file size limit is 512 MB.
Choose Upload.
The file you specified is associated with your application. You can deploy the application version to a new or existing environment.
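If you prefer to script this step, a roughly equivalent AWS CLI call is sketched below; the application, environment, bucket, and key names are placeholders, and the source bundle must already be uploaded to Amazon S3.

# Register a new application version from a source bundle in S3
aws elasticbeanstalk create-application-version \
  --application-name my-app \
  --version-label v1.0.0 \
  --description "First release" \
  --source-bundle S3Bucket=my-app-bucket,S3Key=my-app-v1.0.0.zip

# Deploy that version to an existing environment
aws elasticbeanstalk update-environment \
  --environment-name my-app-env \
  --version-label v1.0.0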
Over time, your application can accumulate a large number of application versions. To save storage space and avoid hitting the application version limit, you can configure Elastic Beanstalk to delete old versions automatically.
Note
Deleting an application version does not have any effect on environments currently running that version.
To delete an application version
Open the Elastic Beanstalk console.
Choose an application.
In the navigation pane, choose Application versions.
In the list of application versions, select the check box next to the application version that you want to delete, and then click Delete.
(Optional) To leave the application source bundle for this application version in your Amazon S3 bucket, uncheck Delete versions from Amazon S3.
Choose Apply.
Lifecycle settings are applied when you create new application versions. For example, if you configure a maximum of 25 application versions, Elastic Beanstalk deletes the oldest version when you upload a 26th version. If you set a maximum age of 90 days, any versions more than 90 days old are deleted when you upload a new version.
If you don't choose to delete the source bundle from Amazon S3, Elastic Beanstalk deletes the version from its records, but the source bundle is left in your Elastic Beanstalk storage bucket. The application version limit applies only to versions that Elastic Beanstalk tracks, so you can delete versions to stay within the limit but retain all source bundles in Amazon S3 if needed.
Definicions del glosari (Glossary definitions)
The articles in this category will be shown on the Glossary page. Only add pages to this category if they are suitable for the Glossary. These pages must be in the Chunk namespace.
Pages in category 'Glossary definitions/ca' — the following 7 pages are in this category, out of 7 total:
- Chunk:Alias/ca
- Chunk:Core/ca
- Chunk:Model-View-Controller/ca
- Chunk:Patch/ca
- Chunk:Plugin/ca
- Chunk:Split menus/ca
- Chunk:Template/ca
Screen.trashmanager.15
Contents: How to access · Description · Screenshot · Icons · Quick Tips · Related information
Components Newsfeeds Feeds
Contents
How to Access
Select Components → News Feeds → Feeds from the drop-down menu on the back-end of your Joomla! installation, or select the "Feeds" link from the News Feeds Manager - Categories.
Description
The News Feed Manager screen allows you to add news feeds to your Joomla! site. Key columns include:
- Category. The Category to which this News Feed belongs.
- Access. The Access Level assigned to the feed.
- Feed Count. The number of articles included in the feed.
- Cache Time. The number of seconds for which to cache the feed locally. It can safely be left at the default.
- Language. The language of the feed.
Filters may be combined; only items matching both selections will display in the list.
- Select Language. Use the drop-down list box to select the language.
Quick Tips
- You need to add at least one Category for news feeds before you add the first feed. Categories are added using the Category Manager by clicking on 'News Feeds', and then on 'Categories' in the 'Components' menu.
Related Information
- To create or Edit News Feeds: News Feeds Manager - New/Edit
- To work with News Feed Categories: Category Manager (News Feeds) | https://docs.joomla.org/index.php?title=Help16:Components_Newsfeeds_Feeds&diff=85556&oldid=28435 | 2015-04-18T12:40:17 | CC-MAIN-2015-18 | 1429246634331.38 | [] | docs.joomla.org |
Inconsistent Backup Configuration
Description
Cloud Manager has detected that the configuration for a backup does not match the configuration of the MongoDB deployment.
As some settings affect the on-disk format or the process of applying oplog, the backup process encourages users to verify that their backup configurations are consistent with their deployed configurations. This alert triggers if no node in your deployment exactly matches your backup configuration.
Common Triggers
- The storage engine of a deployment has changed since backup was started.
- The MongoDB deployment has changed its startup options since backup was started. The startup options that you should ensure match your backup are listed below (a sketch for checking them on a running deployment follows the list):
nssize
directoryperdb
smallfiles
wiredTigerDirectoryForIndexes
wiredTigerBlockCompressor
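To check these options against a running mongod, one approach is to query the server directly — a minimal sketch using the mongo shell, assuming the shell can reach the deployment (adjust host and port as needed):

# Print the options the mongod process was actually started with
mongo --host my-mongod-host --port 27017 --eval "printjson(db.serverCmdLineOpts())"

# Print the storage engine currently in use
mongo --host my-mongod-host --port 27017 --eval "printjson(db.serverStatus().storageEngine)"

Comparing this output with the backup configuration shows which of the options above differ.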
Possible Solutions
- Update the storage engine from the Backup dashboard. This triggers a resync of the backup.
- If the startup options are inconsistent, resync the backup manually from the Backup dashboard. | https://docs.cloudmanager.mongodb.com/reference/alerts/inconsistent-backup/ | 2017-08-16T23:23:11 | CC-MAIN-2017-34 | 1502886102757.45 | [] | docs.cloudmanager.mongodb.com |
AWS IAM Policy
Overview
When Cloud Manager deploys and manages MongoDB instances on AWS infrastructure, Cloud Manager accesses AWS by way of a user’s access keys. The user associated with the keys must have an attached IAM policy with the following permissions. For information on attaching the policy, see Provision Servers.
For an overview of AWS IAM policies, see Amazon’s IAM policy documentation.
Example Policy
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": ["iam:*AccessKey*", "iam:GetUser"], "Resource": ["*"] }, { "Effect": "Allow", "Action": [ "ec2:AttachVolume", "ec2:AuthorizeSecurityGroupIngress", "ec2:CreateKeyPair", "ec2:CreateSecurityGroup", "ec2:CreateTags", "ec2:CreateVolume", "ec2:DeleteKeyPair", "ec2:DeleteSecurityGroup", "ec2:DeleteTags", "ec2:DeleteVolume", "ec2:DescribeAccountAttributes", "ec2:DescribeAvailabilityZones", "ec2:DescribeInstanceAttribute", "ec2:DescribeInstanceStatus", "ec2:DescribeInstances", "ec2:DescribeKeyPairs", "ec2:DescribeRegions", "ec2:DescribeSecurityGroups", "ec2:DescribeSubnets", "ec2:DescribeTags", "ec2:DescribeVpcs", "ec2:DescribeVpcAttribute", "ec2:DescribeVolumeStatus", "ec2:DescribeVolumes", "ec2:DescribeVolumeAttribute", "ec2:ImportKeyPair", "ec2:RunInstances", "ec2:StartInstances", "ec2:StopInstances", "ec2:RebootInstances", "ec2:TerminateInstances" ], "Resource": [ "*" ] } ] }
Policy Settings
The following table explains why each setting is required. Cloud Manager uses permissions provided by the customer only for CRUD actions on the resources Cloud Manager creates for the customer. Additionally, Cloud Manager performs only Read actions for resources the customer selects (VPC, subnet, etc.) and for connected resources (network ACL, route table, etc.).
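As a sketch of one way to attach the example policy above — assuming it has been saved locally as cloud-manager-policy.json and that the IAM user is the one whose access keys Cloud Manager uses (both names are placeholders):

# Attach the policy inline to the IAM user
aws iam put-user-policy \
  --user-name cloud-manager-provisioning-user \
  --policy-name CloudManagerProvisioning \
  --policy-document file://cloud-manager-policy.json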
Search as You Type
- Disabling the 'Search as You Type' functionality
- Commands
- Modifying the Searching Criteria
- Change the Label Text of the Search Panel
- Add Search Criteria Programmatically
For columns of string type the operator is set to Contains. For all other types the operator is set to IsEqualTo.
[XAML] Example 1: Showing the Search Panel
<telerik:RadGridView
Figure 1: Showing the Search Panel.
[XAML] Example 2: Disabling the Search Panel
<telerik:RadGridView.
[XAML] Example 3: Setting the IsSearchingDeferred to True
<telerik:RadGridView
Commands
Two new commands have been exposed for the text search functionality.
Search: Executed in order to show the search panel.:
[C#] Example 4: Clearing search criteria on SearchPanelVisibilityChanged
public MainWindow() { InitializeComponent(); this.RadGridView.SearchPanelVisibilityChanged += RadGridView_SearchPanelVisibilityChanged; }()); } }
[VB.NET] Example 4: Clearing search criteria on SearchPanelVisibilityChanged
Public Sub New() InitializeComponent() AddHandler Me.RadGridView.SearchPanelVisibilityChanged, AddressOf RadGridView_SearchPanelVisibilityChanged End Sub. | http://docs.telerik.com/devtools/wpf/controls/radgridview/features/search-as-you-type | 2017-08-16T23:40:03 | CC-MAIN-2017-34 | 1502886102757.45 | [array(['images/gridview-textsearch-showsearchpanel.png',
'Showing the Search Panel'], dtype=object) ] | docs.telerik.com |
Visit the Setup application
The Setup application helps you learn about navigation and typing, change options to personalize your BlackBerry smartphone, and set up network connections, such as Bluetooth connections. You can also set up email addresses and social networking accounts. The Setup application should appear automatically the first time that you turn on your smartphone.
- If the Setup application does not appear automatically, on the home screen or in a folder, click the Setup icon.
- Click a section to change options or to complete a short series of prompts that help you set the options.
Tutorials
Feature Guides
AJAX
Maven Support
- Maven Jetty Plugin
- Maven Jetty JSP Compilation Plugin
Glassfish
JBoss
Useful Servlets and Filters
Integrations
- DWR
- JIRA
- ActiveMQ
- Jetspeed2
- Atomikos Transaction Manager
- JOTM
- Bitronix Transaction Manager
- MyFaces
- JSF Reference Implementation
- Jakarta Slide
- Jetty with Spring
- Jetty with XBean
- Useful web developer tools
- Maven web app project archetypes - from Webtide
<groovyc>
Description
Compiles Groovy source files and, if the joint compilation option is used, Java source files.
Required taskdef
Assuming groovy-all-VERSION.jar is in my.classpath you will need to declare this task at some point in the build.xml prior to the groovyc task being invoked.
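A minimal sketch of that declaration (assuming my.classpath is a path defined elsewhere in the build file):

<!-- Make the groovyc task available to this build -->
<taskdef name="groovyc"
         classname="org.codehaus.groovy.ant.Groovyc"
         classpathref="my.classpath"/>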
<groovyc> Attributes
Notes: Joint compilation is only available since 1.1-beta-2, jointCompilationOptions is no longer supported, use the nested javac instead
<groovyc> Nested Elements
Notes:
- For path structures see for example
- For usages of the javac task see
- The nested javac task behaves more or less as documented for the top-level javac task. srcdir, destdir, classpath, encoding for the nested javac task are taken from the enclosing groovyc task. If these attributes are specified then they are added, they do not replace. In fact, you should not attempt to overwrite the destination. Other attributes and nested elements are unaffected, for example fork, memoryMaximumSize, etc. may be used freely.
Joint Compilation
Joint compilation means that the Groovy compilation will parse the Groovy source files, create stubs for all of them, invoke the Java compiler to compile the stubs along with Java sources, and then continue compilation in the normal Groovy compiler way. This allows mixing of Java and Groovy files without constraint.
To invoke joint compilation with the jointCompilationOptions attribute, you have to simulate the command line with compiler switches. -j enables the joint compilation mode of working. Flags to the Java compiler are presented to the Groovy compiler with the -F option. So, for example, flags like nowarn are specified with -Fnowarn. Options to the Java compiler that take values are presented to the Groovy compiler using -J options. For example -Jtarget=1.4 -Jsource=1.4 is used to specify the target level and source level. So a complete jointCompilationOptions value may look like: "-j -Fnowarn -Jtarget=1.4 -Jsource=1.4". Clearly, using this way of specifying things is a real nuisance and not very Ant-like. In fact there are thoughts to deprecate this way of working and remove it as soon as is practical.
The right way of working is, of course, to use a nested tag and all the attributes and further nested tags as required. Here is an example:
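(The build fragment below is a sketch only; the src.dir, classes.dir, and compile.classpath names are placeholders.)

<!-- Joint compilation: the nested javac compiles the generated stubs together with any Java sources -->
<groovyc srcdir="${src.dir}" destdir="${classes.dir}">
  <classpath>
    <pathelement path="${classes.dir}"/>
    <path refid="compile.classpath"/>
  </classpath>
  <javac source="1.4" target="1.4" debug="on"/>
</groovyc>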
To restate: the javac task gets the srcdir, destdir and classpath from the enclosing groovyc task. | http://docs.codehaus.org/display/GROOVY/The+groovyc+Ant+Task | 2014-12-18T02:40:15 | CC-MAIN-2014-52 | 1418802765584.21 | [] | docs.codehaus.org |
Installation » Friendly URLs
FURLs
What are Friendly URLs?
When we say friendly URLs what we really mean is 'search engine friendly URLs'. Search engines generally do not do a good job at spidering sites where pages are all something like index.php?id=123. To combat this, we need to generate a different name for each page.
This needs to be handled from two ends:
- we need to get etomite to generate links which use a different name per page, rather than using a parameter to identify the page.
- we need to get the web server to redirect these page names back to etomite's parser (index.php) rather than just reporting that the file doesn't exist.
Etomite provides an inbuilt mechanism for generating a different name for each page, so that helps us deal with the first requirement.
We also need to configure the webserver, and the first webserver to permit this sort of redirection was the open-source Apache web server. Etomite comes with a standard file that can be used to enable the functionality we need on most Apache server configurations. The Apache mod-rewrite module is the component that performs the magic of transforming a request from one form to another, and the file that controls this is the .htaccess file.
We need to deal with both of these - if we don't generate the links in the first place, nothing will ever try to use them; but if we don't have a server process that can handle them, they'll never end up delivering the correct page.
For those not using Apache web servers, there are a couple of other options that are covered in outline later in this section, but first we'll cover the basics with the tools supported directly by etomite.
Planning
Friendly URLs are an option that can be enabled using the Etomite manager. There are three options that can be set once friendly URLs are enabled:
- prefix - a short string that goes in front of the document id when making the page name
- suffix - a short string that goes at the end of the document id, usually something like ".htm"
- alias - you can use the document alias rather than the id (and this will be combined with any prefix and suffix you have defined)
It is best to use aliases, because these can be meaningful names. Putting a meaningful name in the page name can help in getting better search engine results, as it indicates quite strongly what a page is about. Search engines may take the hint!
It is also a good idea to use a suffix such as ".htm". It makes it easier to write a more precise matching rule in the Apache .htaccess file.
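As an illustrative sketch only — the .htaccess file shipped with Etomite may differ — a rewrite rule for an id-based setup with a ".htm" suffix could look like this:

# Pass friendly page names back to the Etomite parser
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.+)\.htm$ index.php?id=$1 [QSA,L]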
The prefix is generally of limited use, but might be used if for some reason you can't use aliases. (There is also another special case where the prefix may be used that is covered in the Friendly URLs - other ways subsection.)
Using Aliases
You can assign an alias to a document, such as `sitemap`, and use that as URI: yoursite/sitemap will point to the page you aliased. Make sure you don't give more than one document the same alias, that you don't use a number as an alias, and that the alias is not the name of an existing folder. It may seem obvious to use 'home' as the alias for your home page - resist the temptation and use index instead (this will be better for search engines!) | http://docs.etomite.org/installation-furls.html | 2009-07-02T20:27:45 | crawl-002 | crawl-002-021 | [] | docs.etomite.org |
On some servers (Sun Fire V215, V245, and V445), DHCP is enabled by default on the network management port. This allows an administrator network access to ALOM without first requiring a serial connection to the serial management port. To be secure by default, there are specific steps and constraints for the initial login through the network.
However, if you want to customize ALOM for your installation, you must perform some basic tasks.
Here are the tasks you must complete to customize ALOM:
1. Plan how to customize your configuration. See Planning Your ALOM Configuration.
2. Use the configuration worksheet to record your settings. See Configuration Variable Worksheet.
3. Power on your host server. See Powering On Your Host Server.
4. Run the setupsc command. See Setting Up ALOM.
5. Use the configuration variables to customize the ALOM software. See To Use Configuration Variables in the ALOM Command Shell.
Explanations of the listed tasks follow.
ALOM software comes preinstalled on your host server and is ready to run when you apply power to the server. You only need to follow the directions in this section if you decide to change the default configuration of ALOM to customize it for your installation.
Before you run the setupsc command, you must decide how you want ALOM to manage your host server. You must make the following decisions about your configuration:
Once you make those decisions, print the configuration worksheet shown in Configuration Variable Worksheet, and use it to record your responses to the setupsc command prompts.
The ALOM hardware contains two types of communication ports:
Both ports give you access to the ALOM command shell. By default, ALOM communicates through the SERIAL MGT port at startup. All initial configuration must be done through the serial management port on the Sun Fire V210, V240, V250, and V440 servers and Netra 210, 240, 440 servers. Some servers (Sun Fire V215, V245, and V445) support DHCP by default on the network management port. These servers can be configured from the serial management port or network management port, if the attached subnet has a DHCP server. The default network configuration allows a Secure Shell session to be started.
You can connect to the ALOM serial management port with an ASCII terminal. This port is not an all-purpose serial port; it can be used to access ALOM and the server console through ALOM. On the host server, this port is referred to as the SERIAL MGT port. However, the Solaris Operating System sees this port as ttya.
If you want to use a general-purpose serial port with your server, use the regular 7-pin serial port on the back panel of your server. The Solaris Operating System sees this port as ttyb.
The ALOM Ethernet port enables you to access ALOM from within your company network. You can connect to ALOM remotely using any standard Telnet client with Transmission Control Protocol/Internet Protocol (TCP/IP) or Secure Shell (ssh). On your host server, the ALOM Ethernet port is referred to as the NET MGT port.
The network management port is disabled by default on the Sun Fire V210, V240, V250, and V440 servers and Netra 210, 240, and 440 servers. It is enabled by default on the Sun Fire V215, V245, and V445 servers to support DHCP.
Refer to your server's documentation for more information on hardware capability.
When Dynamic Host Configuration Protocol is enabled, the SC acquires its network configuration, such as IP address, automatically from a DHCP server. DHCP is enabled by default on Sun Fire V215, V245, and V445 servers. It is disabled by default on all other servers and must be manually configured.
DHCP enabled-by-default allows a network connection to be established to the SC without first requiring a serial connection to manually configure the network. To make best use of this feature, the administrator must be aware of the associated default configuration variables and default parameters for the DHCP server and for log in to the SC.
The following ALOM variables and the default contents support DHCP on-by-default:
A DHCP client, in this case the SC, provides a unique client identifier (clientid) to identify itself to the DHCP server. The clientid is based on a system property easily obtainable by an authorized administrator with physical access to the system. Once a clientid is determined, the DHCP server can be preconfigured to map the clientid to a known IP address. After the SC is assigned an IP address, it starts the SSH server. An administrator can then initiate an ssh session with the SC. If the system is brand-new out-of-box, or upon reboot after the setdefaults -a command is run, the default admin user account requires a default password to log in. The default password is also composed of a system property that is easily obtainable by an administrator with physical access to the system. The next two sections show how clientid and default password can be constructed.
The clientid is based on the base Ethernet address for the system. The base Ethernet address is available on the Customer Information Sheet that is delivered with each system and is also available on a label on the back panel of the system chassis. The clientid is composed of the following concatenation:
SUNW,SC=base-ethernet-address
For example, if the base-ethernet-address is 08:00:20:7C:B4:08, then the clientid that the SC generates is the string prefix SUNW,SC= concatenated with the 12-digit base-ethernet-address minus the colons:
SUNW,SC=0800207CB408
This clientid is in ASCII format. It should be possible to program the DHCP server with an ASCII clientid. The actual entry into the DHCP mapping table is the hexadecimal equivalent.
When a system is shipped new from the factory, or upon reboot after a setdefaults -a command, a default password is required to log in from an ssh session. The default password is unique for each system. It is derived from the chassis serial number. The chassis serial number can be found on the Customer Information Sheet shipped with each server and can be found on a label attached to the back panel of the chassis. The default password is composed of the last 8 digits of the chassis serial number. For example, if the chassis serial number is 0547AE81D0 then the default password is:
47AE81D0
1. Determine the clientid from the host system base Ethernet address. The base Ethernet address can be obtained from the Customer Information Sheet or label on the back panel of the chassis.
2. Determine the default admin user login password from chassis serial number. The chassis serial number can be obtained from the Customer Information Sheet or label on the back panel of the chassis.
3. Program the DHCP server to serve the new clientid.
4. Attach the Sun Fire V215, V245, or V445 system to the network and ensure the system has AC power.
5. Start the ssh session using the IP address assigned by the DHCP server.
6. Log in as the admin user using the predetermined default password.
If the DHCP server is configured to pull from a block of IP addresses, then the administrator can use a DHCP administrative utility to determine the IP address that was assigned, although it may first be necessary to convert the clientid to a hexadecimal equivalent. For example, if the DHCP server is running the Solaris OS, then the pntadm(1M) command can be used to display the IP address assignments. In the following example, the SC with Ethernet address 123456789012 is connected to the .203 subnet.
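For instance, an administrator could list the assignments for that network with a command along the following lines (the network address is assumed for illustration; the clientid appears in its hexadecimal form in the output):

pntadm -P 192.168.203.0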
In this case it is necessary to convert ASCII to a hexadecimal equivalent clientid to determine the IP address assignment. For example:
53|55|4E|57|2C|53|43|3D|31|32|33|34|35|36|37|38|39|30|31|32
S U N W , S C = 1 2 3 4 5 6 7 8 9 0 1 2
FIGURE 3-1 and TABLE 3-3 include information about pin assignments and signal description relevant to an RJ-45 connector.
FIGURE 3-2 and TABLE 3-4 include information about the serial port connector and signals relevant to a DB-25 connector.
For more information, see if_modem.
You only need to use this worksheet if you want to customize ALOM for your installation.
To customize ALOM, you use the configuration variables. See Using ALOM Configuration Variables for details of variables.
There are two ways to set up the configuration variables for ALOM:
Print this section and use the table to record your inputs. This table can also serve as your record of the host server configuration in case you need to restore the configuration later. TABLE 3-5 identifies the configuration variables responsible for Ethernet control and their default values. Enter your values in the rightmost column.
When Dynamic Host Configuration Protocol is enabled, the SC acquires its network configuration, such as IP address, automatically from a DHCP server. DHCP is enabled by default on Sun Fire V215, V245, and V445 servers; see Default DHCP Connection (Sun Fire V215, V245, and V445 Servers) for more information. DHCP is disabled by default on all other servers and must be manually configured.
There are two ways to configure DHCP for ALOM:
If you use DHCP to control your network configuration, configure the DHCP server to assign a fixed IP address to ALOM.
There are two ways to manually configure the network for ALOM:
If you set each variable individually, you must set the following variables:
Refer to your host server documentation for information about how to power on the system. If you want to capture ALOM messages, power on the terminal that you have connected to the SERIAL MGT port before powering on the host server.
As soon as power is applied to the host, the SERIAL MGT port connects to the host server's console stream. To switch to ALOM, type #. (pound-period). At startup, ALOM has one pre-configured administrator account admin.
When you switch to ALOM from the system console, you are prompted to create a password for this account. See the password command section for a description of acceptable passwords.
The default admin account has full ALOM user permissions (cuar). For more on permissions, see userperm. You can use this account to view the console output from the host, to set up other user accounts and passwords, and to configure ALOM.
To send email alerts, the ALOM Ethernet port must be configured (see Chapter 6). To configure a function, type y when the setupsc script prompts you to do so. To skip a function, type n.
If you later must change a setting, run the setsc command as described in setsc.
The setupsc script enables you to set up a number of configuration variables at once. See Chapter 6 for more information. If you want to change one or more configuration variables without running the setupsc script, use the setsc command as shown in To Use the setsc Command.
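For example, a sketch of setting the network configuration variables one at a time from the ALOM command shell — the variable names are taken from the ALOM configuration variable reference, and the addresses are placeholders:

sc> setsc if_network true
sc> setsc netsc_ipaddr 192.168.1.25
sc> setsc netsc_ipnetmask 255.255.255.0
sc> setsc netsc_ipgateway 192.168.1.1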
Authentication
Introduction
For any Particle-powered product to function properly, there are four involved parties:
- Particle
- Your product's applications
- A device
- The customer
Each of the four parties plays a critical role in how your product will function
Understanding how all four interact, manage data, and secure information is critical to launching a successful product on the Particle platform. Specifically, this section of the guide will go deep into decisions you must make on authentication and security with regards to your product. The results of these decisions will impact where and how data is stored as well as the ways in which your product's applications interact with Particle to control devices and manage customers.
Note: Many of the implementation details discussed in this document are still under development. We will add specific code examples and context as they become available in the mobile and JavaScript SDKs.
OAuth
Particle tightly adheres to OAuth specifications to ensure secure and correct access to private data and devices. OAuth is an open standard for authorization, providing client applications "secure delegated access" to server resources on behalf of a resource owner (Wikipedia).
Integral to OAuth is the concept of clients, and client credentials. An OAuth client generally represents an application, like a native iOS application running on an iPhone or a web app running in a browser. As part of the setup process for your product, you will need to create one or more OAuth clients for your product.
Your product's OAuth client will be needed when hitting the Particle API to perform actions that only your product's team members or applications should be able to perform. An example of this is creating a customer that belongs to your product via the Particle API. Only those with a valid OAuth client will be able to perform this action, protecting your product from fake or erroneous new customer accounts.
Your OAuth Client will ensure secure communication between your devices and the Particle cloud
Client credentials are comprised of two pieces of information: A client ID, and a secret. Passing these two pieces of data to the Particle API will allow you to create access tokens, which are described in detail in the next section.
Scopes
Scopes allow you to specify exactly what type of access the client (and tokens created using the client) should have. Scopes are used for security reasons to limit the allowed applications of OAuth client credentials. Depending on which authentication method you choose, you may need to specify a scope when creating your OAuth client. More on scopes.
Creating an OAuth Client
You can use the Particle Console to create and manage your OAuth clients. To get started, click on the Authentication icon in your sidebar.
You can create OAuth clients on behalf of your Particle user (to interact with devices your account has claimed), or on behalf of a Particle product (to interact withe devices in the product fleet).
Scoping the client to a user or product account does impact which devices the client can interact with, and which permissions are available—so please choose mindfully. If you'd like to create an OAuth client for your product, visit the Authentication view within the Console's management context for your product.
To create a new client, click the + New Client button in the top right corner of the screen.
This will launch a modal allowing you to configure an OAuth client that suits your particular use case.
The configuration of your OAuth client will depend both on what type of interface your end-users will use to interact with their Particle devices (i.e. mobile vs. web app) in addition to what authentication method you choose for your product. You can skip to choosing an authentication method if you'd like to create an OAuth client now.
The Console provides an easy way to manage OAuth clients
The Console will provide you with a client ID and secret once you create your client. Your client secret will only be shown once for security purposes. Ensure that you copy it for your records to be used in your mobile or web app.
Never expose your client credentials, especially if the client has full permissions. Credentials are sensitive pieces of information that, if exposed, could allow unauthorized people or applications to interact with your product's data and devices.
Note: You may also manage OAuth clients programmatically, using the Device Cloud REST API.
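For example, a sketch of creating a product-scoped client through the REST API — the product slug, client name, and access token are placeholders, and the parameter names are assumed from the Device Cloud API reference:

curl -X POST https://api.particle.io/v1/products/my-product/clients \
  -d name=my-server-client \
  -d type=installed \
  -d access_token=9876987698769876987698769876987698769876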
Access Tokens
A related concept to understand is how Particle uses access tokens for authentication and security. If you have ever logged into the Web IDE, called a function on a Particle device, or read a variable via the API, you are already using access tokens! It is important to note that OAuth credentials are needed to create access tokens.
What's an access token?
An access token is a special code that is tied to a Particle customer or user, that allows reading data from and sending commands to that person's device(s). Any API endpoint that returns private information, or allows control of an individual's device requires an access token. Tokens inherit scopes from the OAuth clients used to create them, and can have more specific scopes of their own.
Here's a specific example. Let's say that on one of your own personal
devices (with device ID
0123456789abcdef01234567), you want to call the
lightsOn function. Here is how you would
do this using the API:
curl -X POST -H "Authorization: Bearer 1234" \
  https://api.particle.io/v1/devices/0123456789abcdef01234567/lightsOn \
  -d arg=livingRoom
In order for the API to call the function on your device, you must
include a valid access token for the user that is the owner of the
device. In this case, you include your access token of
1234. Behind
the scenes, the API looks up the access token in our database, checks to
make sure that the user who created the access token is the owner of the
device included in the request, and will only continue if the token has
the proper permissions to call the function.
The easiest way to create tokens is using the Particle CLI and the
particle token create option.
If you have multi-factor authentication (MFA) enabled on your account, you will be required to supply the authentication code when you create the token using your username and password, however the authentication code is not necessary to use the token, so make sure you keep your tokens secure, especially non-expiring tokens.
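A quick sketch of creating a token from the command line; the --expires-in flag name is assumed from the current CLI help and takes a lifetime in seconds:

# Prompts for your username and password (and MFA code, if enabled)
particle token create

# Create a token with a one-hour lifetime
particle token create --expires-in 3600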
Customer access tokens
Similarly, your customers will have access tokens that provide verification of ownership of a particular device. As a product creator, you will be creating a web or mobile application for your customers to use your product. As part of your application, you will generate access tokens on behalf of your customers, and use these access tokens to make the desired calls to the Particle API to successfully read data from and control that customer's device.
The access token is linked to the customer and used to control the customer's device(s)
Luckily, the mobile SDKs and ParticleJS will expose helper methods to handle token creation and management, without much/any additional code needed from you or your engineers.
Choosing an Authentication Method
Take a deep breath. We've covered a lot so far and you're picking things up quick! This section will help you determine the best place for you to go next.
As described earlier, end-users of your product are referred to as customers in the Particle ecosystem. As a product creator, you will need to make a strategic decision about how you will want to handle authentication for your customers. Your choice will likely be influenced by how much you would like to hide Particle from your customers, as well as how you would like your product to function.
There are two ways to manage authentication for your customers.
- Simple Authentication: Customers are created and managed directly by Particle. This is the simplest and fastest way to get your connected product app working.
- Two-legged Authentication: You create and manage accounts for your customers yourself, and request scoped access to control customers' devices from Particle. This provides the maximum amount of flexibility and security for your application.
Typically you will use an iOS or Android mobile app to allow your customers to authenticate, setup their devices, and interact with their product. More info
Other techniques such as a web-browser based setup and on-device setup are possible, as well.
When you're ready, click on the authentication method that makes most sense to you.
Simple Authentication
As the title suggests, Simple Authentication is the simplest and most straightforward to implement. This is because in this method, your application will not have its own server/back-end architecture. Instead, your web or mobile app will hit the Particle API directly for both session management and device interactions. Below is a diagram communicating how simple authentication works at a high level:
Your application interacts directly with the Particle API using Simple Authentication
Let's take a simple example. Imagine you are the creator of a smart light bulb that can be controlled via a smartphone app. The customer, or the end-user of the product, uses the mobile app to create an account. Behind the scenes, your mobile app hits the Particle API directly to create a customer. Then the customer goes through the setup process and links their device to their customer account. Again, your mobile app uses the Particle API to successfully claim the device to the customer account. After the device is setup, the customer can toggle a light on and off with the mobile app. This works as your app is able to call functions on the customer's device using the customer's access token.
All of this is able to happen without the need to have your own server. All communication flows from the mobile client to the Particle cloud, then down to the customer's device.
Advantages of Simple Auth
Simple auth is ideal for getting a Particle product up-and-running quickly. Without needing to build your own back-end, development time to creating an app to work with a Particle device is greatly reduced. There are less moving parts and opportunities to introduce bugs. In addition, Particle's mobile SDKs and JavaScript SDK will handle much of the heavy lifting for you when it comes to session management and device interaction. In short, simple auth is...simple.
Another advantage of simple authentication is the ability to hide Particle from your customers. The SDKs allow for front-end skinning and customization that will allow you to create your own brand experience for customers of your app. All interaction with Particle will happen behind the scenes, hidden from your customers (unless they are tech savvy enough to monitor the network traffic to and from your app).
Disadvantages of Simple Auth
Without your own server, you lose some level of flexibility and ability to customize in your application. For instance, if you wanted to store custom information about your customer specific to your application like their name or their favorite pizza topping, this would not be currently supported with simple auth.
In addition, using simple auth would make it more difficult to capture and use historical data about devices and customers' behavior. With your own server and database, you could store data about what time a customer turns on their lights, for example. Using simple auth, this would not be supported.
Simple Auth Implementation
If you choose to go with simple authentication for your web or mobile application, you should get to know the diagram below very well. While a majority of the steps are wrapped by the mobile and JavaScript SDKs, it is still important to grasp how customer authentication, device setup, and device interaction work.
Each one of the steps will be covered in detail below. Note that the first two steps are a one-time configuration process, whereas product setup will occur for each new customer that sets up a device.
The full simple authorization flow. See here for full size diagram
1. Creating OAuth Client Credentials
The first thing you will need to do is ensure that you have created proper OAuth client credentials for your product. In simple authentication, communication will be direct from a client application (web or mobile app) to the Particle API. This is much less secure than server to server communication. As a result, you will create scoped client credentials that will be able to do one thing and one thing only: create new customers for your product.
You will create your OAuth client using the Authentication view in your product's Particle console. For info on how to find the Authentication page and create a client, check out the earlier discussion.
If you are building a mobile app for your Particle product, you should choose Simple Auth (Mobile App) from the client type options when creating a new OAuth client. This will provide the recommended client configuration, and only requires you to provide a name for your new client.
Creating an OAuth client for a mobile app using Simple Auth
If you are building a web app instead, select Simple Auth (Web App). You'll notice that you have to provide both a name and a redirect URI. A redirect URI is required for any web browser-based OAuth flows, and should be set to the URL of the first page of device setup for your product. The redirect will be triggered once a customer is created successfully, and the next step in the process is setting up their device.
Regardless of your app medium, you will receive an OAuth client ID and secret upon successful creation of a client that will look like this:
Your client secret will only be shown once, so be sure to capture it for your records
This client ID and secret will be added to your application in the next step below. Note that clients created using the default Simple Auth configurations will be scoped for customer creation only. This is for security purposes. In Simple Auth, client credentials can be uncovered relatively easily with some basic scraping techniques. Scoping the client will prevent unintended access to your product's devices or data.
Creating OAuth client can still be done directly against the Particle API. For info on this, see reference documentation on creating OAuth clients.
2. Add OAuth Credentials to SDK
For both the mobile & JavaScript SDKs, you will need to add your client credentials to a configuration file. The client application will need the client credentials that you just generated when creating new customers. Without these credentials, calls to
POST /v1/products/:productIdOrSlug/customers will fail.
You will need to add your OAuth credentials to your web or mobile application
If you are creating a mobile application, you will need to include both the client ID and secret in your configuration file. If you are creating a web application, you only should include your client ID.
For instructions on how to add client credentials to your iOS app, please see iOS OAuth client configuration. For Android apps, please see Android OAuth client configuration
3. Create a customer
You have now moved from the one-time configuration steps to a process that will occur for each new customer that uses your web or mobile app. As mentioned earlier in this section, much of what will be discussed in the next four steps will be magically handled by the Particle SDKs, with no custom code needed from you.
After navigating to your application, one of the first things your customer will need to do is create an account. Because you are not running your own web server, the customer will be created in the Particle system. They will provide a username and password in the form, that will serve as their login credentials to the app.
Specifically, the SDK will grab the customer's username and password, and hit the
POST /v1/products/:productIdOrSlug/customers API endpoint, passing along the customer's credentials as well as the OAuth client credentials you added to the config file in the previous step.
The create customer endpoint requires your OAuth client credentials,
and returns an access token for the newly created customer
For a mobile app, the SDK will require both the client ID and the secret to successfully authenticate. For a web app, the endpoint will only pass the client ID.
The
POST customers endpoint both creates the customer as well as logs them in. As a result, an access token will be available to your application after successful customer creation. Remember that it is this access token that will allow the app to do things like claim the device, and interact with it.
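For reference, the underlying request the SDK makes looks roughly like the following sketch; the email address and password are placeholders, and the client credentials are the ones created above:

curl -X POST https://api.particle.io/v1/products/:productIdOrSlug/customers \
  -u "client-id-goes-here:client-secret-goes-here" \
  -d [email protected] \
  -d password=notsosecret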
4. Create Claim Code & Send to Device
This step actually comprises a lot of things that happen behind the scenes, but has been combined for simplicity and ease of communication. A claim code is what is used to associate a device with a person. In your case, the claim code will associate a device with a customer.
In order for a device to be setup successfully, your application must retrieve a claim code on behalf of the customer setting up their device and send that claim code to the device. When the device receives proper Wi-Fi credentials and is able to connect to the Internet, it sends the claim code to the Particle cloud. The Particle cloud then links the device to the customer, and grants the customer access over that device.
The first thing that must happen is retrieving a claim code from the Particle cloud for the customer. A special endpoint exists for products to use to generate claim codes on behalf of their customers.
This endpoint is
POST /v1/products/:productIdOrSlug/device_claims. The customer's access token is required, and is used to generate a claim code that will allow for the link between the device and the customer.
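A sketch of that request, using a placeholder customer access token; the response contains the claim code to hand to the device:

curl -X POST https://api.particle.io/v1/products/:productIdOrSlug/device_claims \
  -d access_token=254406f79c1999af65a7df4388971354f85cfee9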
Once your mobile/web app has a claim code, it must then send it to the device.
Your app will use the customer access token to generate a device claim code and send it to the device
This happens by connecting the customer's device to the device's Wi-Fi access point. When the photon is in listening mode, it is broadcasting a Wi-Fi network that the customer's computer or phone can connect to.
Note: When programmatically entering listening mode on the Photon, P1 or P0, care should be taken to conserve the memory utilized by user firmware. Listening Mode on these devices utilizes a number of threads to create short-lived HTTP server instances, a TCP server for SoftAP access, and associated resources. If the free memory available on a device at the time Listening Mode is triggered is less than 21.5K, the device will be unable to enter listening mode. In some cases, it may appear as though the device is in listening mode, but any attempt to configure access via the CLI or Particle Mobile App will time out or fail. None of the device's user firmware is lost or affected in either case, but the RAM in use will need to be optimized below 21.5k before re-attempting to enter listening mode.
Once the customer's device is connected to the Particle device's network, your mobile app then will send the claim code generated in the last step to the Particle device.
Again, this will all be part of the boilerplate code of the SDKs, meaning that you will not need to worry much about the nitty-gritty details about how this works.
5. Connect device to Wi-Fi
Now that your app is connected directly to the customer's Particle-powered device, it can provide the device with Wi-Fi credentials to allow it to connect to the Internet.
Your app will send the customer's device Wi-Fi credentials
Through your mobile or web app, your customer will choose from a list of available Wi-Fi networks, and provide a password (if necessary) to be able to connect. The app sends these credentials to the device. Once received, the device resets and uses these credentials to connect to the Internet.
6. Associate device with customer
The device sends the claim code to the cloud, which is used to link the device to the customer
The device uses the Wi-Fi credentials to connect to the Internet, and immediately sends the claim code to the Particle cloud. At that point, the device is considered "claimed" by the customer. This means that any access token generated by that customer can be used to read data from or interact with the device.
7. Interact with Customer's Device
Congratulations! You've now successfully created a customer, gotten the device online, and tied the device to the new customer. You have everything you need to make your product's magic happen.
Use your customer's access token to call functions, check variables, and more!
Your application, armed with the customer's access token, can now successfully authenticate with any device-specific Particle API endpoint, giving it full access and control over the device. Some of the things you can do include:
- Call functions on the device
- Read variable values from the device
- See if the device is online or not
- Much more!
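For example, two of these calls as raw HTTP requests — sketches only, with a placeholder device ID, variable name, and customer access token:

# Read the value of a variable named "temperature"
curl https://api.particle.io/v1/devices/0123456789abcdef01234567/temperature \
  -H "Authorization: Bearer 254406f79c1999af65a7df4388971354f85cfee9"

# Fetch device details; the response includes a "connected" field
curl https://api.particle.io/v1/devices/0123456789abcdef01234567 \
  -H "Authorization: Bearer 254406f79c1999af65a7df4388971354f85cfee9"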
Note that now, your app will never communicate directly to the device. The customer will trigger a call to the Particle API, which then communicates with the device to carry out the desired action.
Further Considerations
Signup and device claiming only will happen one time for each customer. After this has been completed, subsequent visits to your application will continue to use customer access tokens to interact with the device via the Particle API.
If a customer's access token expires, the customer will be asked to log in again, generating a fresh access token to interact with the device.
Two-Legged Authentication
The main difference between two-legged and simple authentication is the presence of a back-end architecture to compliment your mobile or web application. Your application would communicate with both your server as well as the Particle cloud.
The most common reason to use two-legged authentication is the desire to store & manage customer accounts yourself in your own database.
Two-legged authentication involves the presence of your own server
Advantages of Two-Legged
Two-legged authentication is the ideal choice for a product creator looking for maximum visibility, control, and flexibility of their web or mobile application. With two-legged, you gain the ability to implement custom logic, integrations with third-party services, and store application-specific data that are not currently part of the Particle platform.
For example, if you were building a connected hot tub, you could use your own web server and database to allow a customer to set their desired water temperature, then use that piece of data to set the temperature when the hot tub is turned on.
Another advantage of two-legged authentication is beefed-up security. Server-to-server communication (your server to the Particle API) is much more secure than client-to-server communication (your mobile/web application to the Particle API). For sensitive transactions like passing OAuth credentials to get customer access tokens, using your server to talk to the Particle API over HTTPS is safe and protected.
Disadvantages of Two-Legged
Because of the introduction of your own web server, implementing two-legged authentication adds complexity to the architecture of your application and the flow of data. There are simply more pieces of the puzzle that must all fit together.
This will likely result in more development time than choosing Simple Authentication, and can introduce more points of failure for your application.
Two-Legged Implementation
Below is a diagram of the entire setup and authentication flow for the two-legged option. If you choose this authentication method, it is important that you understand the diagram very well. When comparing to the simple auth implementation, you'll notice that many of the steps are similar, with the exception of steps involving interaction with your web server.
Each one of the steps will be covered in detail below. Note that the first two steps are a one-time configuration process, whereas product setup will occur for each new customer that sets up a device.
The full two-legged authorization flow. See here for full size diagram
1. Creating OAuth Client Credentials
Like Simple Authentication, you will need to create valid OAuth client credentials for your product. Unlike simple authentication, your OAuth client credentials will be sent to the Particle API from your server, not directly from your mobile/web application. The client credentials will be used for two purposes:
- Creating new customers
- Creating scoped access tokens for customers
You will create your OAuth client using the Authentication view in your product's Particle console. For info on how to find the Authentication page and create a client, check out the earlier discussion.
For two-legged authentication, you should choose Two-Legged Auth (Server) from the client type options when creating a new OAuth client. This will provide the recommended client configuration, and only requires you to provide a name for your new client.
The recommended client configuration for Two-Legged Authentication
You will receive an OAuth client ID and secret upon successful creation of a client that will look like this:
Your client secret will only be shown once, so be sure to capture it for your records
This client ID and secret will be added to your server in the next step below.
Creating OAuth client can still be done directly against the Particle API. For info on this, see reference documentation on creating OAuth clients. Because the communication is server-to-server, you do not need to specify a scope. Without a scope, your client credentials can be successfully used for both of the purposes listed above.
2. Add OAuth Credentials to your server
Your server will need access to your newly created OAuth client ID and secret. Unlike simple authentication, both client ID and secret will be needed for two-legged authentication. Your server should have access to the client credentials anytime it needs to make an API call to Particle.
You must add your OAuth client credentials to your server
Because of the presence of your server, you should not need to add these credentials to your web or mobile application.
Do not share your client ID and secret publicly. These credentials provide the ability to fully control your product's devices, and access sensitive information about your product. We recommend never publishing the client ID and secret to a GitHub repository.
3. Create a customer
When using two-legged authentication, you will likely be managing customers on your own database (Note: We realize that you may not call end-users of your product "customers" like we do. You may simply refer to them as users internally. However, we will continue calling them customers here to avoid confusion).
An important thing to understand is that even though you will be creating customers yourself, you will also need to create a shadow customer on the Particle cloud. That is, for every customer you create on your back-end, a mirroring customer record must be created using the Particle API.
You will create a Particle shadow customer in addition to creating the customer on your own back-end.
A Particle shadow customer is required to interact with Particle devices when using two-legged auth. Specifically, the customer must exist in the Particle system so that your server can generate access tokens on behalf of the customer. This allows your mobile or web application to interact with the customer's device.
The Particle shadow customer should be created at the exact time that the customer is created in your system. As you will be managing customer credentials on your own server/database, a shadow customer should not have a password when they are created. You will still be able to generate access tokens for the customer using your OAuth client ID and secret instead of passing a username/password for that customer.
The API endpoint to create a customer is
POST /v1/products/:productIdOrSlug/customers. A request to create a customer could look something like:
curl -X POST -u "client-id-goes-here:client-secret-goes-here" -d [email protected] \ -d no_password=true
Note that there is no password for the customer. An email address is the only piece of information required to create a customer in the Particle system, and must be collected by your application during signup. As a result, you must pass the
no_password=true flag to create the customer with no password. Note that in this endpoint, you should use your client ID and secret instead of an access token.
As the diagram above suggests, you will receive an access token in the response of the
POST request that creates the customer. You will use this access token during the device claiming process as well as to interact with the device once it's set up.
Reference docs on creating a customer
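For a server written in Python, a rough sketch of the same request (using the requests library; the product slug and email are placeholders, and the response is assumed to carry the token under an access_token key, as in the token responses shown later) could look like this:

import requests

def create_shadow_customer(product_slug, email, client_id, client_secret):
    """Create a password-less shadow customer and return its access token."""
    resp = requests.post(
        f"https://api.particle.io/v1/products/{product_slug}/customers",
        auth=(client_id, client_secret),  # OAuth client ID/secret via HTTP Basic Auth
        data={"email": email, "no_password": "true"},
    )
    resp.raise_for_status()
    # The customer's access token is used later for claiming the device and
    # for interacting with it.
    return resp.json()["access_token"]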
If you are using Particle's iOS SDK, there is a hook available to inject the customer's access token into the mobile client. You can learn about this hook here.
Device Setup (Steps 4, 5 & 6)
Now that you have created the customer, and received a valid access token for that customer, it is now time to start the device setup process. This process will occur in exactly the same fashion as with Simple Authentication, and will not involve your server.
It is important to note that the device setup process will not involve your server. All communication will be between your web/mobile application, the customer's device, and the Particle cloud. These steps will also be handled by the mobile & JavaScript SDKs, without requiring extra work from you.
Because these steps are the same as for Simple Authentication, documentation will not be duplicated here. Instead, you can check out:
Step 4. Create claim code and send to device
Step 5. Connect device to Wi-Fi
Step 6. Associate device with customer
7. Interact with Customer's Device
Once the device has been set up, and the customer created, you're now ready to interact with the device! Hooray! It's important to understand that while you do have your own server, we recommend hitting the Particle API directly from your application client for any device-related actions. This includes:
- Calling functions on the device
- Reading variables on the device
- Listing devices for a customer
This should be straightforward, as using the SDKs will provide helper methods for these actions.
For all direct interactions with the device, hit the Particle API from your application client
The alternative would be telling your application client to hit your back-end server, which would then trigger a call to the Particle API. This introduces an extra intermediary in communication, and involves needing to unnecessarily "wrap" Particle API endpoints.
The only reason for your server to hit the Particle API is to generate new scoped access tokens for your customers. You should generate a fresh access token each time the customer logs into your application.
To do this, you will use the
POST /oauth/token endpoint, but in a special way. The request will look like this:
curl https://api.particle.io/oauth/token \
  -u my-org-client-1234:long-secret \
  -d grant_type=client_credentials \
  -d scope=customer=customer@example.com
Breaking this down:
- The -u is an HTTP Basic Auth header, where you will pass your OAuth client ID and secret. This allows you to generate access tokens for customers that belong to your product.
- grant_type is the OAuth grant type, which in this case is client_credentials.
- scope is set to customer, and is what allows you to specify the customer you'd like an access token for. The value of customer should be set to the email address of the customer you'd like the access token for.
The response should look like this:
{ "access_token": "254406f79c1999af65a7df4388971354f85cfee9", "token_type": "bearer", "expires_in": 7776000, "refresh_token": "b5b901e8760164e134199bc2c3dd1d228acf2d90" }
The response includes an
access_token for the customer, that should be included for all subsequent API calls for the session. In addition, there's a
refresh_token that you could use to generate a new access token in the event that the token expires. Here's how to use your refresh token to get a new access token:
curl -X POST https://api.particle.io/oauth/token \
  -u client-id-1234:secret \
  -d grant_type=refresh_token \
  -d refresh_token=b5b901e8760164e134199bc2c3dd1d228acf2d90
The response will be identical to the new access token creation endpoint above. Refresh tokens can only be used product oAuth tokens. They cannot be used to renew a Particle developer account access token (particle:particle). | https://docs.particle.io/reference/cloud-apis/authentication/ | 2022-06-25T10:22:49 | CC-MAIN-2022-27 | 1656103034930.3 | [array(['/assets/images/four-involved-parties.png',
'four involved authentication parties'], dtype=object)
array(['/assets/images/create-oauth-client.png', 'Create OAuth Client'],
dtype=object)
array(['/assets/images/auth-icon.png', 'Auth Icon'], dtype=object)
array(['/assets/images/create-client.png', 'Create client'], dtype=object)
array(['/assets/images/customer-access-token.png',
'Customer access token'], dtype=object)
array(['/assets/images/simple-auth-high-level.png',
'Simple authentication with Particle'], dtype=object)
array(['/assets/images/simple-auth-visual-vertical.png',
'Simple Auth Flow'], dtype=object)
array(['/assets/images/simple-auth-mobile.png',
'Create Mobile OAuth Client'], dtype=object)
array(['/assets/images/client-created-successfully.png',
'Create Mobile OAuth Client'], dtype=object)
array(['/assets/images/adding-oauth-credentials.png',
'Adding OAuth credentials to your app'], dtype=object)
array(['/assets/images/create_customers.png', 'creating a customer'],
dtype=object)
array(['/assets/images/claim-code-setup.png', 'Claim codes'], dtype=object)
array(['/assets/images/connect-to-wifi-setup.png', 'Connect to Wi-Fi'],
dtype=object)
array(['/assets/images/device-customer-link.png',
'Associating a device to a customer'], dtype=object)
array(['/assets/images/interact-with-device.png',
"Interact with your customer's device"], dtype=object)
array(['/assets/images/two-legged-auth-high-level.png',
'Two legged authentication'], dtype=object)
array(['/assets/images/two-legged-auth-visual-vertical.png',
'Two-legged auth flow'], dtype=object)
array(['/assets/images/create-two-legged-client.png',
'Create Two Legged OAuth Client'], dtype=object)
array(['/assets/images/client-created-successfully.png',
'Create Mobile OAuth Client'], dtype=object)
array(['/assets/images/two-legged-add-oauth-creds.png',
'Adding OAuth credentials to your server'], dtype=object)
array(['/assets/images/create-customer-two-legged.png',
'Creating a customer two-legged authorization'], dtype=object)
array(['/assets/images/two-legged-interact-with-device.png',
'Interact with a device with two-legged auth'], dtype=object)] | docs.particle.io |
Creating and Using Workspaces
After logging into the SingleStoreDB Cloud Portal you can create a workspace by clicking the Create Workspace option. You can also create and manage workspaces through the Management API.
Workspace group names should be 1-255 characters long.
Workspace names should be 1-32 characters long and can consist of digits, lowercase letters, and hyphens (-).
Each workspace provides a connection endpoint which you can use to connect from a client, IDE, or application. You can also use the SQL Editor in the Cloud Portal to connect and run queries directly in your selected workspace.
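For example, because SingleStoreDB speaks the MySQL wire protocol, any MySQL-compatible driver can be pointed at the workspace endpoint. The sketch below uses pymysql with placeholder connection details:

import pymysql

# Placeholders -- use the endpoint shown for your workspace in the Cloud Portal
# and a database user you have created for it.
conn = pymysql.connect(
    host="svc-example-dml.aws-oregon-1.svc.singlestore.com",
    port=3306,
    user="admin",
    password="********",
    database="my_database",
)
try:
    with conn.cursor() as cur:
        cur.execute("SELECT 1")
        print(cur.fetchone())
finally:
    conn.close()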
Managing workspaces can be done through the Cloud Portal User Interface (UI) or by using the Management API / SQL. The following functions are available via these management interfaces.
ATTACH/DETACH DATABASE
CONFIGURE FIREWALL
CONFIGURE USERS
CONNECT
CREATE DATABASE
CREATE WORKSPACE
DELETE WORKSPACE GROUP
TERMINATE WORKSPACE
Executing
DROP DATABASE in a workspace can only be done from the primary R/W database. If the database is detached, it can be dropped from the UI or Management API. If this command is run on the primary R/W database, it will drop the database from all the workspaces, including the read-only copies.
For more information on using workspaces see: | https://docs.singlestore.com/managed-service/en/about-workspaces/creating-and-using-workspaces.html | 2022-06-25T10:32:07 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.singlestore.com |
Authentication Services is an optional configuration. If your organization uses a service for authentication or accounting, you can create a Network Service that specifies the IP address and ports for the service. This is a part of the 802.1x configuration process, which is configured in the profile.
The following figure shows an example configuration.
| https://docs.vmware.com/en/VMware-SD-WAN/4.0/VMware-SD-WAN-by-VeloCloud-Administration-Guide/GUID-035CE44B-75C3-4F65-9CE0-6644500059A0.html | 2022-06-25T12:00:54 | CC-MAIN-2022-27 | 1656103034930.3 | [array(['images/GUID-AC970B77-532D-4DFF-AB80-8D31A3332121-low.png',
'configure-network-services-new-radius-service'], dtype=object)] | docs.vmware.com |
Synapse Developer Guide
This Dev Guide is written by and for Synapse developers.
Note
Synapse as a library is under constant development. It is possible that content here may become out of date. If you encounter issues with documentation in the Developer guides, please reach out to us on our Synapse Slack chat or file an issue in our project's Github page.
The Dev Guide is a living document and will continue to be updated and expanded as appropriate. The current sections are: | https://synapse.docs.vertex.link/en/latest/synapse/devguide.html | 2022-06-25T11:36:20 | CC-MAIN-2022-27 | 1656103034930.3 | [] | synapse.docs.vertex.link |
# Modyo Content
Modyo Content is an application that makes it possible to create dynamic content repositories called spaces. Within a space, you can create content entries based on types that you define and administrators can establish space access configurations and roles for team members.
Modyo Content has a Headless architecture that allows content to be consumed via an HTTP API from channels defined in Modyo Channels and external systems or applications.
Thanks to its integrated system of cache and management of HTTP Headers, content can also be hosted on CDNs for greater availability and access speed, regardless of geographic location.
# Main functionalities
- Spaces for the organization of contents and teams that manage them.
- Content types to define custom structures.
- Asset manager for organizing files such as images or videos that are used within the contents.
- API and SDKs for access to content repositories, both within and outside of the platform. | https://develop.docs.modyo.com/en/platform/content/ | 2022-06-25T11:18:23 | CC-MAIN-2022-27 | 1656103034930.3 | [array(['/assets/img/content/header.jpg', None], dtype=object)] | develop.docs.modyo.com |
Protocols / BrpTcp / TcpConnectTrialMinDelay Value
This is the minimum time in seconds the reader will wait after an unsuccessful connection request before it starts the next trial.
The actual delay is chosen randomly within the limits of this minimum value and the maximum value (TcpConnectTrialMaxDelay).
Properties
- Value ID: 0x0186/0x28
- Default value: 0060 | https://docs.baltech.de/refman/cfg/protocols/brptcp/tcpconnecttrialmindelay.html | 2022-06-25T10:48:12 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.baltech.de |
This chapter describes XDBC servers and provides procedures for configuring them. The following sections are included:
This chapter describes how to use the Admin Interface to create and configure XDBC servers. For details on how to create and configure XDBC servers programmatically, see Creating and Configuring App Servers in the Scripting Administrative Tasks Guide.
XDBC (XML Database Connector) servers are defined at the group level and are accessible by all hosts within the group. Each XDBC server provides access to a specific forest, and to a library (root) of XQuery programs that reside within a specified directory structure. Applications execute by default against the database that is connected to the XDBC server.
XDBC Servers allow XML Contentbase Connector (XCC) applications to communicate with MarkLogic Server. XCC is an API used to communicate with MarkLogic Server from Java middleware applications. XDBC servers also allow old-style XDBC applications to communicate with MarkLogic Server, although XDBC applications cannot use certain 3.1 and newer features (such as point-in-time queries). Both XCC and XDBC applications use the same wire protocol.
XQuery requests submitted via XCC return results as specified by the XQuery code. These results can include XML and a variety of other data types. It is the XCC application's responsibility to parse, process and interpret these results in a manner appropriate to the variety of data types. Requests execute by default against a specific database within MarkLogic Server, but XCC provides the ability to communicate with any database in the MarkLogic Server cluster to which your application connects (and for which you have the necessary permissions and privileges).
XDBC servers follow the MarkLogic Server security model, as do HTTP and WebDAV servers. The server authenticates access to those programs using user IDs and passwords stored in the security database for that XDBC server. (Each XDBC server is connected to a database, and each database is in turn connected to a security database in which security objects such as users are stored.)
Granular access control to the system and to the data is achieved through the use of privileges and permissions. For details on configuring security objects in MarkLogic Server, see Security Administration. For conceptual information on the MarkLogic Server security model, see Security Guide.
Use the following procedures to create and manage XDBC servers:
For the procedure to cancel a running request on an XDBC server, see Canceling a Request. Do not create XDBC server root directories named Docs, Data or Admin. These directories are reserved by MarkLogic Server for other purposes. Creating XDBC server root directories with these names can result in unpredictable behavior of the server and may also complicate the software upgrade process.
The port number must not be assigned to any other XDBC, HTTP, or WebDAV server.
A user accessing the XDBC server must have the execute privilege selected in order to access the XDBC server (or be a member of the
admin role).
xdmp:set-request-time-limit. The time limit, in turn, is the maximum number of seconds allowed for servicing a query request. The App Server gives up on queries which take longer, and returns an error.
The new XDBC server is created. Creating an XDBC server is a hot admin task; the changes take effect immediately. For information and setup instructions for managing user sessions and/or keeping track of login attempts, see Managing User Sessions and Monitoring Login Attempts.
To view or change the settings for an XDBC server, complete the following steps:
To delete the settings for an XDBC server, complete the following steps:
Deleting an XDBC server is a cold admin task; the server restarts to reflect your changes. | https://docs.marklogic.com/9.0/guide/admin/xdbc | 2022-06-25T11:16:54 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.marklogic.com |
8.1 Accessing the solution¶
This section contains important information about the status of the solver and the status of the solution, which must be checked in order to properly interpret the results of the optimization.
8.1.1 Solver termination¶
If an error occurs during optimization then the method
Model.solve will throw an exception of type
OptimizeError. The method
FusionRuntimeException.toString will produce a description of the error, if available. More about exceptions in Sec. 8.2 (Errors and exceptions).
If a runtime error causes the program to crash during optimization, the first debugging step is to enable logging and check the log output. See Sec. 8.3 (Input/Output).
If the optimization completes successfully, the next step is to check the solution status, as explained below.
8.1.2 Available solutions¶
Moreover, the user may be oblivious to the actual solution type by always referring to
SolutionType.Default, which will automatically select the best available solution, if there is more than one. Moreover, the method
Model.selectedSolution can be used to fix one solution type for all future references.
8.1.3 Problem and solution status¶
Assuming that the optimization terminated without errors, the next important step is to check the problem and solution status. There is one for every type of solution, as explained above.
Problem status
Problem status (
ProblemStatus, retrieved with
Model.getProblemStatus) determines whether the problem is certified as feasible. Its values can roughly be divided into the following broad categories:
feasible — the problem is feasible. For continuous problems and when the solver is run with default parameters, the feasibility status should ideally be
ProblemStatus.PrimalAndDualFeasible.
Solution status
Solution status (
SolutionStatus, retrieved with
Model.getPrimalSolutionStatus and
Model.getDualSolutionStatus) provides the information about what the solution values actually contain. The most important broad categories of values are:
optimal (
SolutionStatus.Optimal) — the solution values are feasible and optimal.
certificate — the solution is in fact a certificate of infeasibility (primal or dual, depending on the solution).
unknown/undefined — the solver could not solve the problem or this type of solution is not available for a given problem.
8.1.4 Retrieving solution values¶
After the meaning and quality of the solution (or certificate) have been established, we can query for the actual numerical values. They can be accessed using:
Model.primalObjValue,
Model.dualObjValue— the primal and dual objective value.
Variable.level— solution values for the variables.
Constraint.level— values of the constraint expressions in the current solution.
Constraint.dual,
Variable.dual— dual values.
Remark
By default only optimal solutions are returned. An attempt to access a solution with a weaker status will result in an exception. This can be changed by choosing another level of acceptable solutions with the method
Model.acceptedSolutionStatus. In particular, this method must be called to enable retrieving suboptimal solutions and infeasibility certificates. For instance, one could write
M.acceptedSolutionStatus(AccSolutionStatus.Feasible)
The current setting of acceptable solutions can be checked with
Model.getAcceptedSolutionStatus.
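Putting the calls from the preceding subsections together, a minimal sketch in the Python Fusion API might look as follows (a toy linear model stands in for the user's actual problem; this is an illustration, not the listing shipped with MOSEK):

from mosek.fusion import (Model, Domain, Expr, ObjectiveSense,
                          AccSolutionStatus, OptimizeError)

with Model("example") as M:
    # Toy model: maximize 1*x0 + 2*x1 + 3*x2 subject to sum(x) <= 1, x >= 0.
    x = M.variable("x", 3, Domain.greaterThan(0.0))
    M.constraint(Expr.sum(x), Domain.lessThan(1.0))
    M.objective(ObjectiveSense.Maximize, Expr.dot([1.0, 2.0, 3.0], x))

    try:
        M.solve()
        # Accept any feasible solution, not only provably optimal ones.
        M.acceptedSolutionStatus(AccSolutionStatus.Feasible)
        print("Problem status:        ", M.getProblemStatus())
        print("Primal solution status:", M.getPrimalSolutionStatus())
        print("Objective value:       ", M.primalObjValue())
        print("x =", x.level())
    except OptimizeError as e:
        print("Optimization failed:", e)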
8.1.5 Source code example¶
Below is a source code example with a simple framework for)) | https://docs.mosek.com/latest/pythonfusion/accessing-solution.html | 2022-06-25T10:49:42 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.mosek.com |
Manage baskets
Submit customer information within your payment.
Overview
A basket is a collection of product information for each transaction. It is not a mandatory resource, but some payment methods require this information for risk analysis, such as Unzer Direct Debit Secured and Unzer Invoice Secured. For more details on baskets, see the Basket section.
Creating a basket
To create a new basket resource, send a POST request with your desired contents in the JSON format.
Request
$unzer = new UnzerSDK\Unzer('s-priv-xxxxxxxxxx'); $basketItem = (new UnzerSDK\Resources\EmbeddedResources\BasketItem()) ->setBasketItemReferenceId('Artikelnummer4711') ->setQuantity(5) ->setAmountPerUnit(100.1) ->setAmountNet(420.1) ->setTitle('Apple iPhone') ->setSubTitle('Red case') ->setImageUrl('') ->setType(UnzerSDK\Constants\BasketItemTypes::GOODS); $basket = (new UnzerSDK\Resources\Basket()) ->setAmountTotalGross(500.5) ->setCurrencyCode('EUR') ->setOrderId('uniqueOrderId_1') ->addBasketItem($basketItem); $unzer->createBasket($basket);
- Although it’s possible to use any character combination for the
orderId, we recommend using the same
orderId as for payments (created by
types/authorize,
types/charge, and so on).
- The order ID must be unique.
The response contains a basket ID that is saved in your basket object.
$basket->getId();
Fetching a basket
To fetch a specific
basket resource, use your private key and send a GET request with the basket ID.
Request
$unzer = new UnzerSDK\Unzer('s-priv-xxxxxxxxxx'); $basket = $unzer->fetchBasket('s-bsk-1');
Response
The fetchBasket method will return a
basket Instance containing the Basket information from the PAPI response.
Go to the API reference for more information.
Adding a basket to your transaction
Here you can see how to add a basket to a transaction, in our example a
charge request. You just have to add the basket ID to your
resources parameter list as shown in the following example:
Request
$unzer = new UnzerSDK\Unzer('s-priv-xxxxxxxxxx'); $charge = $unzer->charge( 12.99, 'EUR', 's-crd-9wmri5mdlqps', '', null, null, null, 's-bsk-1' )
Updating a basket
In some cases, it might be necessary to change an already created basket, for example if the user added a voucher or shipping fees changed.
In that case, you have to change the whole content of the basket - it’s not possible to change individual parts of a basket resource. Either copy the contents of the old one, alter them and submit it using the
Unzer::updateBasket method, or create a new Basket.
$unzer = new UnzerSDK\Unzer('s-priv-xxxxxxxxxx'); $basket= $unzer->fetchBasket('s-bsk-1'); $basket->setAmountTotalGross(4321); $basket->setAmountTotalDiscount(5432); $basket->setNote('This basket is updated!'); $basket->getBasketItemByIndex(0)->setTitle('This item is also updated!'); $unzer->updateBasket($basket); | https://docs.unzer.com/server-side-integration/php-sdk-integration/manage-php-resources/php-manage-basket/ | 2022-06-25T10:47:16 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.unzer.com |
- Check that the probes are located in equal conditions and wait until the readings have stabilized (can take 30 minutes or more). If you are close to the probes, do not breathe in their direction.
- Press ADJUST to continue adjusting.
- Choose To same as RHI/II from the MI70 adjustment menu and press SELECT. MI70 automatically recognizes which port the HMP70 series probe is connected to.
- Press YES to confirm the adjustment.
- Turn off the MI70 and detach the connection cable. | https://docs.vaisala.com/r/M211280EN-D/en-US/GUID-5FBC4D9E-3B24-4EBA-8F37-A97B308496F1/GUID-5CE8D9D6-EFE7-48F4-A6C8-79D595A35047 | 2022-06-25T10:54:04 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.vaisala.com |
Welcome to Enso Docs
Our operating system is designed for individuals of all abilities, from beginner to expert, and this guide sets out to reflect that by providing easy-to-understand information for all.
Our goal
Our goal is to create a simple and usable environment that can run on any machine on earth, no matter how old. There are some great open source projects out there and we aim to incorporate our favourites into our OS.
Getting started
First things first
Enso OS is built on GNU/Linux (aka Linux). If you are unfamiliar with Linux, you may wish to get some further information; you can find a very detailed Wikipedia article about Linux which contains far more information than I could provide regarding the matter.
There is already a ton of information across the internet regarding the differences between Linux and Windows that can be Googled, which should provide you all the information you need to know.
If you are looking to install Enso on your laptop but are unsure whether it is compatible, use these helpful tools provided by Ubuntu here or Debian here. Enso is built on Ubuntu, so a system that works with Ubuntu should work with Enso.
A more technical explenation
Enso OS is a custom build of Xubuntu 16.04.03, which incorporates Xfce over the top of Ubuntu, Enso intergrates Gala WM (from the elementary project) into Xfce to give it some nice animations and further function. It also comes with Panther Launcher (a fork of elementary's Slingshot) with slight tweaks to enable it to work better with Xfce, and Plank the 'simplest dock on the planet' to swtich between applications.
Download
Enso OS is ready to download, get version 0.2.1 now!
In our installation tutorial we will be using Etcher to burn Enso to a USB pen drive. You can find the installer for this application on their site; it works on Windows, Mac and Linux machines. You can also use whichever application you see fit.
Donations
Like our work? For our project to continue we rely solely on your donations. We don't add any spyware or spamware to our software, in order to keep it running smoothly and to protect your privacy.
Getting involved
If you wish to get involved in this project, whether from a design, development, or other perspective, then head over to our Github page.
You can also report any issues with our software here | http://docs.enso-os.site/ | 2022-06-25T11:36:12 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.enso-os.site |
What is CampaignChain?¶
CampaignChain is open-source campaign management software to plan, execute and monitor digital marketing campaigns across multiple online communication channels, such as Twitter, Facebook, Google Analytics or third-party CMS, e-commerce and CRM tools.
For marketers, CampaignChain enables marketing managers to have a complete overview of digital campaigns and provides one entry point to multiple communication channels for those who implement campaigns.
User Interface¶
CampaignChain’s Web-based user interface is responsive and works on Desktop computers as well as mobile devices such as Tablets and Smartphones. | https://campaignchain-docs.readthedocs.io/en/1.0.0-beta.1/user/overview.html | 2022-06-25T10:52:44 | CC-MAIN-2022-27 | 1656103034930.3 | [] | campaignchain-docs.readthedocs.io |
Connect the Discover and Command appliances to the Trace appliance
After you deploy the Trace appliance, you must establish a connection from all ExtraHop Discover and Command appliances to the Trace appliance before you can query for packets.
Connected to Discover Appliance
Connected to Discover and Command Appliance
- Log in to the Administration settings.
- If you have a Command appliance, log in to the Administration settings on the Command appliance and repeat steps 3 through 7 for all Trace appliances.
Thank you for your feedback. Can we contact you to ask follow up questions? | https://docs.extrahop.com/8.6/connect-eda-eca-eta/ | 2022-06-25T10:15:04 | CC-MAIN-2022-27 | 1656103034930.3 | [array(['/images/8.6/eda-eta-diagram.png', None], dtype=object)
array(['/images/8.6/eda-eta-eca-diagram.png', None], dtype=object)] | docs.extrahop.com |
openquake.hazardlib.geo package¶
Surface classes¶
- openquake.hazardlib.geo.surface package
- Submodules
- openquake.hazardlib.geo.surface.base module
- openquake.hazardlib.geo.surface.complex_fault module
- openquake.hazardlib.geo.surface.gridded module
- openquake.hazardlib.geo.surface.multi module
- openquake.hazardlib.geo.surface.planar module
- openquake.hazardlib.geo.surface.simple_fault module
- Module contents
geodetic¶
Module
openquake.hazardlib.geo.geodetic contains functions for geodetic
transformations, optimized for massive calculations.
- openquake.hazardlib.geo.geodetic.azimuth(lons1, lats1, lons2, lats2)[source]¶
- Returns
Azimuth as an angle between direction to north from first point and direction to the second point measured clockwise in decimal degrees.
- openquake.hazardlib.geo.geodetic.distance_matrix(lons, lats, diameter=12742.0)[source]¶
- Parameters
lons – array of m longitudes
lats – array of m latitudes
- Returns
matrix of (m, m) distances
- openquake.hazardlib.geo.geodetic.distance_to_arc(alon, alat, aazimuth, plons, plats)[source]¶
Calculate a closest distance between a great circle arc and a point (or a collection of points).
- Parameters
alat (float alon,) – Arc reference point longitude and latitude, in decimal degrees.
azimuth – Arc azimuth (an angle between direction to a north and arc in clockwise direction), measured in a reference point, in decimal degrees.
plats (float plons,) – Longitudes and latitudes of points to measure distance. Either scalar values or numpy arrays of decimal degrees.
- Returns
Distance in km, a scalar value or numpy array depending on
plons and
plats. A distance is negative if the target point lies on the right hand side of the arc.
- openquake.hazardlib.geo.geodetic.geodetic_distance(lons1, lats1, lons2, lats2, diameter=12742.0)[source]¶
Calculate the geodetic distance between two points or two collections of points.
Parameters are coordinates in decimal degrees. They could be scalar float numbers or numpy arrays, in which case they should “broadcast together”.
Implements
- Returns
Distance in km, floating point scalar or numpy array of such.
- openquake.hazardlib.geo.geodetic.intervals_between(lon1, lat1, depth1, lon2, lat2, depth2, length)[source]¶
- Parameters
depth1 (float lon1, lat1,) – Coordinates of a point to start placing intervals from. The first point in the resulting list has these coordinates.
depth2 (float lon2, lat2,) – Coordinates of the other end of the great circle arc segment to put intervals on. The last resulting point might be closer to the first reference point than the second one or further, since the number of segments is taken as rounded division of length between two reference points and
length.
length – Required distance between two subsequent resulting points, in km.
- Returns
Tuple of three 1d numpy arrays: longitudes, latitudes and depths of resulting points respectively.
Rounds the distance between two reference points with respect to
length and calls
npoints_towards().
- openquake.hazardlib.geo.geodetic.min_distance_to_segment(seglons, seglats, lons, lats)[source]¶
This function computes the shortest distance to a segment in a 2D reference system.
- Parameters
seglons – A list or an array of floats specifying the longitude values of the two vertexes delimiting the segment.
seglats – A list or an array of floats specifying the latitude values of the two vertexes delimiting the segment.
lons – A list or a 1D array of floats specifying the longitude values of the points for which the calculation of the shortest distance is requested.
lats – A list or a 1D array of floats specifying the latitude values of the points for which the calculation of the shortest distance is requested.
- Returns
An array of the same shape as lons which contains for each point defined by (lons, lats) the shortest distance to the segment. Distances are negative for those points that stay on the ‘left side’ of the segment direction and whose projection lies within the segment edges. For all other points, distance is positive.
- openquake.hazardlib.geo.geodetic.min_geodetic_distance(a, b)[source]¶
Compute the minimum distance between first mesh and each point of the second mesh when both are defined on the earth surface.
- Parameters
a – a pair of (lons, lats) or an array of cartesian coordinates
b – a pair of (lons, lats) or an array of cartesian coordinates
- openquake.hazardlib.geo.geodetic.npoints_between(lon1, lat1, depth1, lon2, lat2, depth2, npoints)[source]¶
- Parameters
depth1 (float lon1, lat1,) – Coordinates of a point to start from. The first point in a resulting list has these coordinates.
depth2 (float lon2, lat2,) – Coordinates of a point to finish at. The last point in a resulting list has these coordinates.
npoints – Integer number of points to return. First and last points count, so if there have to be two intervals,
npoints should be 3.
- Returns
Tuple of three 1d numpy arrays: longitudes, latitudes and depths of resulting points respectively.
- openquake.hazardlib.geo.geodetic.npoints_towards(lon, lat, depth, azimuth, hdist, vdist, npoints)[source]¶
- Parameters
depth (float lon, lat,) – Coordinates of a point to start from. The first point in a resulting list has these coordinates.
azimuth – A direction representing a great circle arc together with a reference point.
hdist – Horizontal (geodetic) distance from reference point to the last point of the resulting list, in km.
vdist – Vertical (depth) distance between reference and the last point, in km.
npoints – Integer number of points to return. First and last points count, so if there have to be two intervals,
npoints should be 3.
- Returns
Tuple of three 1d numpy arrays: longitudes, latitudes and depths of resulting points respectively.
- openquake.hazardlib.geo.geodetic.point_at(lon, lat, azimuth, distance)[source]¶
- Parameters
lat (float lon,) – Coordinates of a reference point, in decimal degrees.
azimuth – An azimuth of a great circle arc of interest measured in a reference point in decimal degrees.
distance – Distance to target point in km.
- Returns
Tuple of two float numbers: longitude and latitude of a target point in decimal degrees respectively.
Implements the same approach as
npoints_towards().
- openquake.hazardlib.geo.geodetic.spherical_to_cartesian(lons, lats, depths=None)[source]¶
Return the position vectors (in Cartesian coordinates) of list of spherical coordinates.
For equations see:.
Parameters are components of spherical coordinates in a form of scalars, lists or numpy arrays.
depths can be
None in which case it's considered zero for all points.
- Returns
numpy.arrayof 3d vectors representing points’ coordinates in Cartesian space in km. The array has shape lons.shape + (3,). In particular, if
lons and
lats are scalars the result is a 3D vector and if they are vectors the result is a matrix of shape (N, 3).
See also
cartesian_to_spherical().
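A short usage sketch of the helpers above (coordinates are arbitrary example values; argument order follows the signatures shown in this section):

import numpy as np
from openquake.hazardlib.geo import geodetic

# Distance (km) and azimuth (degrees) between two points given in decimal degrees.
dist = geodetic.geodetic_distance(10.0, 45.0, 10.5, 45.5)
azim = geodetic.azimuth(10.0, 45.0, 10.5, 45.5)
print(round(dist, 1), "km at azimuth", round(azim, 1))

# Five equally spaced points (including both ends) along the connecting arc.
lons, lats, deps = geodetic.npoints_between(10.0, 45.0, 0.0,
                                            10.5, 45.5, 0.0, 5)
print(np.round(lons, 3), np.round(lats, 3), deps)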
line¶
Module
openquake.hazardlib.geo.line defines
Line.
- class openquake.hazardlib.geo.line.Line(points)[source]¶
Bases:
object
This class represents a geographical line, which is basically a sequence of geographical points.
A line is defined by at least two points.
- average_azimuth()[source]¶
Calculate and return weighted average azimuth of all line's segments in decimal degrees. Uses the formula for the mean of circular quantities.
>>> from openquake.hazardlib.geo.point import Point as P
>>> '%.1f' % Line([P(0, 0), P(1e-5, 1e-5)]).average_azimuth()
'45.0'
>>> '%.1f' % Line([P(0, 0), P(0, 1e-5), P(1e-5, 1e-5)]).average_azimuth()
'45.0'
>>> line = Line([P(0, 0), P(-2e-5, 0), P(-2e-5, 1.154e-5)])
>>> '%.1f' % line.average_azimuth()
'300.0'
- classmethod from_vectors(lons, lats, deps=None)[source]¶
Creates a line from three :numpy:`numpy.ndarray` instances containing longitude, latitude and depths values
- get_length() float [source]¶
Calculate the length of the line as a sum of lengths of all its segments. :returns:
Total length in km.
- get_lengths() numpy.ndarray [source]¶
Calculate a
numpy.ndarray instance with the length of the segments composing the polyline :returns:
Segments length in km.
- get_tu(mesh)[source]¶
Computes the U and T coordinates of the GC2 method for a mesh of points :param mesh:
An instance of
openquake.hazardlib.geo.mesh.Mesh
- get_tu_hat()[source]¶
Return the unit vectors defining the local origin for each segment of the trace. :param sx:
The vector with the x coordinates of the trace
- Parameters
sy – The vector with the y coordinates of the trace
- Returns
Two arrays of size n x 3 (when n is the number of segments composing the trace
- get_ui_ti(mesh, uhat, that)[source]¶
Compute the t and u coordinates. ti and ui have shape (num_segments x num_sites)
- horizontal()[source]¶
Check if this line is horizontal (i.e. all depths of points are equal). :returns bool:
True if this line is horizontal, false otherwise.
- keep_corners(delta)[source]¶
Removes the points where the change in direction is lower than a tolerance value. :param delta:
An angle in decimal degrees
- on_surface()[source]¶
Check if this line is defined on the surface (i.e. all points are on the surfance, depth=0.0).
- Returns bool
True if this line is on the surface, false otherwise.
- resample(section_length)[source]¶
Resample this line into sections. The first point in the resampled line corresponds to the first point in the original line. Starting from the first point in the original line, a line segment is defined as the line connecting the last point in the resampled line and the next point in the original line. The line segment is then split into sections of length equal to
section_length. The resampled line is obtained by concatenating all sections. The number of sections in a line segment is calculated as follows:
round(segment_length / section_length). Note that the resulting line has a length that is an exact multiple of
section_length, therefore its length is in general smaller or greater (depending on the rounding) than the length of the original line. For a straight line, the difference between the resulting length and the original length is at maximum half of the
section_length. For a curved line, the difference my be larger, because of corners getting cut.
- Parameters
section_length (float) – The length of the section, in km.
- Returns
A new line resampled into sections based on the given length.
- Return type
-
- openquake.hazardlib.geo.line.get_average_azimuth(azimuths, distances) float [source]¶
Computes the average azimuth :param azimuths:
A
numpy.ndarray instance
- Parameters
distances – A
numpy.ndarray instance
- Returns
A float with the mean azimuth in decimal degrees
- openquake.hazardlib.geo.line.get_tu(ui, ti, sl, weights)[source]¶
Compute the T and U quantitities :param ui:
A
numpy.ndarray instance of cardinality (num segments x num sites)
- Parameters
ti – A
numpy.ndarray instance of cardinality (num segments x num sites)
sl – A
numpy.ndarray instance with the segments' length
weights – A
numpy.ndarray instance of cardinality (num segments x num sites)
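A brief usage sketch of the Line class (example coordinates only):

from openquake.hazardlib.geo.point import Point
from openquake.hazardlib.geo.line import Line

line = Line([Point(10.0, 45.0), Point(10.2, 45.1), Point(10.5, 45.3)])
print(line.get_length())       # total length of the polyline, in km
print(line.average_azimuth())  # weighted average azimuth, in decimal degrees

# Resample into sections of roughly 2 km; a new Line is returned.
resampled = line.resample(2.0)
print(resampled.get_length())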
mesh¶
Module
openquake.hazardlib.geo.mesh defines classes
Mesh and
its subclass
RectangularMesh.
- class openquake.hazardlib.geo.mesh.Mesh(lons, lats, depths=None)[source]¶
Bases:
object
Mesh object represent a collection of points and provides the most efficient way of keeping those collections in memory.
- Parameters
lons – A numpy array of longitudes. Can be 1D or 2D.
lats – Numpy array of latitudes. The array must be of the same shape as
lons.
depths – Either
None, which means that all points the mesh consists of are lying on the earth surface (have zero depth) or numpy array of the same shape as previous two.
Mesh object can also be created from a collection of points, see
from_points_list().
- DIST_TOLERANCE = 0.005¶
Tolerance level to be used in various spatial operations when approximation is required – set to 5 meters.
- classmethod from_coords(coords, sort=True)[source]¶
Create a mesh object from a list of 3D coordinates (by sorting them)
- Params coords
list of coordinates
- Parameters
sort – flag (default True)
- Returns
-
- classmethod from_points_list(points)[source]¶
Create a mesh object from a collection of points.
- get_closest_points(mesh)[source]¶
Find closest point of this mesh for each point in the other mesh
- get_convex_hull()[source]¶
Get a convex polygon object that contains projections of all the points of the mesh.
- Returns
Instance of
openquake.hazardlib.geo.polygon.Polygon that is a convex hull around all the points in this mesh. If the original mesh had only one point, the resulting polygon has a square shape with a side length of 10 meters. If there were only two points, the resulting polygon is a stripe 10 meters wide.
- get_distance_matrix()[source]¶
Compute and return distances between each pairs of points in the mesh.
This method requires that the coordinate arrays are one-dimensional. NB: the depth of the points is ignored
Warning
Because of its quadratic space and time complexity this method is safe to use for meshes of up to several thousand points. For mesh of 10k points it needs ~800 Mb for just the resulting matrix and four times that much for intermediate storage.
- Returns
Two-dimensional numpy array, square matrix of distances. The matrix has zeros on main diagonal and positive distances in kilometers on all other cells. That is, value in cell (3, 5) is the distance between mesh’s points 3 and 5 in km, and it is equal to value in cell (5, 3).
Uses
openquake.hazardlib.geo.geodetic.geodetic_distance().
- get_joyner_boore_distance(mesh)[source]¶
Compute and return Joyner-Boore distance to each point of
mesh. Point’s depth is ignored.
See
openquake.hazardlib.geo.surface.base.BaseSurface.get_joyner_boore_distance() for definition of this distance.
- Returns
numpy array of distances in km of the same shape as
mesh. Distance value is considered to be zero if a point lies inside the polygon enveloping the projection of the mesh or on one of its edges.
- get_min_distance(mesh)[source]¶
Compute and return the minimum distance from the mesh to each point in another mesh.
- Returns
numpy array of distances in km of shape (self.size, mesh.size)
Method doesn’t make any assumptions on arrangement of the points in either mesh and instead calculates the distance from each point of this mesh to each point of the target mesh and returns the lowest found for each.
- property shape¶
Return the shape of this mesh.
- Returns tuple
The shape of this mesh as (rows, columns)
- class openquake.hazardlib.geo.mesh.RectangularMesh(lons, lats, depths=None)[source]¶
Bases:
openquake.hazardlib.geo.mesh.Mesh
A specification of
Mesh that requires coordinate numpy-arrays to be two-dimensional.
Rectangular mesh is meant to represent not just an unordered collection of points but rather a sort of table of points, where index of the point in a mesh is related to it’s position with respect to neighbouring points.
- classmethod from_points_list(points)[source]¶
Create a rectangular mesh object from a list of lists of points. Lists in a list are supposed to have the same length.
- get_cell_dimensions()[source]¶
Calculate centroid, width, length and area of each mesh cell.
- Returns
Tuple of four elements, each being 2d numpy array. Each array has both dimensions less by one the dimensions of the mesh, since they represent cells, not vertices. Arrays contain the following cell information:
centroids, 3d vectors in a Cartesian space,
length (size along row of points) in km,
width (size along column of points) in km,
area in square km.
- get_mean_inclination_and_azimuth()[source]¶
Calculate weighted average inclination and azimuth of the mesh surface.
- Returns
Tuple of two float numbers: inclination angle in a range [0, 90] and azimuth in range [0, 360) (in decimal degrees).
The mesh is triangulated, the inclination and azimuth for each triangle is computed and average values weighted on each triangle’s area are calculated. Azimuth is always defined in a way that inclination angle doesn’t exceed 90 degree.
- get_mean_width()[source]¶
Calculate and return (weighted) mean width (km) of a mesh surface.
The length of each mesh column is computed (summing up the cell widths in a same column), and the mean value (weighted by the mean cell length in each column) is returned.
- get_middle_point()[source]¶
Return the middle point of the mesh.
- Returns
-
The middle point is taken from the middle row and a middle column of the mesh if there are odd number of both. Otherwise the geometric mean point of two or four middle points.
- triangulate()[source]¶
Convert mesh points to vectors in Cartesian space.
- Returns
Tuple of four elements, each being 2d numpy array of 3d vectors (the same structure and shape as the mesh itself). Those arrays are:
points vectors,
vectors directed from each point (excluding the last column) to the next one in a same row →,
vectors directed from each point (excluding the first row) to the previous one in a same column ↑,
vectors pointing from a bottom left point of each mesh cell to top right one ↗.
So the last three arrays of vectors allow to construct triangles covering the whole mesh.
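A brief usage sketch of the mesh classes (example coordinates only):

import numpy as np
from openquake.hazardlib.geo.point import Point
from openquake.hazardlib.geo.mesh import Mesh

# A mesh can be built directly from coordinate arrays ...
mesh = Mesh(np.array([10.0, 10.1, 10.2]), np.array([45.0, 45.0, 45.0]))

# ... or from a collection of Point objects.
other = Mesh.from_points_list([Point(10.05, 45.2), Point(10.15, 45.3)])

# Minimum distance in km from `mesh` to each point of `other`.
print(mesh.get_min_distance(other))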
nodalplane¶
Module
openquake.hazardlib.geo.nodalplane implements
NodalPlane.
- class openquake.hazardlib.geo.nodalplane.NP(strike, dip, rake)¶
Bases:
tuple
- class openquake.hazardlib.geo.nodalplane.NodalPlane(strike, dip, rake)[source]¶
Bases:
object
Nodal plane represents earthquake rupture orientation and propagation direction.
- Parameters
strike – Angle between line created by the intersection of rupture plane and the North direction (defined between 0 and 360 degrees).
dip – Angle between earth surface and fault plane (defined between 0 and 90 degrees).
rake – Angle describing rupture propagation direction (defined between -180 and +180 degrees).
- Raises
ValueError – If any of parameters exceeds the definition range.
- classmethod check_dip(dip)[source]¶
Check if
dip is in range
(0, 90] and raise
ValueError otherwise.
- classmethod check_rake(rake)[source]¶
Check if
rake is in range
(-180, 180] and raise
ValueError otherwise.
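A tiny usage sketch (values are arbitrary):

from openquake.hazardlib.geo.nodalplane import NodalPlane

plane = NodalPlane(strike=45.0, dip=30.0, rake=90.0)  # a valid reverse-faulting plane

# Values outside the documented ranges raise ValueError:
try:
    NodalPlane(strike=45.0, dip=95.0, rake=90.0)
except ValueError as exc:
    print(exc)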
point¶
Module
openquake.hazardlib.geo.point defines
Point.
- class openquake.hazardlib.geo.point.Point(longitude, latitude, depth=0.0)[source]¶
Bases:
object
This class represents a geographical point in terms of longitude, latitude, and depth (with respect to the Earth surface).
- Parameters
longitude (float) – Point longitude, in decimal degrees.
latitude (float) – Point latitude, in decimal degrees.
depth (float) – Point depth (default to 0.0), in km. Depth > 0 indicates a point below the earth surface, and depth < 0 above the earth surface.
- azimuth(point)[source]¶
Compute the azimuth (in decimal degrees) between this point and the given point.
- closer_than(mesh, radius)[source]¶
Check for proximity of points in the
mesh.
- Parameters
mesh –
openquake.hazardlib.geo.mesh.Mesh instance.
radius – Proximity measure in km.
- Returns
Numpy array of boolean values in the same shape as the mesh coordinate arrays with
True on indexes of points that are not further than
radius km from this point. Function
distance() is used to calculate distances to points of the mesh. Points of the mesh that lie exactly
radius km away from this point also have
True in their indices.
- distance(point)[source]¶
Compute the distance (in km) between this point and the given point.
Distance is calculated using pythagoras theorem, where the hypotenuse is the distance and the other two sides are the horizontal distance (great circle distance) and vertical distance (depth difference between the two locations).
- distance_to_mesh(mesh, with_depths=True)[source]¶
Compute distance (in km) between this point and each point of
mesh.
- Parameters
mesh –
Meshof points to calculate distance to.
with_depths – If
True (by default), distance is calculated between actual point and the mesh, geodetic distance of projections is combined with vertical distance (difference of depths). If this is set to
False, only geodetic distance between projections is calculated.
- Returns
Numpy array of floats of the same shape as
mesh with distance values in km in respective indices.
- equally_spaced_points(point, distance)[source]¶
Compute the set of points equally spaced between this point and the given point.
- classmethod from_vector(vector)[source]¶
Create a point object from a 3d vector in Cartesian space.
- on_surface()[source]¶
Check if this point is defined on the surface (depth is 0.0).
- Returns bool
True if this point is on the surface, false otherwise.
- point_at(horizontal_distance, vertical_increment, azimuth)[source]¶
Compute the point with given horizontal, vertical distances and azimuth from this point.
- Parameters
horizontal_distance (float) – Horizontal distance, in km.
vertical_increment (float) – Vertical increment, in km. When positive, the new point has a greater depth. When negative, the new point has a smaller depth.
- Returns
The point at the given distances.
- Return type
-
- to_polygon(radius)[source]¶
Create a circular polygon with specified radius centered in the point.
- property wkt2d¶
Generate WKT (Well-Known Text) to represent this point in 2 dimensions (ignoring depth).
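A brief usage sketch of the Point class (example coordinates only):

from openquake.hazardlib.geo.point import Point

p1 = Point(10.0, 45.0)        # lon, lat; depth defaults to 0.0
p2 = Point(10.5, 45.5, 5.0)   # 5 km below the surface

print(p1.distance(p2))        # km, depth difference included
print(p1.azimuth(p2))         # decimal degrees

# A point 10 km away on the surface, due east of p1.
p3 = p1.point_at(horizontal_distance=10.0, vertical_increment=0.0, azimuth=90.0)
print(p3.longitude, p3.latitude)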
polygon¶
Module
openquake.hazardlib.geo.polygon defines
Polygon.
- class openquake.hazardlib.geo.polygon.Polygon(points)[source]¶
Bases:
object
Polygon objects represent an area on the Earth surface.
- Parameters
points – The list of
Point objects defining the polygon vertices. The points are connected by great circle arcs in order of appearance. Polygon segment should not cross another polygon segment. At least three points must be defined.
- Raises
ValueError – If
points contains less than three unique points or if polygon perimeter intersects itself.
- dilate(dilation)[source]¶
Extend the polygon to a specified buffer distance.
Note
In extreme cases where dilation of a polygon creates holes, thus resulting in a multi-polygon, we discard the holes and simply return the ‘exterior’ of the shape.
- discretize(mesh_spacing)[source]¶
Get a mesh of uniformly spaced points inside the polygon area with distance of
mesh_spacing km between.
- classmethod from_wkt(wkt_string)[source]¶
Create a polygon object from a WKT (Well-Known Text) string.
- Parameters
wkt_string – A standard WKT polygon string.
- Returns
-
- intersects(mesh)[source]¶
Check for intersection with each point of the
mesh.
Mesh coordinate values are in decimal degrees.
- Parameters
mesh –
openquake.hazardlib.geo.mesh.Mesh instance.
- Returns
Numpy array of bool values in the same shapes in the input coordinate arrays with
True on indexes of points that lie inside the polygon or on one of its edges and
False for points that neither lie inside nor touch the boundary.
- openquake.hazardlib.geo.polygon.UPSAMPLING_STEP_KM = 100¶
Polygon upsampling step for long edges, in kilometers. See
get_resampled_coordinates().
- openquake.hazardlib.geo.polygon.get_resampled_coordinates(lons, lats)[source]¶
Resample polygon line segments and return the coordinates of the new vertices. This limits distortions when projecting a polygon onto a spherical surface.
Parameters define longitudes and latitudes of a point collection in the form of lists or numpy arrays.
- Returns
A tuple of two numpy arrays: longitudes and latitudes of resampled vertices.
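A brief usage sketch of the Polygon class (example coordinates only):

from openquake.hazardlib.geo.point import Point
from openquake.hazardlib.geo.polygon import Polygon

poly = Polygon([Point(10.0, 45.0), Point(10.5, 45.0),
                Point(10.5, 45.4), Point(10.0, 45.4)])

# Buffer the polygon by 20 km, then discretize the result into a mesh of
# points spaced roughly 5 km apart.
dilated = poly.dilate(20.0)
mesh = dilated.discretize(5.0)
print(len(mesh.lons))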
utils¶
Module
openquake.hazardlib.geo.utils contains functions that are common
to several geographical primitives and some other low-level spatial operations.
- class openquake.hazardlib.geo.utils.OrthographicProjection(west, east, north, south)[source]¶
Bases:
object
Callable OrthographicProjection object that can perform both forward and reverse projection (converting from longitudes and latitudes to x and y values on 2d-space and vice versa). The call takes three arguments: first two are numpy arrays of longitudes and latitudes or abscissae and ordinates of points to project and the third one is a boolean that allows to choose what operation is requested – is it forward or reverse one.
True value given to third positional argument (or keyword argument "reverse") indicates that the projection of points in 2d space back to earth surface is needed. The default value for "reverse" argument is
False, which means forward projection (degrees to kilometers).
Raises
ValueError in forward projection mode if any of the target points is further than 90 degrees (along the great circle arc) from the projection center.
- class openquake.hazardlib.geo.utils.PolygonPlotter(ax)[source]¶
Bases:
object
Add polygons to a given axis object
- exception openquake.hazardlib.geo.utils.SiteAssociationError[source]¶
Bases:
Exception
Raised when there are no sites close enough
- class openquake.hazardlib.geo.utils.SphericalBB(west, east, north, south)¶
Bases:
tuple
- openquake.hazardlib.geo.utils.angular_distance(km, lat=0, lat2=None)[source]¶
Return the angular distance of two points at the given latitude.
>>> '%.3f' % angular_distance(100, lat=40)
'1.174'
>>> '%.3f' % angular_distance(100, lat=80)
'5.179'
- openquake.hazardlib.geo.utils.assoc(objects, sitecol, assoc_dist, mode)[source]¶
Associate geographic objects to a site collection.
- Parameters
objects – something with .lons, .lats or [‘lon’] [‘lat’], or a list of lists of objects with a .location attribute (i.e. assets_by_site)
assoc_dist – the maximum distance for association
mode – if ‘strict’ fail if at least one site is not associated if ‘error’ fail if all sites are not associated
- Returns
(filtered site collection, filtered objects)
- openquake.hazardlib.geo.utils.assoc_to_polygons(polygons, data, sitecol, mode)[source]¶
Associate data from a shapefile with polygons to a site collection :param polygons: polygon shape data :param data: rest of the data belonging to the shapes :param sitecol: a (filtered) site collection :param mode: ‘strict’, ‘warn’ or ‘filter’ :returns: filtered site collection, filtered objects, discarded
- openquake.hazardlib.geo.utils.bbox2poly(bbox)[source]¶
- Parameters
bbox – a geographic bounding box West-East-North-South
- Returns
a list of pairs corresponding to the bbox polygon
- openquake.hazardlib.geo.utils.cartesian_to_spherical(vectors)[source]¶
Return the spherical coordinates for coordinates in Cartesian space.
This function does an opposite to
spherical_to_cartesian().
- Parameters
vectors – Array of 3d vectors in Cartesian space of shape (…, 3)
- Returns
Tuple of three arrays of the same shape as
vectors representing longitude (decimal degrees), latitude (decimal degrees) and depth (km) in specified order.
- openquake.hazardlib.geo.utils.check_extent(lons, lats, msg='')[source]¶
- Parameters
lons – an array of longitudes (more than one)
lats – an array of latitudes (more than one)
- Params msg
message to display in case of too large extent
- Returns
(dx, dy, dz) in km (rounded)
- openquake.hazardlib.geo.utils.clean_points(points)[source]¶
Given a list of
Point objects, return a new list with adjacent duplicate points removed.
- openquake.hazardlib.geo.utils.cross_idl(lon1, lon2, *lons)[source]¶
- openquake.hazardlib.geo.utils.fix_lon(lon)[source]¶
- Returns
a valid longitude in the range -180 <= lon < 180
>>> fix_lon(11)
11
>>> fix_lon(181)
-179
>>> fix_lon(-182)
178
- openquake.hazardlib.geo.utils.geohash(lon, lat, length)[source]¶
Encode a position given in lon, lat into a geohash of the given length
>>> geohash(lon=10, lat=45, length=5)
b'spzpg'
- openquake.hazardlib.geo.utils.get_bounding_box(obj, maxdist)[source]¶
Return the dilated bounding box of a geometric object.
- Parameters
obj – an object with method .get_bounding_box, or with an attribute .polygon or a list of locations
maxdist – maximum distance in km
- openquake.hazardlib.geo.utils.get_longitudinal_extent(lon1, lon2)[source]¶
Return the distance between two longitude values as an angular measure. Parameters represent two longitude values in degrees.
- Returns
Float, the angle between
lon1 and
lon2 in degrees. Value is positive if
lon2 is on the east from
lon1 and negative otherwise. Absolute value of the result doesn't exceed 180 for valid parameters values.
- openquake.hazardlib.geo.utils.get_middle_point(lon1, lat1, lon2, lat2)[source]¶
Given two points return the point exactly in the middle lying on the same great circle arc.
Parameters are point coordinates in degrees.
- Returns
Tuple of longitude and latitude of the point in the middle.
- openquake.hazardlib.geo.utils.get_spherical_bounding_box(lons, lats)[source]¶
- Returns
A tuple of four items. These items represent western, eastern, northern and southern borders of the bounding box respectively. Values are floats in decimal degrees.
- Raises
ValueError – If points collection has the longitudinal extent of more than 180 degrees (it is impossible to define a single hemisphere bound to poles that would contain the whole collection).
- openquake.hazardlib.geo.utils.line_intersects_itself(lons, lats, closed_shape=False)[source]¶
- Parameters
closed_shape – If
True the line will be checked twice: first time with its original shape and second time with the points sequence being shifted by one point (the last point becomes first, the first turns second and so on). This is useful for checking that the sequence of points defines a valid
Polygon.
- openquake.hazardlib.geo.utils.normalized(vector)[source]¶
Get unit vector for a given one.
- Parameters
vector – Numpy vector as coordinates in Cartesian space, or an array of such.
- Returns
Numpy array of the same shape and structure where all vectors are normalized. That is, each coordinate component is divided by its vector’s length.
- openquake.hazardlib.geo.utils.plane_fit(points)[source]¶
This fits an n-dimensional plane to a set of points. See
- Parameters
points – An instance of :class:~numpy.ndarray. The number of columns must be equal to three.
- Returns
A point on the plane and the normal to the plane.
- openquake.hazardlib.geo.utils.point_to_polygon_distance(polygon, pxx, pyy)[source]¶
Calculate the distance to polygon for each point of the collection on the 2d Cartesian plane.
- Parameters
polygon – Shapely “Polygon” geometry object.
pxx – List or numpy array of abscissae values of points to calculate the distance from.
pyy – Same structure as
pxx, but with ordinate values.
- Returns
Numpy array of distances in units of coordinate system. Points that lie inside the polygon have zero distance.
- openquake.hazardlib.geo.utils.triangle_area(e1, e2, e3)[source]¶
Get the area of triangle formed by three vectors.
Parameters are three three-dimensional numpy arrays representing vectors of triangle’s edges in Cartesian space.
- Returns
Float number, the area of the triangle in squared units of coordinates, or numpy array of shape of edges with one dimension less.
Uses Heron's formula.
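A brief usage sketch of the coordinate helpers (example values only):

import numpy as np
from openquake.hazardlib.geo import geodetic, utils

# How many degrees of longitude correspond to 100 km at latitude 45?
print(utils.angular_distance(100.0, lat=45.0))

# Round-trip between spherical and Cartesian coordinates.
xyz = geodetic.spherical_to_cartesian([10.0, 10.5], [45.0, 45.5], [0.0, 5.0])
lons, lats, deps = utils.cartesian_to_spherical(xyz)
print(np.round(lons, 4), np.round(lats, 4), np.round(deps, 4))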
Module contents¶
Package
openquake.hazardlib.geo contains implementations of different
geographical primitives, such as
Point,
Line
Polygon and
Mesh, as well as different
surface implementations and utility
class
NodalPlane. | https://docs.openquake.org/oq-engine/3.14/openquake.hazardlib.geo.html | 2022-06-25T10:55:31 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.openquake.org |
Directory Config Source
Configuration
Config Directory can be configured by using Admin Console or Asadmin commands.
Using the Admin Console
To configure the Config Directory in the Admin Console, go to Configuration → [instance-configuration (like server-config)] → MicroProfile → Config → Directory:
Using Asadmin Commands
set-config-dir
- Usage
asadmin> set-config-dir --directory=<full.path.to.dir> --target=<target[default:server]>
- Aim
Sets the directory to be used for the directory config source.
Leaf directory cannot start with a dot, rendering /home/payara/.secret an invalid path ("." means hidden on a POSIX filesystem).
If relative, the directory will be looked up beneath the server instance location (usually found at <PAYARA-HOME>/glassfish/domains/<DOMAIN>/).
Defaults to secrets, targeting <PAYARA-HOME>/glassfish/domains/<DOMAIN>/secrets/.
get-config-dir
(Tech Preview)
- Usage
asadmin> get-config-dir --target=<target[default:server]>
- Aim
Gets the value of the directory to be used for the directory config source.
Using Preboot or Postboot Scripts
When running a Payara Platform distribution in a container, there is no way to restart a running instance after configuring via an asadmin command without losing the changes to the domain configuration.
Thus you should add the corresponding command to a boot script as described at Pre- and Postboot scripts.
See Payara Server Docker Image environment variables for details how to use this within non-micro containers.
Usage
Usually this config source is used to map secrets mounted by Kubernetes or Docker to a directory inside a running container.
Once you configured a directory to read a config (secrets) from, you need to make sure file names correspond to properties in your codebase.
Map property names to flat file hierarchy
Say you have two properties property1 and foo.bar.property2.
Payara is configured with secret directory /home/payara/secrets.
Secrets mounted as files to /home/payara/secrets/property1 and /home/payara/secrets/foo.bar.property2 will be read.
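For example, the flat layout could be populated from a shell like this (paths and values are illustrative only):
mkdir -p /home/payara/secrets
# echo -n keeps a trailing newline out of the stored value
echo -n 'value-one' > /home/payara/secrets/property1
echo -n 'value-two' > /home/payara/secrets/foo.bar.property2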
Map property names to a sub-directory structure
Say you have two properties property1 and foo.bar.property2.
Payara is configured with secret directory /home/payara/secrets.
Secrets mounted as files to /home/payara/secrets/property1 and /home/payara/secrets/foo/bar/property2 will be read.
Restrictions on the files content are the same as with the flat hierarchy.
Updates to files and subdirectories are picked up at runtime. Retrieving the config property again will use the updated values. This allows for clearing a value by removing the file, too.
Dots usage and depicting directories and file name
Dots in property names are used to reflect scopes, for example to distinguish different applications, modules, etc.
Any dots in the property name may correspond to changing from one directory to a subdirectory. Example: foo.bar.test could be a file test in the foo/bar/ path.
You may combine any number of dot-separated "components" into directories and file name. Example: foo.bar.test.example may be a file test.example in the foo/bar/ path, or a file example in the path foo/bar.test/, and so on.
Do not use a file extension, as it would be taken as part of the property name.
The longest, most specific match "wins" for reading the value into the property. This allows you to create scoped directory structures as you see fit. Example: foo.bar/test.example is less specific than foo/bar/test.example, and so on.
You cannot use directories or files whose names start with a dot. They will be ignored, following the POSIX philosophy of hidden files and folders.
Symbolic links will be followed, so you can expose files from such hidden areas, allowing for all types of mangling with names etc. Don’t link to directories, as the file monitors rely on real directories.
Kubernetes Example
You want to retrieve a (secret) value via property foo.bar.property1:
@ConfigProperty(name = "foo.bar.property1")
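As a rough sketch of the consuming side (the class name and scope are made up for illustration; Payara 5 would use the javax.* equivalents of the jakarta.* imports shown here):
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;
import org.eclipse.microprofile.config.inject.ConfigProperty;

@ApplicationScoped
public class SecretConsumer {

    // Resolved from the directory config source once the secret file is mounted.
    @Inject
    @ConfigProperty(name = "foo.bar.property1")
    String property1;

    public String currentSecret() {
        return property1;
    }
}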
You deployed a secret to your Kubernetes cluster:
apiVersion: v1
kind: Secret
metadata:
  name: foobar
type: Opaque
stringData:
  property1: my-super-secret-value
And your pod mounts it at /home/payara/secrets/foo/bar (only showing the relevant parts from the Deployment K8s YAML):
volumeMounts:
  - name: test-secret
    mountPath: /home/payara/secrets/foo/bar
volumes:
  - name: test-secret
    secret:
      secretName: foobar
/ # ls -la /home/payara/secrets/foo/bar
total 3
drwxrwxrwt    3 root     root           120 Nov 25 10:51 .
drwxr-xr-x    3 root     root          4096 Nov 25 10:51 ..
drwxr-xr-x    2 root     root            80 Nov 25 10:51 ..2020_11_25_10_51_55.283009570
lrwxrwxrwx    1 root     root            31 Nov 25 10:51 ..data -> ..2020_11_25_10_51_55.283009570
lrwxrwxrwx    1 root     root            15 Nov 25 10:51 property1 -> ..data/property1
The server instance will pick up the file and read its content as a value for property foo.bar.property1.
'Set Config Property'], dtype=object) ] | docs.payara.fish |
This policy is intended to provide exclusive access to operational devices for certain privileged Principals (users and groups). If a Principal is assigned permissions at the device level against a concrete device, the Principal (referred to as the primary Principal) automatically claims exclusive access to the device. This means that other Principals are not authorized to access this device, under any circumstances.
To assign a different set of permissions than the primary Principal, the notion of an abstract Principal called Others is introduced. The Others Principal is used to represent the rest of the Principals that are not the primary Principal, and can be used by the administrator to assign permissions (typically less effective privileges) on the device.
After a Principal is assigned explicit permissions on concrete devices, the permission enforcement does not fall back to the network or system. | https://docs.vmware.com/en/VMware-Smart-Assurance/10.1.1/ncm-security-configuration-guide-10.1.1/GUID-217822E5-F633-4AD7-A3B4-829486A0E031.html | 2022-06-25T10:55:30 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.vmware.com |
Previously, Network Configuration Manager handled only devices with single configuration file states.
With the multi-config feature implementation, the system now supports devices with multiple configuration files by revisioning all config files defined in a device package -- for a device class.
Multi-Config offers you a known view of any device at any point-in-time!
With Multi-Configuration, Network Configuration Manager can ensure:
Cisco devices can be restored if they have a second configuration file.
Devices with multiple configurations are supported.
Firewalls are supported, along with new communication methods.
Compliance can be enforced on more than one configuration file on a device
Within configuration management, generally devices have 1 -n configuration units.
Following are the "families" of devices, based on multiple configuration units , that are supported:
Alcatel
Checkpoint
IOS Config
Juniper
Cisco Firewall Switch Module
Cisco VPN Concentrator
Passport MSS
Nortel ARN
Device Package
A device family (or class) is represented by a device package. A device package is a collection of driver code and class metadata that is used to manage all devices of that class. Device packages are field deployable.
The Device Package:
Declares the configuration files that are to be captured from the device
Declares the potential destinations and operations supported for the files
Ties every file with a choice of supported destinations and push mechanisms
Ties every file with a choice of activation methods and post operations. For example, pushToMemory, copyToStart.
Declares device level operations. For example, reboot.
Includes defaults that should be used for autonomous operations
Configuration Unit
A configuration unit is a logical set of information that describes or controls the behavior of a device in the context of a network and its relationship with other devices. A configuration unit may be available as "read-only" data, in which case, it may contribute to the state of a device but it is not available for modification.
Device States
When all the configuration units (as declared in the device package) are captured without any errors, the device state is Completed. Rollback and Restore can be performed from this completed state.
When a certain configuration unit (although declared in the device package) is not captured due to errors or intermediate pulls, the state of the device is Partial. You can complete a rollback or a restore from a single configuration unit in the partial state.
On the other hand, some configuration units are available for modification; Network Administrators often modify these to control the higher-level services offered on that network.
The Schedule Manager provides a view into all jobs scheduled, and their job status within Network Configuration Manager.
When any changes have been made to the Schedule Manager, such as Jobs Added, Jobs Changed, or Jobs Removed, the Schedule Manager view is automatically refreshed (updated) to reflect these changes.
You can access the Schedule Manager from the Tools option in the menu bar.
The Schedule Manager has separate sections detailing information pertaining to various jobs. These include the Job View and the Recurring Series. The Recurring Series tab displays primary information for recurring jobs.
Within the Job View tab contents, the first section of the Schedule Manager details each job's information. The various columns give you specific job information.
To view even more job information, right-click on any column heading to see the available column options in the Select Display Columns window. Move column headings between the Available Items and Selected Items to view more or less job information displayed within the Schedule Manager.
After making your selections, click OK. Note: If you have previously defined filters, use the Quick Filter drop-down arrow to see the selections of filters, then make a selection from the list. To designate a new filter, click the Apply icon, and then define your filter criteria.
This Status tab displays the current status of each job. Jobs fall into the following categories:
Note: The tool bars displayed in Job View and Recurring Series include some different icons. To review the icons in the Recurring Series that differ from those in the Job View see: Recurring Series.
Completed
Approved
Rejected
Running
Failed
Pending
Completed/Warning
Cancelled
Partially Completed
Hold
Once a job displays in the Schedule Manager, the following tasks can be completed:
Within the Job View tab, view more information by clicking the Details icon. The bottom section of the Schedule Manager displays the details of a job, including the General and Task information for the job selected from the list.
Schedule Manager tool bar options
| https://docs.vmware.com/en/VMware-Smart-Assurance/10.1.8/ncm-online-help-1018/GUID-B9784635-6A28-4EB5-A556-4338D56FA9A8.html | 2022-06-25T12:10:57 | CC-MAIN-2022-27 | 1656103034930.3 | [array(['images/GUID-8494EAF0-A49D-478E-9A27-D38E8C077AD4-low.png', None],
dtype=object)
array(['images/GUID-F272F733-68FD-46F7-BC04-5CA7CB8A742E-low.png', None],
dtype=object)
array(['images/GUID-115F1220-F9E0-439A-8F59-CFAAFFEAC395-low.png', None],
dtype=object) ] | docs.vmware.com |
Hau to do things with Words
Part 1: How to Free your Software
A four step approach
There are several steps to freeing software.
Step 1: Get computer, write software. The first step is the hardest: it requires an extensive knowledge of the world of computer operating systems, the functioning of computers, the various possible programming languages, networks, protocols, development software – and, most importantly, a zen-like attitude towards the proper placement of special characters like parentheses or hash marks. It requires no math, no physics, and to write it you do not have to be “good with machines”. Nonetheless, the first step might take a few years.
Step 2: Make your “source code” freely available to anyone. “Source code” is a shorthand for the “human readable version” of a piece of software – your definition of human may vary. Source code, with all of its human-readable instructions, variables, parameters, comments, and carefully placed curly brackets is processed by a compiler which turns it into “object code”: a binary, executable program that is specific to the architecture of the chip and the machine that will run it. This is an adequate explanation, though it is important to note that the distinction between source code and object code is not firm.[2]
Likewise, the term “freely available to anyone” is flexible. In this particular context it means that Free Software is anonymously downloadable from the internet or available for a small fee on diskette, CD-rom, or any other medium. In a perhaps more trivial sense, freely available also means “not kept secret”—secret software cannot be Free.
Step 3: Copyright your source code. Assuming that you can get your code to work – which is not trivial – the next step in creating Free Software is to copyright it.[3]
In the world of software production there is no more powerful institution than intellectual property, and it is arguably as important to Free Software as it is to proprietary software (Coombe, 1998; Boyle, 1996). Copyrighting the source code is a necessary, but not a sufficient condition for software to be Free.
Step 3a: Pause to consider the allocation of functions among patent, copyright, and trademark for a moment. Patents are generally understood as the protection of the “idea” of a technology; when patenting software, applicants generally avoid submitting the actual complete source code in the patent application, offering instead a representation of the idea which the code expresses.[4] Copyright is more straightforward, and consists of asserting a property right over an original text by simply marking it with a ©. Thus when one copyrights software, one asserts rights to the actual technology, not to a representation of its idea. As with a novel, copyright covers the actual distribution and order of text on the pages – and sometimes extends to something less exact, as in the case of Apple's Graphical User Interface.[5] A different version of that “idea” can be copyrighted in its own right, just as a rewriting of Macbeth can. Trademark, finally, is an even stranger beast, intended to protect the authenticity of a work. Since the nineteen-eighties – when it became customary to add the value of a brand identity to a corporate balance sheet,[6] trademark has ceased to act as a failsafe against “consumer confusion” and has become a tool for the protection of assets.
Step 4: Add some comment code. Comment code is not source code; when a user compiles a program, the compiler compiles the source code and ignores the comment code. Some people—for example, computer science professors teaching undergraduates—insist that comment code is essential because it is the medium by which one explains (for example in English) to another human what the software should accomplish. Comment code can be just as opaque as “real” source code, but very few people would argue that comment code is technically necessary. Source code lacking comment code will still technically work, but everything depends on your definition of technical—the machine may understand it, but the human may not.[7]
In the case of Free Software, however, the particular piece of comment code to be added is anything but non-technical: it is a legally binding contract license which allows the user to do a specified set of things with the source code.[8]
There are many variations of this license, but they all derive from an ur-license written by the Free Software Foundation called the General Public License or GPL . This license says: copy and distribute this code as much as you like, but only under the condition that you re-release anything you make with it or derive from it with the above copyright and contract attached. Software licenses are exceedingly common today. Almost all proprietary software includes a license called an End-User License Agreement (EULA) known in the legal profession as a "click-wrap” or “shrink-wrap" license. These licenses are agreed to merely by installing or using the software. Most EULAs govern what a user can or cannot do with a piece of software. Copying, modification, transfer without license, or installation on more than one machine are often expressly prohibited. The GPL functions the same way, but it grants the user the opposite rights: rights to modify, distribute, change or install on as many machines as needed. GPLs are not signed by using the software, they are only activated when the software is re-distributed (i.e. copied and offered to someone else either freely or for a price).
Your software is now Free. The process is commonly called “copy-lefting” the code[9].
Legal Hacking
It is only the combination of copyright and contract law in this peculiar and clever manner that allows software to be free. Free Software, as it originated with the Free Software Foundation, is explicitly opposed to the use of intellectual property rights to keep software source code from circulating. It therefore uses contracts like the GPL to guarantee that the holders of the intellectual property rights (such as, for instance, The Free Software Foundation, which holds a large number of the copyrights on existing Free Software) enter into an equal agreement with the subsequent user or purchaser of the software. Some explanation of both of these legal regimes will clarify this situation.
On the one hand, intellectual property law organizes one entity’s rights over a particular thing, vis-à-vis any other (potential) person. As is clear from debates in legal theory and practice, intellectual property is not just a conceit built on the supposedly obvious notion of exclusively possessing tangible things. As Horowitz (1992), Sklar (1988), and Commons (1968) variously argue, property in North Atlantic Law is about defining the allocation and relative priority of a “bundle of rights.” The legal structure that organizes the allocation of these rights should not be confused with the evaluation of the objects themselves, which requires particular institutions such as the United States Patent and Trademark Office. All too often, the fact that something is patented or copyrighted is taken to imply that it is useful, non-obvious, accurate, workable, efficient, or even true. While these criteria may be important for the decision to grant a patent, the patent itself only makes the object property; it grants the designated inventor a limited monopoly on the sale of that item. Therefore, information and land, in this sense, cannot be usefully distinguished with respect to tangibility: both are simply useful legal fictions. Though we may be tempted to ask “how did information become property?” the question might be more usefully phrased: “how did property become information?”
On the other hand, contract law governs two separate persons' (individual or corporate) rights to a third thing or person.[10]
Like property law it concerns the allocation of rights, governing conflicting claims by rival individuals to a third thing or person. However, it is activated only in the case of a violation of the terms of the contract. Contracts are definitions of rules for the parties involved, for the duration of an agreement. Only when such rules are violated must some higher authority (the court, for example) step in as adjudicator.
In the case of Free Software, whatever a user decides to do with the source code (compile it into a binary executable, change it, add to it, etc.), the contract guarantees that what she has been given can be used for any purpose, and furthermore, that it will generate further giving, by requiring her and each subsequent user to agree to the same set of requirements. The contract assures that the subsequently modified code cannot be re-appropriated by anyone, even the original copyright holder (unless the contract is ruled invalid). Hence, anyone can take, give, see, learn from, install, use, and modify a copy-lefted piece of software.[11]
As various people have observed (see especially Lessig, 2000), this very clever use of the laws of property and contract makes the legal system a kind of giant operating system; using the system in this manner constitutes a “legal hack.”[12] While it is technically superfluous to the software, the contract – contained within the copyrighted (and hence appropriated) source code of the program – is legally binding. It guarantees that anyone can do anything they want with the software, except change this license. They can use it, not use it, modify it, not modify it, and they can even sell it to a third person, provided that this third person is willing to pay, either for the software or for associated services.
For someone who can't make heads or tails of software, the license may indeed seem superfluous: why would an ordinary user care whether or not the source code is visible or modifiable?[13] The answer is simple: because all software exists in an inherently heterogeneous, evolving environment of other software, hardware, devices, firmware, operating systems, networking protocols, application interfaces, windowing environments, etc. Software needs to be flexible. It must work not only in a particular setting, but it must continue to work as other aspects of the system change. In the case of proprietary software this creates an impossible situation – the user must rely on the corporation that owns the software to change or fix it and, regardless of her skill, is neither allowed nor enabled to do so herself. Relying on a software corporation, as most users know, is at best a very uncertain proposition.
Free Software has grown up with the internet. Most of it is part of a long and rich history of university-funded software and protocols that are open and freely available, primarily because they are created and funded by government and universities. Much of Free Software, such as the operating system GNU/Linux, is explicitly built on work done since the early seventies at DARPA, AT&T Bell Labs, the US Government, MIT, UC Berkeley, Carnegie Mellon, and the National Center for Supercomputing Applications. However, Free Software is not in the “Public Domain” as most scientific data is presumed to be, or as are the basic protocols and standards of the internet and the web.[14] Rather, and contradictory as it may seem, Free Software is protected intellectual property that anyone can use.[15] It is a form of commercially contracted openness-through-privatization and contributes to the creation of a commercially and legally legitimate self-reproducing public domain of owned property -- property anyone can use simply by agreeing to grant that right to any subsequent contractee.
You can view the events related to the BFD sessions.
In the enterprise portal, click.
To view the events related to BFD, you can use the filter option. Click the drop-down arrow next to the Search option and choose to filter either by the Event or by the Message column.
The following events occur whenever a BFD session is established to an Edge or a Gateway neighbor, or when the BFD neighbor is unavailable.
- BFD session established to edge neighbor
- BFD session established to Gateway neighbor
- Edge BFD neighbor unavailable
- Edge Incorrect local IP address in BFD configuration
- Gateway BFD neighbor unavailable
The following image shows some of the BFD events.
| https://docs.vmware.com/en/VMware-SD-WAN/4.0/VMware-SD-WAN-by-VeloCloud-Administration-Guide/GUID-8480D1C0-B149-4283-A4EE-E40176504B6D.html | 2022-06-25T10:53:07 | CC-MAIN-2022-27 | 1656103034930.3 | [array(['images/GUID-9A17588A-4622-4805-87BB-BBA216EBFA13-low.png', None],
dtype=object)
array(['images/GUID-71D6AC17-040B-444A-8428-E772A691BBCD-low.png', None],
dtype=object) ] | docs.vmware.com |
To specify the simulator language, type the following command in the Tcl Console:
set_property SIMULATOR_LANGUAGE <language_option> [current_project]
The following table shows the simulation language properties, language, and simulation model where the property is applied.
Note: Where available, a Behavioral Simulation model always takes precedence over a Structural Simulation netlist. Vivado does not offer a choice of simulation model.
Note: The setting for the project property SIMULATOR_LANGUAGE is used to determine the simulation models delivered when the IP supports both Verilog and VHDL. | https://docs.xilinx.com/r/2021.2-English/ug896-vivado-ip/Tcl-Commands-for-Simulation | 2022-06-25T10:07:59 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.xilinx.com |
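For example, to select VHDL and then confirm the value (the language option here is just an illustration; use the option required by your flow):
set_property SIMULATOR_LANGUAGE VHDL [current_project]
# Query the property to verify the setting
get_property SIMULATOR_LANGUAGE [current_project]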
The Translation Component¶
The Translation component provides tools to internationalize your application.
Installation¶
You can install the component in 2 different ways:
- Install it via Composer (symfony/translation on Packagist);
- Use the official Git repository ().; use Symfony\Component\Translation\MessageSelector; $translator = new Translator('fr_FR', new MessageSelector());
Note
The locale set here is the default locale to use. You can override this locale when translating strings.
- YamlFileLoader - to load catalogs from Yaml files (requires the Yaml component).
New in version 2.1: The IcuDatFileLoader, IcuResFileLoader, IniFileLoader, MoFileLoader, PoFileLoader and QtFileLoader were introduced in Symfony 2.1.
Loading messages can be done by calling addResource(). When a message is not found for the requested locale (for example fr_FR), the translator falls back step by step:
1. First, the translator looks for the translation in the fr_FR locale;
2. If it wasn't found, the translator looks for the translation in the fr locale;
3. If the translation still isn't found, the translator uses the one or more fallback locales set explicitly on the translator.
For (3), the fallback locales can be set by calling setFallbackLocale():
// ...
$translator->setFallbackLocale('en');

Messages can also be organized into domains by passing the domain name as the fourth argument to addResource():

// ...
$translator->addLoader('xliff', new XliffLoader());
$translator->addResource('xliff', 'messages.fr.xliff', 'fr_FR');
$translator->addResource('xliff', 'admin.fr.xliff', 'fr_FR', 'admin');
$translator->addResource(
    'xliff',
    'navigation.fr.xliff',
    'fr_FR',
    'navigation'
);
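A short usage sketch built on the resources above (the message IDs are assumed to exist in the corresponding XLIFF files; they are not taken from the original page):

// Translated from the default "messages" domain.
echo $translator->trans('Hello World');

// Translated from the "admin" domain registered above.
echo $translator->trans('Save changes', array(), 'admin');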
Turntable
Link to turntable
Basic Recipe
Link to basic-recipe
- Adds Turntable Recipe - inputs MUST have a block associated with them. The product state is the block that will be placed after the recipe finishes
ZenScriptCopy
mods.betterwithmods.Turntable.add(IIngredient input, IItemStack productState, IItemStack[] output); mods.betterwithmods.Turntable.add(IIngredient input, IItemStack[] output); //Examples mods.betterwithmods.Turntable.add(<minecraft:grass>, <minecraft:dirt>, [<minecraft:seed>]); mods.betterwithmods.Turntable.add(<minecraft:gravel>, [<minecraft:flint>]);
Removal by input
Link to removal-by-input
- Remove a recipe based on the input ingredient
ZenScriptCopy
mods.betterwithmods.Turntable.remove(IIngredient input);
Remove all
Link to remove-all
- Remove all recipes
ZenScriptCopy
mods.betterwithmods.Turntable.removeAll();
Remove by product
Link to remove-by-product
- Remove a recipe by the productState
ZenScriptCopy
mods.betterwithmods.Turntable.removeRecipe(IItemStack productState);
Builder
Link to builder
The Turntable has a recipe builder that allows more precise control over the recipes. All previous methods are simply short cuts to using the builder.
To create a new Turntable builder, call:
mods.betterwithmods.Turntable.builder()
Turntable methods
- Sets up the inputs and outputs of the recipe
ZenScriptCopy
buildRecipe(IIngredient[] inputs, IItemStack[] outputs)
- Sets the rotations required for the recipe to finish. This defaults to 8.
ZenScriptCopy
setRotations(int rotations)
- Set the block that is placed when the recipe is finished.
ZenScriptCopy
setProductState(IItemStack productState)
- Finalize the recipe and add it to the game
ZenScriptCopy
build()
Example builder usage
Link to example-builder-usage
ZenScriptCopy
mods.betterwithmods.Turntable.builder() .buildRecipe([<minecraft:oak_fence>], [<minecraft:stick>*6]) .build(); | https://docs.blamejared.com/1.12/en/Mods/Modtweaker/BetterWithMods/Turntable | 2022-06-25T10:51:01 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.blamejared.com |
Set-CsPhoneNumberAssignment
This cmdlet will assign a phone number to a user or a resource account (online application instance).
Note: The cmdlet is currently not supported for customers and tenants that are or have been enabled for Regionally Hosted Meetings for Skype for Business Online. These customers should continue to use the Set-CsUser, Set-CsOnlineVoiceUser, Set-CsOnlineApplicationInstance, or Set-CsOnlineVoiceApplicationInstance cmdlets.
Syntax
Set-CsPhoneNumberAssignment -Identity <String> -PhoneNumber <String> -PhoneNumberType <String> [-LocationId <String>] [<CommonParameters>]
Set-CsPhoneNumberAssignment -Identity <String> -EnterpriseVoiceEnabled <Boolean> [<CommonParameters>]
Description
This cmdlet assigns a phone number to a user or resource account. When you assign a phone number the EnterpriseVoiceEnabled flag is automatically set to True.
To remove a phone number from a user or resource account, use the Remove-CsPhoneNumberAssignment cmdlet.
If the cmdlet executes successfully, no result object will be returned. If the cmdlet fails for any reason, a result object will be returned that contains a Code string parameter and a Message string parameter with additional details of the failure.
Note: In Teams PowerShell Module 4.2.1-preview and later we are changing how the cmdlet reports errors. Instead of using a result object, we will be generating an exception in case of an error and we will be appending the exception to the $Error automatic variable. The cmdlet will also now support the -ErrorAction parameter to control the execution after an error has occurred.
Note: Macau region is currently not supported for phone number assignment or Enterprise Voice.
Examples
Example 1
Set-CsPhoneNumberAssignment -Identity [email protected] -PhoneNumber +12065551234 -PhoneNumberType CallingPlan
This example assigns the Microsoft Calling Plan phone number +1 (206) 555-1234 to the user [email protected].
Example 2
$loc=Get-CsOnlineLisLocation -City Vancouver Set-CsPhoneNumberAssignment -Identity [email protected] -PhoneNumber +12065551224 -PhoneNumberType CallingPlan -LocationId $loc.LocationId
This example finds the emergency location defined for the corporate location Vancouver and assigns the Microsoft Calling Plan phone number +1 (206) 555-1224 and location to the user [email protected].
Example 3
Set-CsPhoneNumberAssignment -Identity [email protected] -EnterpriseVoiceEnabled $true
This example sets the EnterpriseVoiceEnabled flag on the user [email protected].
Example 4
Set-CsPhoneNumberAssignment -Identity [email protected] -LocationId 'null' -PhoneNumber +12065551226 -PhoneNumberType OperatorConnect
This example removes the emergency location from the phone number for user [email protected].
Example 5
Set-CsPhoneNumberAssignment -Identity [email protected] -PhoneNumber +14255551225 -PhoneNumberType DirectRouting
This example assigns the Direct Routing phone number +1 (425) 555-1225 to the resource account [email protected].
Example 6
Set-CsPhoneNumberAssignment -Identity [email protected] -PhoneNumber "+14255551000;ext=100" -PhoneNumberType DirectRouting
This example assigns the Direct Routing phone number +1 (425) 555-1000;ext=100 to the user [email protected].
Example 7
$pn=Set-CsPhoneNumberAssignment -Identity [email protected] -PhoneNumber "+14255551000;ext=100" -PhoneNumberType DirectRouting $pn Code Message ---- ------- BadRequest Telephone Number '+14255551000;ext=100' has already been assigned to another user
In this example the assignment cmdlet fails, because the phone number "+14255551000;ext=100" has already been assigned to another user.
Parameters
-EnterpriseVoiceEnabled
Flag indicating if the user or resource account should be EnterpriseVoiceEnabled.
This parameter is mutually exclusive with PhoneNumber.
-Identity
The Identity of the specific user or resource account. Can be specified using the value in the ObjectId, the SipProxyAddress, or the UserPrincipalName attribute of the user or resource account.
-LocationId
The LocationId of the location to assign to the specific user. You can get it using Get-CsOnlineLisLocation.
Removal of location from a phone number is supported for Direct Routing numbers and Operator Connect numbers that are not managed by the Service Desk. If you want to remove the location, use the string value null for LocationId.
-PhoneNumber
The phone number to assign to the user or resource account. Supports E.164 format like +12065551234 and non-E.164 format like 12065551234. The phone number cannot have "tel:" prefixed.
We support Direct Routing numbers with extensions using the formats +1206555000;ext=1234 or 1206555000;ext=1234 assigned to a user, but such phone numbers are not supported to be assigned to a resource account.
Setting a phone number will automatically set EnterpriseVoiceEnabled to True.
-PhoneNumberType
The type of phone number to assign to the user or resource account. The supported values are DirectRouting, CallingPlan, and OperatorConnect. When you acquire a phone number you will typically know which type it is.
Inputs
None
Outputs
System.Object
Notes
The cmdlet is available in Teams PowerShell module 3.0.0 or later.
The cmdlet is only available in commercial and GCC cloud instances.
If a user or resource account has a phone number set in Active Directory on-premises and synched into Microsoft 365, you can't use Set-CsPhoneNumberAssignment to set the phone number. You will have to clear the phone number from the on-premises Active Directory and let that change sync into Microsoft 365 first.
The previous command for assigning phone numbers to users Set-CsUser had the parameter HostedVoiceMail. Setting HostedVoiceMail for Microsoft Teams users is no longer necessary and that is why the parameter is not available on Set-CsPhoneNumberAssignment.
EOL: SentryOne Test reached its end of life date on June 15, 2022. See the Solarwinds End of Life Policy for more information.
Cloud actions are responsible for gathering data by connecting to Salesforce or other sites that provide a REST API. Each action can fulfill a unique role within a test:
- Prepare tests by ensuring the expected results are up to date.
- Clean-up an environment before or after a test execution.
Execute REST Query (grid)
Settings Tab
Headers Tab
Cursor / Paging Settings
Test Results Tab
Select Test to see the results returned based on the configuration in the previous tabs.
Execute REST Query (scalar)
Settings Tab
Headers Tab
Test Results Tab
Execute Salesforce Query Grid
Execute Salesforce Query Grid Editor
Execute Salesforce Query Scalar
Execute Salesforce Query Scalar Editor
Load File From Cloud Provider
Load Grid From Cloud Provider
Settings Tab
File Format
Preview
| https://docs.sentryone.com/help/sentryone-test-cloud-actions | 2022-06-25T10:17:22 | CC-MAIN-2022-27 | 1656103034930.3 | [array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/5c62f82d6e121c717b15e103/n/s1-test-execute-rest-query-grid-settings-tab-20185.png',
'SentryOne Execute Rest Query Grid Settings Version 2018.5 SentryOne Test Execute Rest Query Grid Settings'],
dtype=object)
array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/5c62f951ad121c6f7ec56f0b/n/s1-test-execute-rest-query-grid-headers-tab-20185.png',
'SentryOne Test Execute Rest Query Grid Headers Version 2018.5 SentryOne Test Execute Rest Query Grid Headers'],
dtype=object)
array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/5c62f98dad121ce87bc56f4e/n/s1-test-execute-rest-query-grid-cursor-tab-20185.png',
'SentryOne Test Execute Rest Query Grid Cursor Paging Settings Version 2018.5 SentryOne Test Execute Rest Query Grid Cursor Paging Settings'],
dtype=object)
array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/5c62f9c98e121cc3067784ad/n/s1-test-execute-rest-query-grid-test-results-tab-20185.png',
'SentryOne Test Execute Rest Query Grid Test Results Version 2018.5 SentryOne Test Execute Rest Query Grid Test Results'],
dtype=object)
array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/5c62fb1f6e121c0b7a15e1db/n/s1-test-execute-rest-query-scalar-settings-tab-20185.png',
'SentryOne Test Execute Rest Query Scalar Settings Version 2018.5 SentryOne Test Execute Rest Query Scalar Settings'],
dtype=object)
array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/5c62fbf68e121cc5067784ed/n/s1-test-execute-rest-query-scalar-headers-tab-20185.png',
'SentryOne Test Execute Rest Query Scalar Headers Version 2018.5 SentryOne Test Execute Rest Query Scalar Headers'],
dtype=object)
array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/5c62fc3aec161c9275683d4b/n/s1-test-execute-rest-query-scalar-test-results-tab-20185.png',
'SentryOne Test Execute Rest Query Scalar Test Results Version 2018.5 SentryOne Test Execute Rest Query Scalar Test Results'],
dtype=object)
array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/5c6304d78e121c680f778482/n/s1-test-execute-salesforce-query-grid-properties-20185.png',
'SentryOne Test Execute Salesforce Query Grid Properties Version 2018.5 SentryOne Test Execute Salesforce Query Grid Properties'],
dtype=object)
array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/5c63064a6e121ce07d15e10f/n/s1-test-execute-salesforce-query-scalar-properties-20185.png',
'SentryOne Test Execute Salesforce Query Scalar Properties Version 2018.5 SentryOne Test Execute Salesforce Query Scalar Properties'],
dtype=object)
array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/5c8926acec161c050cd3c496/n/s1-test-load-file-from-cloud-provider-20185.png',
'SentryOne Test Load File from Cloud Provider Element Editor Version 2018.5 SentryOne Test Load File from Cloud Provider Element Editor'],
dtype=object)
array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/5c89265dec161cfc0bd3c48c/n/s1-test-load-grid-from-cloud-provider-20185.png',
'SentryOne Test Load Grid from Cloud Proivder Element Editor Version 2018.5 SentryOne Test Load Grid from Cloud Provider Element Editor'],
dtype=object)
array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/5c630a1aad121c0e09c56eb4/n/s1-test-load-grid-from-cloud-provider-file-format-20185.png',
'SentryOne Test Load Grid from Cloud Provider Element Editor File Format tab Version 2018.5 SentryOne Test Load Grid from Cloud Provider Element Editor File Format tab'],
dtype=object)
array(['http://dyzz9obi78pm5.cloudfront.net/app/image/id/5c8924cd6e121c151d186521/n/s1-test-load-grid-from-cloud-provider-preview-20185.png',
'SentryOne Test Load Grid from Cloud Provider Element Editor Preview Version 2018.5 SentryOne Test Load Grid from Cloud Provider Element Editor Preview'],
dtype=object) ] | docs.sentryone.com |
Hau to do things with Words
From Free Software to Open Source
The story of the Free Software Foundation (FSF) has been told over and over again, and the stories continue to appear.[21] It was founded by Richard Stallman in 1983 and existed in more or less the same small-scale form until about 1991. Between 1991 and 1997, with the explosion in size and access to the internet, its vision and its software started to reach a much larger audience. As a sub-cultural phenomenon, it was generally ignored or under-reported until about 1998, when something significant happened.
In January 1998, Netscape publicly conceded defeat to Microsoft in the “browser wars.”[22]
Netscape was already famous for its startling behavior: giving away Netscape for free and thereby forcing Microsoft to do the same, and holding a media-saturated initial public offering with a business plan that had no clear revenue model. In response to Microsoft's victory in browser space, Netscape decided to up the ante: they would release the source code to Netscape – now pronounced Mozilla – and make it a Free Software browser. The point at which they decided to do this follows closely—or so the anecdotal story goes—on the heels of a marketing meeting where members of management had invited Eric Raymond to come and talk about CatB and the dynamics of Free Software development. Apparently, Raymond convinced them to make the source code available.
Immediately following this, on February 3, 1998, Eric Raymond announced that he would no longer use the name “Free Software,” but instead would call it “Open Source Software.” No one (except the CIA) used the phrase "open source" prior to this date.[23]
Raymond's justification was that in order to make a better case to potential business users it was necessary to avoid using the word "free". Apparently the use of the word Free – which was intended to mean Freedom – had baffled businessmen, and had led venture capitalists to assume that Free Software was not a legitimate aspect of the business world but rather a hobby for nerds or, worse, a hotbed of communist organizing.
At this point, Raymond joined with Bruce Perens, a long time Free Software advocate and member of the volunteer organization that created the distribution of Linux known as Debian, to create the Open Source organization. They took a document written by Perens and called the "Debian Social Contract," and converted it with minor changes into the "Open Source Definition." As the Open Source organization, they issued press releases that featured Linus Torvalds and Eric Raymond promoting the “open source” strategy.
Raymond and friends had by 1998 recognized that the commercial internet depended – uncharacteristically perhaps – on Free Software. System administrators everywhere, even inside Microsoft, were using Free Software like Apache, Perl, Bind, and Linux.
This was largely because the internet was a new, and for most businesses—especially Microsoft—unfamiliar technical space, whereas much of Free Software is actually designed with the internet in mind. Beyond that, most of the people closest to the machines—such as system administrators and networking specialists—agreed that Free Software is faster, more stable, more configurable, more fault tolerant, more extensible, cheaper, easier to get almost anywhere in the world, less buggy, comes with a worldwide network of support, and well, it just works[24]. Internet pioneers like Amazon and Yahoo would never exist without the work of the Free Software community, and it was clear to Raymond that the time was ripe to do something proactive about it.
For Raymond, this meant something very specific. Hackers should strategically repudiate the name “Free Software,” and especially any reference to Stallman’s rhetoric of freedom. To Raymond, Richard Stallman represented not freedom or liberty, but communism. Stallman's insistence, for example, on calling corporate intellectual property protection of software "hoarding" was doing more damage than good in terms of Free Software's acceptance amongst businesses. So, riding the rising tide of third wave capitalism and e-everything pre-millenarian madness, Raymond’s response was to expunge all reference to freedom, altruism, sharing, or any political justification for using free software. Instead, he suggested, hackers should promulgate a hard-nosed, realist, cost-cutting, free-market business case that free software was simply better—and more economically efficient as a result. Capitalism had triumphed, the future was determined, it was all over but the shouting. No politics, just high quality software – that was the deal.
Raymond's intuition was right. That is to say, “Open Source” did prove to be a better name—from the perspective of popularity if nothing else. It highlighted the importance of the source code instead of the issue of Freedom. While Raymond's justifications for the change were somewhat suspect—perhaps tied to the marketing concerns of Netscape—the fact is that the combination of Netscape's announcement, Raymond's article, and the creation of the Open Source organization led to massive, widespread industry interest in Open Source, and eventually, in the spring and summer of 1999, to major media and venture capital attention. Previously little-known companies such as Red Hat, VA Linux, Cygnus, Slackware, and SuSe, that had been providing Free Software support and services to customers suddenly entered media and business consciousness.[25]
Over the last two years several large corporations, like IBM, Oracle, and Apple, have decided to support various Open Source projects.
Raymond and Open Source achieved a kind of marketing revolution—indeed Bruce Perens (who subsequently resigned) now refers to the Open Source organization they founded as "a marketing tool for Free Software." It is perhaps unclear whether Open Source would have been the most recent "next big thing" if it were still called Free Software, but the fact is that the name, the idea, and some of the money that came as a result, has stuck. Something forgotten amidst this marketing maelstrom is that Open Source did actually have a unique, specific, material goal in mind. The Open Source organization and the Open Source Definition decided to change one thing: they would offer a “certification mark,” to protect software that is “authentically” open source.[26]
This is somewhat peculiar, even if it seems eminently reasonable at first glance. As we have seen, legally speaking, the only thing “authentic” about Open Source – the only thing that distinguishes it both legally and technically from proprietary software – is the Free Software license itself. Without it, it is just copyrighted code. Such code could be trademarked, as Windows98™ is because Microsoft, as a legal entity, owns both the copyright and the trademark. Open Source software, on the other hand, may be owned by someone, but the Open Source organization—even as a legal entity—cannot trademark it unless it is that owner. So instead, the Open Source organization can only offer a certification mark that represents their guarantee that software so marked is actually Open Source software—i.e. it contains a Free Software license.
Note that the Free Software Foundation and the Open Source Organization recognize more or less the same list of licenses.[27]
This means, then, that an open source certification mark can have no other purpose than to certify that the software is in fact licensed correctly, which despite Raymond's non-political revolution, is precisely what the Free Software Foundation always insisted on doing—albeit in an ethical voice, not through the power of trademark law. After all, the one necessary condition for software to be Free is the Free Software license. Vague and indeterminate definitions of “Free Software” or “Open Source” are open to endless debate, but its licenses are exact, fragile, legal instruments that achieve a precise political maneuver within a particular institutional milieu. So in practical terms, there is no difference between calling it Free Software and calling it Open Source software—it's all made of the same stuff. Inasmuch as Open Source is simply a better marketing term for the same stuff, it cannot actually function that way in legal terms. Whereas Pepsi and Coke have tremendous amounts of value tied up in protecting their brand, and in owning the trademark, Open Source is not a for-profit corporate entity which produces an excludable product, and therefore cannot exclude anyone from using the name. This means that Microsoft can legally call their software "open source" if they want, but they won't be certified by the Open Source Organization to do so. I belabor this point because it is rare to find a situation where the choice between two equivalent names, and the difference it makes, can be seen so clearly: for those who value the political import of Free Software, the name Open Source mortgages the future of that political goal. For those who value the wide and popular acceptance of Open Source, the name Free Software sabotages the possibility of including the broad range of pragmatic software users, regardless of political "ideology". It is clear in this case, though, that it is not just what the words mean that matters, but it is also what they do that matters—and this is an issue of the techno-legal framework of contemporary society.
There is, however, another part to this story, and another reason why some people use Open Source, and some Free Software. This has less to do with the precision of law, and more to do with several important empirical and historical developments that converge in the 1990s. For Eric Raymond licenses like the GPL, and associated trademark and copyright issues, are a secondary and less important part of the story. The existence of the GPL is a necessary but not a sufficient condition for what Raymond wants the new term “Open Source” to mean: the distributed, cooperative, evolutionary design of high quality software by non-corporate organizations of independent developers.
Open source development, defined thus, is what fascinates Raymond—licenses are important, but they are not the heart of the matter. The heart of the matter is that a bunch of volunteers, with asynchronous access to openly available source code can build a highly complex piece of software in the absence of any explicit corporate management. Raymond proselytizes for a better bug-trap, a new software development model and this is what CatB is all about: make it open source and as a result of the evolutionary distributed dynamics of Open Source, it will simply be better software than if you close it up and let only one company develop it. And so, you don’t need political justifications to convince people—the software will sell itself.
Because Raymond had been a fervent supporter of, and long-time participant in, Free Software, and because he is also a committed amateur anthropologist, Raymond has written a very widely read, occasionally convincing set of explanations for how, and even why, it works so well. It is worth revisiting the details of these two articles: the first (CatB) sets out certain questions about the dynamics of cooperation, exchange, and the technical side of software development. The second (HtN) contains Raymond's anthropological explanation of property customs and reputation allocation for the Hacker Tribe that he calls a "gift culture".
The Cathedral and The Bazaar
In “The Cathedral and the Bazaar” (Raymond, 1997), Raymond proposes an experiment[28] to prove that the evolutionary dynamics of open source development (the Bazaar) are more efficient than those of the in-house hierarchical software development model (the Cathedral). The article reviews the development of the Linux kernel started by Linus Torvalds, and proposes the fetchmail mail-transfer agent maintained by Raymond as a confirmation of the dynamics of Open Source. The goal of his paper is to identify the difference between the closed software firm model, and an open, distributed, collaborative model. The difference he identifies is the role of debugging. In most cases of software development, the code is designed, the data structures and flow of the program specified, and then a version built, usually by a small number of people. The next step is debugging and in the Cathedral model, claims Raymond, the same small handful of people are assumed to be the best bug-finders, and therefore only they get to see that code. But the way Linux was developed, the code was always open and anyone could look for bugs at any time during its development:
"Linus was [...]" (Raymond, 1997, Section 4).
Note that Linus's Law is not a law of how cooperation works; it is only a description of a particular software development technique. That is, it does not focus on why people contribute to such projects; rather, it lays out what should be done to achieve this contribution. Two related aspects of the model are important here: 1) users are developers, and 2) everyone deserves some credit for helping. According to Raymond, if these two conditions are satisfied, the software will not succumb to deep, insidious design flaws but rather become steadily more robust, fault tolerant, flexible, and useful to a wide range of people.
Recognizing that users are developers can be a liberating Gestalt switch: the fact that there are now only more and less skilled users leads to a remarkably egalitarian and socially conscious form of design. If developers are users, it is no longer possible to despise the user, or to create software that “hides complexity” in such a way that the user cannot subsequently recover it. Indeed, if users are developers, precisely the opposite is true: it must be assumed that the user's desires are inscrutable, and that she may therefore need to get under the hood herself in order to satisfy them. Likewise, distributing the credit for the software as widely as possible, and taking pains to make users feel like important co-developers by crediting them in the software and including them in the mailing-lists and development discussions, leads to a remarkably cooperative development system.
Taken together these factors form a normative management theory about how to design software – and a very good one at that. This theory, however, does not depend on the existence of the Free Software licenses; it would be entirely possible for this kind of system to operate in a large proprietary software firm, where everyone can see the code and everyone comes to the development meetings—implementing such a management theory on the internet is just a question of scale.[29]
Within any given software development firm, the boundaries of the organization would be determined by intellectual property rights—e.g. who can see the source code and who cannot, including such instruments as non-disclosure agreements and limited licensing agreements—and by the information infrastructure—e.g. open or closed mailing lists, access to the code-tree and rights or access to fix bugs or add features. On the internet, all of these issues are organizationally wide open: Free Software licenses allow everyone (anyone covered by contract law) to be a developer, mailing lists are almost always open (though this varies from project to project and depends on the stage of the project—from planning to debugging), and the code-tree is almost always managed in such a way that anyone can submit changes, bug-fixes, ideas. If those ideas are not accepted (say, e.g. Linus rejects your wacky idea for implementing garbage collection in the kernel), then you can take the code and make your own project, and try to attract your own co-developers. There is no management to stop you, and there are no exclusive intellectual property rights to prevent you from doing so.
As Raymond also notes, the other condition that was necessary for this kind of development to occur—alongside the Free Software licenses – was the explosion of access to the internet that occurred around 1993 as a result of the phenomenal growth of commercial Internet Service Providers which extended internet access outside of government and academia, to businesses and individuals: this expansion of the internet made Linux possible—as a distinctly new species of Free Software.
The new technical communication situation of the commercial internet—which has its own history and its own set of necessary conditions—produced a radical change in the [ enormously increased ] number of potential contributors to a project like Linux, and as a result, actually increased the eagerness of users to participate in the improvement of software to which they would then have ready, unconstrained access. Users of GNU software (GNU is a recursive acronym: "GNU’s Not UNIX") and other Free Software projects that existed prior to 1993 had been largely academic. The net, such as it was, was neither big nor fast enough to support the collaborative development of a project as complex as an operating system—though this didn't stop the FSF from trying. The FSF was initially founded with the goal of creating an operating system called GNU which would use a "kernel" called "HURD". Hurd has never been successfully completed and what is now commonly referred to as "Linux" began only as a separate kernel, created by Linus Torvalds. Many of the other components of the GNU operating system—such as a compiler, debugger, editor and various tools—were already complete and thence became part of the GNU/Linux Operating System when Linus Torvalds offered the kernel. The resulting explosion of development and integration only took off after 1993.
Raymond's claim in CatB is that the creation of GNU/Linux along with several other, but smaller projects, is an innovation in the process of software development—a better bug-trap. And innovation being what it is to business, such a technique should interest them. Though he doesn't say it, the implication of CatB for the Software industry is that the Open Source Development model is actually a way of using the internet as a system for the efficient allocation of highly skilled labor. The emergence of Linux signals not only a profound challenge to intellectual property, but perhaps more significantly to existing systems of management, hiring, and human resources. If software developers can pick their projects, fix their bugs, and write their features without having to go through management, then the role of management shifts to the person of the project maintainer.
But there is one problem with this understanding of Open Source: almost no one is actually paid to write Free Software. It is only in a metaphorical sense that one can currently call the work on Free Software development “employment,” since the majority of developer-users are doing it alongside or outside of their official, paying corporate jobs—not as officially employed Free Software programmers.[30]
As Open Source catches on in the business world—and it has to a rather phenomenal extent—then the question of the precise mode of remunerating people for their labor becomes a conundrum: why do such highly skilled, employable people create software that they give away without any kind of monetary remuneration? With respect to developers' legal rights, and the actual accountability of software firms to individuals, Open Source could look more like profound exploitation than massive, voluntary, distributed software development.
Raymond is of course aware of this problem and attempts to answer it without any reference to the actually existing world of legal regimes—that is, as an anthropologist for whom informal conventions, and the actual behavior of individuals is more important than the normative sphere of national and international legal structures. Remuneration in this sense, might not simply mean cash-payment, but something less calculated, something he prefers to call a "gift culture" in which informal taboos actually structure the behavior of people more effectively than national or international law.
The following section looks briefly at his explanation. As an anthropological investigation of customary behavior, it is extremely useful; but by strategically ignoring the role of actually existing law—which it should not be forgotten is an essential component of Free Software itself—Raymond misses the opportunity to elaborate the overlap and relationship between the observed customary behavior of hackers and the legal and economic obligation that enfolds them.
On the analogy with anthropology, Raymond sees Hackers like anthropologists once treated the Cuna, or the Trobrianders—as an isolated, functioning societal unit with easily identifiable borders, almost fully disconnected from any legal, economic, or historical realities that structure the contemporary global orders of society. In the case of anthropology, such an assumption has proved impossible to sustain—and it should be even more so in the case of "The Hacker Tribe" which is so firmly and obviously at the center of that legal, technical, and social seat of power. Both the Trobrianders and Hackers exist within overlapping systems of formal legitimate legal systems and informal conventional systems of behavioral regulation. The task that remains, and that Marcel Mauss initiated under the sign of
The Gift, is to develop an understanding of how the informal taboos and conventions of a given network relate to the formal legal technical structures of property, contract and exchange: it is not a question of communities or societies which have one, and not the other, but a question of how they function together to form an operating system that its inhabitants can and do manipulate.
Homesteading the Noosphere
“The ‘utility function’ Linux hackers are maximizing is not classically economic, but is the intangible of their own ego satisfaction and reputation among other hackers ...... We may view Linus’s method as a way to create an efficient market in ‘egoboo’ – to connect the selfishness of individual hackers as firmly as possible to difficult ends that can only be achieved by sustained cooperation.” (Raymond, 1997: Section 10).
This quotation from CatB is oft-used in discussions of how voluntary hacker and online cultures function; it suggests that Linus' method—the constructive channeling of the work of many volunteers into a single well-defined goal[31]—is analogous to a spontaneously forming, self-correcting market or ecology.
For Raymond, all possible spontaneous systems are the same: economic markets, ecologies, adaptive systems in biology, the "Delphi" effect—there are references throughout CatB. The minimum characteristics are that all such "self-correcting" evolving systems function the same way: an identifiable and self-directed agent maximizes its utility (or value, or X) through self-interested choices (those choices that lead to a net increase in X); a sufficiently large number of agents doing this in the same system leads not to chaos but to complex, differentiated organization capable of sustainable equilibrium. X in Raymond's case is "egoboo" which he makes synonymous with both "ego satisfaction" and reputation. Raymond does not address what the analogue of equilibrium might be.
The simplification to a quasi-algorithmic description is fine—such mechanisms are entirely common. However, his description of its actual mechanics and context—which might set it off from other market mechanisms or other biological systems—lacks in detail. What should be interesting for such a description are the qualities of and the rules for egoboo maximization; the structure, constraints (either conventional or legal) of the "market" – which requires, as Raymond broadly puts it, “a medium at least as good as the Internet” to function; the necessary qualities of the key figure (Linus, in this case) who we assume must serve some function (he has a "method" after all, so it can't be all madness) in organizing the agents in the market; or even the meaning of stability or equilibrium in this example. For Raymond, there appear to be only two systems in the world, complex adaptive evolutionary "bazaars", and hierarchical, authoritarian corporate "cathedrals". His strategic optimism of course favors the former: proprietary software “cannot win an evolutionary arms race with open-source communities that put orders of magnitude more skilled time into a problem” (Raymond 1997, Section 10).
Raymond specifies some of these qualities in his second paper, “Homesteading the Noosphere” (HtN). It is in HtN that Raymond is at his most anthropological, both in terms of his observations (which are extraordinarily perceptive and valuable) and in his attempts at theorization (which are not). His principal problem concerns the nature of property and space, signaled in the title of the piece: the Noosphere is the "space of all ideas" and hackers are the ones who are homesteading it.
Though Raymond doesn't make it explicit, it is clear that through the metaphor and mantle of anthropology two assumptions are rendered possible: first is that the borders of the Hacker tribes' space is identifiable and separate from that of contemporary modern society (i.e. the "real world"); the second is that Hackers, as an identifiable and coherent group, have a different, legitimate and perhaps incompatible notion of what property is. Anthropology is not innocent in rendering possible such assumptions. Most anthropologists continue to treat indigenous peoples as identifiable and separate based not on "real world" distinctions (e.g. reservations or "homelands" created by present day sovereign nations) but on some combination of kinship, language, culture, and biology. Likewise the debates over the incompatibility of property regimes (see Brown 1998) strengthen the assumption that such separateness either does or should exist and should therefore be equally legitimate. In the case of Raymond's anthropology, however, neither of these assumptions hold, but his use of them does in fact reveal very significant details about the behavior of software developers who contribute to software projects.
What I insist is worth paying attention to in Raymond's explanation are the particularities of the relationship between material and immaterial ideas, between writing as a representation of ideas, and writing as a thing in itself—words that do things. Raymond makes use of four spaces in his article: Noosphere, ergosphere, cyberspace and "the real world". Raymond explains the distinctions thus:
The 'noosphere'... is the territory of ideas, the space of all thoughts. What we see implied in Hacker ownership customs is a Lockean theory of property rights in one subset of the noosphere, the space of all programs... [Faré Rideau ] asserts that what hackers own is
programming projects—intensional focus points of material labor (development, service, etc.)... He therefore asserts that the space spanned by hacker projects is not the noosphere but a sort of dual of it, the space of noosphere-exploring program projects [ergosphere]. …And the distinction between noosphere and ergosphere is only of practical importance if one wishes to assert that ideas cannot be owned, but their instantiations as projects can.... (Raymond 1998, Section 5)
It is clear from this quotation that the Noosphere, as Raymond understands it, is not like land. Despite his dependence on Locke's philosophy, which was eminently concerned with land (especially certain stretches of good tobacco-growing land in the colonies),[32] Raymond's Noosphere consists of non-excludable, non-material ideas, which can take the particular form of a programming project (an intangible, but perhaps not immaterial thing). Therefore a hacker can own this idea in a sense similar to the way a scientist might own a research agenda,[33] i.e. it is his only to the extent he can convince others near it not to trespass, either by force or by charm.
What is unclear here is how precisely the boundaries of an idea are drawn. Some version of expertise that is shared amongst hackers needs to be already in place: a hacker needs to know how to find, understand, and evaluate what other hackers are working on. There is no equivalent to a fence, a stone wall or a no trespassing sign: rather a hacker is expected, by learned and evolving informal conventional means, to know who owns what.
Of course, the interesting aspect of this proposition is that in the world of hackers and developers, such knowledge of who owns what is relatively robust, even if they cannot articulate exactly how they know who owns what. Meanwhile, the actual code that people produce, share, download, archive, compile and run, is in fact explicitly (i.e. as part of the code itself) identified by a copyright, a name or list of names and occasionally an email or address—and therefore owned in a legal and non-tacit sense. The copyright in the "real-world" represents ownership of the code, even if the idea, in informal conventional terms, is understood to be owned by someone else.
Compare this with the division that exists in the "real-world" system of intellectual property. Patents represent ideas, copyrights cover specific materially existing chunks of text. Both must take an explicit written form, though the former is presumed to represent the idea, the latter to instantiate it: holding a patent means owning an idea, holding the copyright means owning a particular instantiation of the idea—or simply some words on a page. In Raymond's Noosphere the mode of ownership of ideas (i.e. the patent) is reputation. Reputation is the proxy for the idea in the same way that the patent specification is the proxy for the idea. But the mode of ownership of the instantiation is still copyright: Free software is in fact protected by copyright law, not by reputation or any other non-material patent-like stuff. This creates an opposition between two spaces: the Noosphere, an imagined communal space where reputation is recognizable but not apprehensible and cyberspace which is where both the written programs and the evidence of reputation (the markers, the discussion, perhaps the sense of that reputation) reside.
Hacker taboos.
Keeping this set of comparisons in mind, it is illuminating to look at the three basic informal taboos that Raymond has identified in the Hacker Tribe.[34] His point, in HtN, is that these informal taboos are in fact in "contradiction" with the explicit licensing practices. The contradiction, however, depends on whether or not the realm of informal conventional reputation is seen as part of the same space as formal intellectual property rights. Since Raymond strategically denies the importance of mere mundane legal rights, and substitutes his speculative "gift culture", these differences appear as contradictions. The three taboos are:
1) There is strong social pressure against forking a project. It does not happen except under plea of dire necessity, with much public self-justification, and with a renaming.
2) Distributing changes to a project without the cooperation of the moderators is frowned upon, except in special cases like essentially trivial porting fixes.
3) Removing a person's name from a project history, credits or maintainer list is absolutely not done without the person's explicit consent. (Raymond 1998, Section 3).
In any given programming project, there is peer pressure against taking the code of the project, and starting a new project. The owner of that original idea accrues reputation in various ways: perhaps through the creation of the initial code-base for the program, through the continued maintenance of the project, or through the management of contributions, changes, or suggestions. Forking a project is discouraged because it dilutes the identity of the project, and could potentially divert reputation from the "owner" of the project. However, forking is precisely what the licenses guarantee
must be possible in order for software to be copy-lefted.
Compare with the patent. The holder of the patent has absolute rights over its use or reuse. Using a patented idea requires licensing it from the owner. In the Free Software world, however, such a condition has been dispensed with via the hack of copyleft. No owner of a piece of software can prevent you from reusing it—but neither can its reuse be prevented from re-incorporation in the original project.[35]
Forking a software project amounts to the creation of new, equally free, but potentially incompatible versions of the same software.[36] More importantly, it diminishes the brand identity of a single project by giving it competition. What Raymond is suggesting with his property analogy is that reputation functions similarly to patent: it grants a limited monopoly, and discourages competition in order to channel reputation—the incentive in Raymond's world—to the owner of the idea. Raymond's free market in ideas is in fact regulated by informal conventions, in the same way the real market in intellectual property is regulated by IP law.
The second taboo is essentially the same as the issue of forking, but serves to regulate the behavior of people such that some entity (either a group—the Apache group—or an individual—Linus Torvalds) maintains control over managerial decisions. Authority must emerge somewhere, and it does so through the existence of informal taboos against the anarchic distribution of changes to software. Here the comparison with patent and copyright is apposite and overlapping: the role of patent and copyright is not only to exclude competition from the market for a limited time, but to recognize rights to decide who can or cannot use the intellectual products and for which purposes.
Again, patching software (e.g. the Linux kernel) and releasing the patched version publicly is exactly what the Free Software Licenses are designed to allow. The existence of this convention implies that, for example, the subsequent kernel will not be named “Linux” until Linus or someone else in the hierarchy approves it and incorporates the new code. In fact, interestingly enough, Linus Torvalds holds the trademark to the Linux name—suggesting that even deep within the Noosphere, the regular old real world intellectual property system is functioning to protect the reputation of individuals.
The third taboo is also interesting from a comparative perspective. It suggests that reputation actually depends on its explicit recorded form (what I have called in a separate paper greputation[37]). If you are not a project maintainer, but just an aspiring bug-tracker, then your rise in the ranks is dependent on the explicit appearance of your name in the record. In patent and copyright law, the entire range of contributors is rarely given credit (patents more so than copyrights) and the purpose and goal of making these products into property is to make them alienable: to provide the ability to erase one name and replace it with another, given an appropriate transfer of some proxy for value (usually: enough money). For Raymond, contributor lists are an informal redistributive mechanism: they portion out some of the reputation that accrues to, say Linus Torvalds, and distribute it to people who have written device drivers, or modules or other less glamorous additions to the Linux kernel. Again, it results in a "contradiction" because in Free Software licenses, the only name that legally matters is that of the original copyright holder—and this constitutes a market failure, so to speak, that requires a redistributive mechanism which hackers have developed to correct for it.
I should be clear that this comparison with real world intellectual property does not exist in Raymond's explanation—he in fact refers to the very specific legal issues as "hacker ideology" and reduces actually existing license issues to "varieties of hacker ideology (Raymond 1998, Section 2)." This strategic denial of law and politics is necessary in order to observe the Hacker tribe as it exists in "nature"—the pure realm of the Noosphere where: “Lockean property customs are a means of
maximizing reputation incentives; of ensuring that peer credit goes where it is due and does not go where it is not due.” (Raymond, 1998).
This formulation, which is clearly intended to have the force of scientific law, is incorrect for a couple of reasons, but nonetheless points to some of the more interesting implications of Raymond's observations.
First, the property customs he identifies could more accurately be described as a mechanism to minimize disputes and adequately credit co-developers in a context outside of any given firm. Since the bargain of Open Source is that the internet is its medium, and the internet is not a corporate form, dispute resolution needs to take some other form—informal conventions governing idea ownership are perhaps one successful way.[38]
Second, these customs do not "maximize reputation incentives." Rather, they are an expression of one optimized design for a structure that would maximize net gain in reputation. One way this is done, outside of deliberate human thought, is through the social enforcement and gradual pragmatic evolution of conventions such as those Raymond identifies.
39
Furthermore it is not the incentive that governs where reputation goes, but rather the mechanism of the property conventions themselves. The incentive, such as it is, can only be the expectation of what reputation will bring: for example, the power to decide over and maintain a project and to resolve disputes about it. The incentive could also include personal satisfaction, reputation spillover into the "real economy", or simply any subjectively valuable return on the investment of contributing. Everything hangs on what is understood by the term "incentive" here.
One cannot create an “incentive structure” in the sense that economists use that term, without a measurable return. And as reputation remains un-measurable, it is not a suitable incentive for such a structure—it remains a metaphor. Indeed, Raymond has identified conventions, which from his extensive experience, actually exist—but there is no evidence that these conventions actually concern reputation, which is an extrapolation on Raymond's part.
However, reputation could be an incentive in a less exact, metaphorical or less material sense: as that return which people expect to receive based on their knowledge of the past and their understanding of the structure within which they operate. In this sense, it is part of a structure of reciprocity and obligation whose material substrate is not simply money, but language. Or put inversely, the function of money—as a one dimensional measure of trust—can also be served by language. Words do matter then, because they are the medium of reputation, and hence of trust in this system—this community—of individuals who give free software to each other and pay in compliments.
How to pay with words.
In Raymond's version reputation—unlike money—has differentiated and specific qualities. Whereas money has a single dimension, reputation might indicate any of a range of things: skill, elegance, cleverness, usability, sophistication, training, experience—what is sometimes wittily summarized as "code-Fu". Accordingly, in Raymond's Noosphere, reputation is a better way of both incenting and crediting the authors of code than simply paying them.
But reputation is hard to understand. It is a subtle and incomplete calculus that allows a reputation to form. Raymond likes to insist that good code is obvious because "it works," but this simply passes over the details of how a reputation is formed—much less what it means for code to work.
I would suggest something very mundane here. The way in which reputation is formed—the “allocation mechanism” of reputation—is only the speech of the participants, i.e. the things they say to each other to bring each other into line, on line. The lurking romantic author in the world of Hacker software creation may eventually come out to insist that only geniuses—divine beings—write good code. For the rest of us, however, the recognition of reputation is learned, and is a function of trusting what people who other people trust say about themselves and others—it is a hermeneutic and practical experience of reputation.[40]
Raymond observes the behavior of hackers, captures the practical essence of this activity, and translates it into rules: don't fork projects, don't distribute rogue patches, don't erase people's names.
In fact, Raymond himself has done more to identify and make explicit these evolving conventions than anyone else; they are nowhere articulated more explicitly than in his published work. It should not be surprising then, that at the end of HtN, Raymond makes a very interesting normative suggestion:
[Hackers should] develop written codes of good practice for resolving the various disputes that can arise in connection with open-source projects, and a tradition of arbitration in which senior members of the community may be asked to mediate disputes... The analysis (Raymond, 1998, section 20).
Raymond has effectively proposed what he has already identified as functioning: informal codes adopted by people to manage the direction and control of projects. His identification of the existing codes as "implicit" suggests that people act this way without ever saying anything to each other—but such an assumption is unsupportable. Raymond's specific formulation of them as taboos, in classic anthropological fashion, makes them into regulating rules which the participants themselves rarely recognize (Raymond 1998, Section 2). Raymond suggests that presenting these rules first, as legislative and normative conditions rather than accepting their existence as normative, but informal conventions, will lead to a more robust software development system.
Nonetheless, the suggestion that these rules might govern the development of software projects can only be made under the influence of a fantasy that the Noosphere – the gift culture of hackers – is radically separate from the rest of the “real world.” The only response awaiting such a fantasy is a rude awakening with respect to who exactly the “senior members of this community” are: they are not hackers. The people who get to decide – on anything – are the people who
own the software; that is, the people who own it in a legal and not in any analogical or metaphorical sense. Right now, the only thing protecting the informal conventions of project management from outside interference is in fact the legal hack of the Free Software licenses. And this hack is effective only so long as the contracts are deemed to be legal and fair—until they are tested in court or in legislatures, and only so long as they are enforced, by whatever means.
Raymond, in fact, knows this, and despite his strategic denial that Open Source software is not political, he is also willing to admit that the "right to fork" is like the right to strike or the right to bear arms—rights that constitute the structure within which freedom is possible. Both the Free Software Licenses and the Open Source Definition are intended to ensure the existence of a privatized public domain against the interests of intellectual property-appropriating corporations. This can only be political because it concerns the legal constraints on how business is organized and how the US Constitution should be interpreted. The effect of using Free Software—regardless of any stated goals—is the political transformation of how business is done and the transformation of the laws which govern commercial activity of software production –
in order to return to software developers the right to make binding decisions about what they create; to take that right away from the patent and copyright owners (e.g. management or shareholders) and give it to the people who make and use the software—and guarantee them a structural right to maintain this control. In the end, they are giving each other not software, or value, but rights. Rights, in the particular form of contracts, that guarantee nothing more
than the continued circulation of these rights. | https://lib.convdocs.org/docs/index-269386.html?page=4 | 2022-06-25T10:35:28 | CC-MAIN-2022-27 | 1656103034930.3 | [] | lib.convdocs.org |
SchNetPack documentation¶
SchNetPack is a toolbox for the development and application of deep neural networks to the prediction of potential energy surfaces and other quantum-chemical properties of molecules and materials. It contains basic building blocks of atomistic neural networks, manages their training and provides simple access to common benchmark datasets. This allows for an easy implementation and evaluation of new models.
Contents¶
Getting Started
Tutorials | https://schnetpack.readthedocs.io/en/stable/index.html | 2022-06-25T11:56:19 | CC-MAIN-2022-27 | 1656103034930.3 | [] | schnetpack.readthedocs.io |
Documentary organization Docs In Progress, based in downtown Silver Spring Maryland, invites you to join an evening of inspiration and insights from academy award-nominated documentary filmmakers.
Tuesday, September 29, 2015. 7-10pm. GALA Hispanic Theatre in Washington DC.
If you are in the Washington DC Metro area or want a great excuse to come down, Docs In Progress will be bringing director Marshall Curry and editor Matthew Hamachek to Washington DC for a special not-to-be-missed evening on Tuesday, September 29 which combines a master class on the director/editor relationship; a screening of If a Tree Falls: Story of the Earth Liberation Front which won Best Documentary Editing at the Sundance Film Festival and was nominated for an Academy Award; and a chance to support the emerging documentary filmmakers Docs In Progress supports with its programs and services.
About Docs In Progress
Docs In Progress gives individuals the tools to tell stories through documentary film to educate, inspire, and transform the way people view their world.
Creating community through documentary film aspiring and experienced documentary filmmakers and the broader community. | https://uniondocs.org/2015-9-29-docs-in-progress/ | 2022-06-25T11:04:34 | CC-MAIN-2022-27 | 1656103034930.3 | [array(['http://www.uniondocs.org/wp-content/uploads/2015/09/news-docs-in-progress-300x113.jpg',
'news docs in progress'], dtype=object) ] | uniondocs.org |
Working with TextBoxes
In Aspose.Words, the TextBox class is used to specify how text is displayed inside a shape. It provides a public property named parent to get the parent shape for the text box, which allows customers to find the linked Shape from a linked TextBox.
Creating a Link
The TextBox class provides the is_valid_link_target method to check whether the TextBox can be linked to the target TextBox, as shown in the sketch below:
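The original page embeds a runnable snippet here; a minimal sketch of what such a check might look like follows. The names used (insert_shape, text_box, is_valid_link_target, next) are taken from the Aspose.Words object model, but treat the exact calls as assumptions and verify them against the Python API reference.

    import aspose.words as aw

    doc = aw.Document()
    builder = aw.DocumentBuilder(doc)

    # Two text box shapes that we would like to chain together.
    shape1 = builder.insert_shape(aw.drawing.ShapeType.TEXT_BOX, 100, 100)
    shape2 = builder.insert_shape(aw.drawing.ShapeType.TEXT_BOX, 100, 100)

    text_box1 = shape1.text_box
    text_box2 = shape2.text_box

    # Only create the forward link if the API reports the target as valid.
    if text_box1.is_valid_link_target(text_box2):
        text_box1.next = text_box2

    doc.save("linked_textboxes.docx")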
Check TextBox Sequence
The following shows how to check if Shape.text_box is the head, the tail, or in the middle of the sequence:
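The embedded snippet is not included in this extract; a rough sketch of the idea, using the next and previous properties (assumed from the Aspose.Words object model), might look like this:

    def position_in_sequence(shape):
        # Classify a text box by whether it has linked neighbours.
        tb = shape.text_box
        if tb.previous is None and tb.next is not None:
            return "head"
        if tb.previous is not None and tb.next is not None:
            return "middle"
        if tb.previous is not None and tb.next is None:
            return "tail"
        return "not part of a sequence"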
Breaking a Link
The following code snippet shows how to break a link for a Shape.text_box: | https://docs.aspose.com/words/python-net/working-with-textboxes/ | 2022-06-25T11:30:32 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.aspose.com |
Creating a custom build helper for Conan¶
If Conan doesn’t have a build helper for the build tool you are using, you can create a custom build helper with the Python requires. You can create a package defining the build helper for that build tool and reuse it later in the consumers importing the build helper as a Python requires.
As you probably know, build helpers are wrappers of the build tool that help with the conversion of the Conan settings to the build tool’s ones. They assist users with the compilation of libraries and applications in the build() method of a recipe.
As an example, we are going to create a minimal implementation of a build helper for the Waf build system. First, we need to create a recipe for the python_requires that will export waf_environment.py, where all the implementation of the build helper is.
from conans import ConanFile
from waf_environment import WafBuildEnvironment


class PythonRequires(ConanFile):
    name = "waf-build-helper"
    version = "0.1"
    exports = "waf_environment.py"
As we said, the build helper is responsible for translating Conan settings to something that the build tool understands. That can be passing arguments through the command line when invoking the tool or creating files that will take as an input. In this case, the build helper for Waf will create one file named waf_toolchain.py that will contain linker and compiler flags based on the Conan settings.
To pass that information to Waf in the file, you have to modify its configuration environment through the conf.env variable, setting all the relevant flags. We will also define a configure and a build method. Let's see how the most important parts of the waf_environment.py file that defines the build helper could look. In this case, for simplification, the build helper will only add flags depending on the Conan setting value for build_type.
import os

from conans.tools import save


class WafBuildEnvironment(object):
    def __init__(self, conanfile):
        self._conanfile = conanfile
        self._settings = self._conanfile.settings

    def build_type_flags(self, settings):
        # Derive the compiler and build type from the settings passed in
        # (the flattened original referenced these values without defining them).
        compiler = str(settings.compiler)
        build_type = str(settings.build_type)
        if "Visual Studio" in compiler:
            if build_type == "Debug":
                return ['/Zi', '/FS']
            elif build_type == "Release":
                return ['/O2']
        else:
            if build_type == "Debug":
                return ['-g']
            elif build_type == "Release":
                return ['-O3']
        return []

    def _toolchain_content(self):
        sections = []
        sections.append("def configure(conf):")
        sections.append("    conf.env.CXXFLAGS = conf.env.CXXFLAGS or []")
        _build_type_flags = self.build_type_flags(self._settings)
        sections.append("    conf.env.CXXFLAGS.extend({})".format(_build_type_flags))
        return "\n".join(sections)

    def _save_toolchain_file(self):
        filename = "waf_conan_toolchain.py"
        content = self._toolchain_content()
        output_path = self._conanfile.build_folder
        save(os.path.join(output_path, filename), content)

    def configure(self, args=None):
        self._save_toolchain_file()
        args = args or []
        command = "waf configure " + " ".join(arg for arg in args)
        self._conanfile.run(command)

    def build(self, args=None):
        args = args or []
        command = "waf build " + " ".join(arg for arg in args)
        self._conanfile.run(command)
Now you can export your custom build helper to the local cache, or upload to a remote:
$ conan export .
After exporting this package to the local cache you can use this custom build helper to compile our packages using the Waf build system. Just add the necessary configuration files for Waf and import the python_requires. The conanfile.py of that package could look similar to this:
from conans import ConanFile


class TestWafConan(ConanFile):
    python_requires = "waf-build-helper/0.1"
    settings = "os", "compiler", "build_type", "arch"
    name = "waf-consumer"
    generators = "Waf"
    requires = "mylib-waf/1.0"
    build_requires = "WafGen/0.1", "waf/2.0.19"
    exports_sources = "wscript", "main.cpp"

    def build(self):
        waf = self.python_requires["waf-build-helper"].module.WafBuildEnvironment(self)
        waf.configure()
        waf.build()
As you can see in the conanfile.py we also are requiring the build tool and a generator for that build tool. If you want more detailed information on how to integrate your own build system in Conan, please check this blog-post about that topic. | https://docs.conan.io/en/1.32/extending/custom_build_helper.html | 2022-06-25T11:33:24 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.conan.io |
Syntax
ntinfo -c
ntinfo -l [ -n <number> ] [ -r]
ntinfo -x -i <number> [ -a <number> ] [ -e <time> ] [ -n <number> ] [ -o <name> ] [ -t <name> ]
ntinfo -i <number> [ -e <time> ] [ -n <number> ] [ -o <name> ]
Tool Options
Introduction
The tool serves two purposes. The first is to collect system information, primarily about the utilization of the host buffer system. The second is to export the collected information as comma-separated output, or as a textual report.
The tool operates in one of four modes: collect, list, export, or report mode. The mode depends on the options used, see below:
OPTION             | MODE
-------------------+------------
-c or --collect    | collect
-l or --list       | list
-x or --export     | export
otherwise          | report
-------------------+------------
Operation Modes
Collect Mode
The tool assigns a unique identifier, called the collection identifier or just collection ID, to each collection. When starting in collect mode, the tool prints the current identifier in the terminal window. You can use the list mode to list available collections, including any ongoing collection. You must use the -i/--id option to specify the collection identifier when you want to create a report for a collection, or export its data.
When in collect mode the tool uses an exclusive lock to ensure that only one instance of the tool collects information. When attempting to run multiple instances of ntinfo, the tool hangs until it can get the collect lock. Although only a single instance may collect information, it is possible to use multiple instances of ntinto in the list, export, or report mode concurrently, and while collecting of information is active.
The tool ensures that the file used to store the collected information does not exceed a configurable limit called the quota. The default quota is one gigabytes, and the maximum quota is ten gigabytes. Upon reaching the limit, the tool deletes old values, starting with the oldest values. It is not necessary to delete information manually.
Collected data can be exported as comma-separated output or used for textual reports.
When started in collect mode, the tool runs until it is interrupted with ctrl-c, or the tool fails to communicate with ntservice.
List Mode
List mode provides a list of the available collections. Each collection is identified by a unique number called the collection identifier.
The -i/--id option is used to designate the collection from which the tool uses data in export and report mode.
The -r/--reverse option reverses the order of items in list and export mode.
Export Mode
When in export mode the tool prints the collected information as comma-separated output.
The -e/--endtime option specifies the time to use as the latest time stamp for a collection. When in export mode, the tool shows the latest information first and works its way backwards (in time), unless the -r/--reverse option is used.
The -n/--number option limits the number of data items, which is useful when there are many data entries in a collection.
The -o/--output option works in report and export mode. The option denotes a file name to which the tool writes its output.
When using the -a/--aggregate option in export mode, the tool supports aggregation over time, which means the tool combines multiple sampled values into three values: the minimum, maximum, and average value for all of the sampled values during the aggregation period. The aggregation value is the length of the aggregation period, expressed in seconds. It is likely that a user may specify an aggregation value of 60 (sixty) to aggregate over minutes, or perhaps 300 to aggregate over five-minutes periods, or perhaps even 1800 for half-hour periods; but any positive integer is valid.
The tool collects information once per second, which means it produces a complete set of sampled values each second. However, in terms of export, a user may decide that the frequent sampling results in too much information; or perhaps the data is too fine-grained, which is where aggregation may come handy. For example, it may be useful to limit the amount of exported data by grouping (aggregating) the data into periods of, say, 60 (sixty) seconds, because it is sufficient to know the minimum, maximum, and average values of each of the 60 sampled values (for each type of information in the set of sampled values).
In summary, aggregation serves two purposes: (1) limit the amount of data and (2) automatically calculate the minimum, maximum, and average of the sampled values.
Note that aggregation only works for number values; the tool ignores textual values when aggregating.
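For example, combining only options documented in the Syntax section above, a one-minute aggregation of a collection could be exported like this (the collection ID and output path are illustrative):

    $ /opt/napatech3/bin/ntinfo -x -i 1234 -a 60 -o /tmp/export_1min.csv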
The -t/--template option filters the data items in export mode. The tool provides a brief overview of the available templates by specifying a template named help:
$ /opt/napatech3/bin/ntinfo -x -i 1 -t help
Error: Template 'help' specified with -t/--template option is unknown.
The following templates are available:
  hb_util (default)  Utilization in percent and full counter.
  hb_util_pct        Utilization in percent.
  hb_util_cnt        Full counter.
  hb_util_all        All host buffer utilization information.
  hb_map             Mapping of host buffer number to adapter number, feed number, and type (rx/tx).
  ai                 Adapter information.
  si                 System information.
  all                All information.
Report Mode
The tool provides a host buffer utilization report. This report gives an overview of the utilization of the RX host buffers in the system during the past minutes, ten-minutes periods, hours, days, and weeks.
When in report mode, the tool shows the latest information first and works its way backwards (in time).
Output
When exporting data from ntinfo, e.g., ntinfo -i 1 -x, each line contains seven values (seven fields), each delimited by comma. Each field/value is enclosed in double quotes.
These are the fields used for exports:
- utc_time_us
- local_time
- category
- name
- numerator
- val_type
- value
The "utc_time_us" field is the UTC time stamp including microseconds. Thus to get the number of seconds and microseconds parts individually:
ts / 1000000 # ts divided by one million
ts % 1000000 # ts modulus one million
The "local_time" field is a text string that specifies the time stamp in human readable formation, and in the local time zone, e.g.,
"2016/11/24-09:20:12.465736" # Useful when importing into a spreadsheet
The "category" field denotes the logical group that an metric belongs to. Ntinfo provides the following groups:
- ai: adapter information
- si: system information
- hb_map: host buffer mapping
- hb_util: host buffer utilization
A category contains multiple items, each identified by a name, see next paragraph.
The "name" field denotes an item in a category. Ntinfo provides the following names (listed together with the category to which they belong):
The "numerator" field denotes a number of something, for instance the number of a host buffer, or an adapter number. A numerator may be empty, i.e., "".
For instance, the following denotes the number of host buffers for adapter one:
"ai", "num_hbs", "1" # Numerator = 1
The "val_type" field denotes the value type of the sampled value. Ntinfo uses the following two value types: r, # real number, e.g., "0", "3.1415927" t, # text, e.g., "2.11.3.21", "TrafficGen" The "value" field denotes the sampled value.
Note: When aggregating during export (ntinfo -x -a <seconds>), ntinfo provides three value fields for numeric items (not text items), namely: "min_value", "max_value", "avg_value" which are the minimum, maximum, and average values for the aggregation period.
Note: Ntinfo omits text types when aggregating.
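Putting the seven fields together, an exported line might look roughly like the following; every value here, including the metric name, is made up purely for illustration and is not actual tool output:

    "1479975612465736","2016/11/24-09:20:12.465736","hb_util","utilization","0","r","12.5"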
Examples
ntinfo -c
Begin collecting of system information.
ntinfo -l -n 5
List the five latest collections.
ntinfo -l -n 5 -r
List the five oldest collections.
ntinfo -x -i 1234 -t all -o /tmp/export.csv
Export all information for collection 1234 to the file /tmp/export.csv.
ntinfo -x -i 5678 -t hb_util_all -n 9876 -o /tmp/export.csv
Export all host buffer information for collection 5678 to the file /tmp/export.csv; limit the number of data items to 9876.
ntinfo -i 1234
Show the RX host buffer utilization report for collection 1234. | https://docs.napatech.com/r/Reference-Documentation/ntinfo | 2022-06-25T11:20:45 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.napatech.com |
Adding and removing legend
- Legend is on by default. It appears on the right side of the visuals. To add a legend below or above a visual, navigate to the Marks menu, and select the appropriate Legend Style.
- To remove the legend, select .
Here is an example of a line chart with a legend showing. The legend represents dimensions as distinct colors.
For graphs that map a continuum of values, such as choropleth maps, the legend shows a sliding scale of color and values at the minimum, median, and maximum.
| https://docs.cloudera.com/data-visualization/7/howto-customize-visuals/topics/viz-change-legend-style.html | 2022-06-25T11:16:09 | CC-MAIN-2022-27 | 1656103034930.3 | [array(['../images/viz-legend-2.jpg', None], dtype=object)
array(['../images/viz-legend-4.jpg', None], dtype=object)] | docs.cloudera.com |
For production
Installation
In addition to installing lsFusion, these installers/scripts also install OpenJDK, PostgreSQL, and Tomcat. Tomcat is embedded into the lsFusion Client installation, and OpenJDK and PostgreSQL are installed separately (in particular, in separate folders).
- Windows
- Linux
Executable exe files: lsFusion 4.1 Server & Client (+ OpenJDK 11.0.9, PostgreSQL 13.1(x64) / 10.8(x32), Tomcat 9.0.21):
After Installation
Ports
After the installation is completed, the following will by default be locally installed on the computer and launched as services:
- DB server (PostgreSQL) on port 5432
- application server (Server) on port 7652
- web server (Client) on port 8080
Installing / updating an application.
info.
- Windows
- Linux
($APP_DIR$ is equal to $INSTALL_DIR$/lib)
info.
caution
All paths and commands are given below for the major version 4 of the platform (for other versions just replace 4 with the required number, for example lsfusion4-server → lsfusion11-server)
- Windows
- Linux
All paths by default
Paths changed (in particular with symlinks) in accordance with Linux ideology
Updating
Programs installed separately (OpenJDK, PostgreSQL) are also updated separately (for more details about this process, see the documentation for these programs)
- Windows
- Linux
Platform components are also updated separately from each other. To do this, you must download the file of the new version of the component from the central server and replace the following file with it:
Custom installation
If any of the programs listed in the installation (platform components) do not need to be installed / are already installed on your computer:
- Windows
- Linux
These programs can be excluded during installation using the corresponding graphical interface.
The following are scripts for installing specific platform components:
Database Server - PostgreSQL 11:
Application Server - lsFusion 4 Server (+ OpenJDK 1.8):
Web server - lsFusion 4 Client (+ Tomcat 9.0.20):
When installing platform components on different computers, it is also necessary to configure the parameters to connect them to each other.
info
When installing under Windows, the above parameters are requested during the installation process and the parameter files are configured automatically.
Manual setup (file paths, service names)
Startup parameters
- Windows
- Linux
Restart
Any changes made to the startup parameters, as well as changes to lsFusion modules, require a server restart (when changing lsFusion modules only the application server (Server)). This can be done with:
- Windows
- Linux
Application server (Server)
Control Panel > Admin > Services > lsFusion 4 Server
# Stop server
$INSTALL_DIR/Server/bin/lsfusion4_server.exe //SS//lsfusion4_server
# Start server
$INSTALL_DIR/Server/bin/lsfusion4_server.exe //ES//lsfusion4_server
Web server (Client)
Control Panel > Admin > Services > lsFusion 4 Client
# Stop server
$INSTALL_DIR/Client/bin/lsfusion4_client.exe //SS//lsfusion4_client
# Start server
$INSTALL_DIR/Client/bin/lsfusion4_client.exe //ES//lsfusion4_client
Logs
Platform logs are written to the following folders:
- Windows
- Linux
The main logs (including the process of stopping and starting the server) are located in:
- Application server (Server) - stdout
- Web server (Client) - catalina.out (since the web server runs on Tomcat).
Locale
The locale used by the platform is determined based on the locale installed in the operating system. If necessary, it can be changed with:
- Windows
- Linux
Control Panel > Language and Regional Standards
localectl set-locale LANG=ru_RU.utf8 | https://docs.lsfusion.org/next/Execution_auto/ | 2022-06-25T10:35:10 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.lsfusion.org |
xdmp.urlEncode( plaintext as String, [noSpacePlus as Boolean?] ) as String
Converts plaintext into URL-encoded string. To decode the string, use xdmp.urlDecode.
There is also a W3C function that does a slightly different url encoding: fn:encode-for-uri.
xdmp.urlEncode("Why not?") => "Why+not%3f"
Stack Overflow: Get the most useful answers to questions from the MarkLogic community, or ask your own question. | https://docs.marklogic.com/9.0/xdmp.urlEncode | 2022-06-25T11:41:35 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.marklogic.com |
14.1 Problem Analyzer¶
The problem analyzer prints a survey of the structure of the problem, with information about linear constraints and objective, quadratic constraints, conic constraints and variables.
In the initial stages of model formulation the problem analyzer may be used as a quick way of verifying that the model has been built or imported correctly. In later stages it can help revealing special structures within the model that may be used to tune the optimizer’s performance or to identify the causes of numerical difficulties.
The problem analyzer is run using the mosekopt ('anapro') command and produces output similar to the following (this is the problem analyzer's survey of the aflow30a problem from the MIPLIB 2003 collection).
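A typical invocation from the MATLAB toolbox might look like the sketch below; the variable name prob is an assumption and stands for whatever structure holds the problem data.

    mosekopt('anapro', prob);   % print the analyzer survey for the problem stored in prob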
Analyzing the problem

*** Structural report

Dimensions
  Constraints   Variables   Matrix var.   Cones
  479           842         0             0

Constraint and bound types
              Free   Lower   Upper   Ranged   Fixed
Constraints:  0      0       421     0        58
Variables:    0      0       0       842      0

Integer constraint types
  Binary   General
  421      0

*** Data report

          Nonzeros   Min       Max
|cj|:     421        1.1e+01   5.0e+02
|Aij|:    2091       1.0e+00   1.0e+02

          # finite   Min       Max
|blci|:   58         1.0e+00   1.0e+01
|buci|:   479        0.0e+00   1.0e+01
|blxj|:   842        0.0e+00   0.0e+00
|buxj|:   842        1.0e+00   1.0e+02

*** Done analyzing the problem
The survey is divided into a structural and numerical report. The content should be self-explanatory. | https://docs.mosek.com/latest/toolbox/analyzers.html | 2022-06-25T10:40:47 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.mosek.com |
OSocketStream
- class OSocketStream
A base class for ostreams that write to a (possibly non-blocking) socket. It adds is_closed(), which can be called after any write operation fails to check whether the socket has been closed, or whether more data may be sent later.
Inheritance diagram
- bool flush(void)
Sends the most recently queued data now. This only has meaning if
set_collect_tcp()has been set to true. | https://docs.panda3d.org/1.10/cpp/reference/panda3d.core.OSocketStream | 2022-06-25T10:41:19 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.panda3d.org |
BusyIndicator for Xamarin Mobile Blazor Bindings
Telerik BusyIndicator for Xamarin Mobile Blazor Bindings allows you to display a notification whenever a longer-running process is being handled by the application. This makes the UI more informative and the user experience smoother.
The BusyIndicator is part of Telerik UI for Xamarin, a professional grade UI component library for building modern and feature-rich applications. To try it out sign up for a free 30-day trial.
Figure 1: RadBusyIndicator Overview
Key features
Built-in Animations
The busy indicator component provides a set of built-in animations which you can use. They can be changed via the AnimationType property. The property is an enum called AnimationType and it accepts values named Animation1, Animation2, Animation3, and so on up to Animation10. Animation1 is the default one.
- Changing animation size and color: You can set the size of the animation content, which is the animated element. This can be done via the AnimationContentWidthRequest and AnimationContentHeightRequest properties. By default the size of the default animation content is 25x25 pixels.
You can also change the color of the animation with the AnimationContentColor property.
The snippet below shows how you can configure the predefined animations of RadBusyIndicator:
<RadBusyIndicator AnimationContentHeightRequest="100" AnimationContentWidthRequest="100" AnimationType="Telerik.XamarinForms.Primitives.AnimationType.Animation2" AnimationContentColor="Color.Blue" IsBusy="true"/>
and the result:
BusyContent
Content which is displayed when IsBusy is false:
<RadBusyIndicator AnimationContentHeightRequest="100" AnimationContentWidthRequest="100" IsBusy="false"> :
Custom Busy Content
Setting the BusyContent property of RadBusyIndicator allows you to display any content together with the built-in animations while the control is in Busy state.
<RadBusyIndicator AnimationContentHeightRequest="100" AnimationContentWidthRequest="100" AnimationType="Telerik.XamarinForms.Primitives.AnimationType.Animation6" IsBusy="true"> <BusyContent> <Telerik.XamarinForms.Blazor.Primitives.BusyIndicator.BusyContent> <Label Text="This is the content of the RadBusyIndicator control displayed when the indicator is not busy." /> </Telerik.XamarinForms.Blazor.Primitives.BusyIndicator.BusyContent> </BusyContent> </RadBusyIndicator>
and the result:
| https://docs.telerik.com/devtools/xamarin/blazormobilecontrols/busyindicator/busyindicator-blazor-overview | 2022-06-25T11:29:29 | CC-MAIN-2022-27 | 1656103034930.3 | [array(['images/busyindicator-overview.png', 'BusyIndicator example'],
dtype=object)
array(['images/busyindicator-features-animations-0.png',
'BusyIndicator animations'], dtype=object)
array(['images/busyindicator-animations-settings.png',
'BusyIndicator animations'], dtype=object)
array(['images/busyindicator-content.png', 'BusyIndicator content'],
dtype=object)
array(['images/busyindicator-custombusycontent.png',
'BusyIndicator Custom Busy Content'], dtype=object)] | docs.telerik.com |
Your LifeKeeper configuration must meet the following requirements prior to the installation of LifeKeeper for Linux PostgreSQL. Best practice is for a LifeKeeper cluster to have at least two communication paths. Two separate LAN-based communication paths using dual independent sub-nets are recommended for heartbeats, and at least one of these should be configured as a private network. Using a combination of TCP and TTY heartbeats is also supported.
Software Requirements
- TCP/IP Software – Each server in your LifeKeeper configuration requires TCP/IP Software.
このトピックへフィードバック | http://docs.us.sios.com/spslinux/9.3.2/ja/topic/postgresql-hardware-and-software-requirements | 2020-03-29T00:58:49 | CC-MAIN-2020-16 | 1585370493121.36 | [] | docs.us.sios.com |
Requirements. Store artifacts when you receive them, even though they may not be immediately needed.
The following table shows an example directory structure for the security artifacts created in the steps in this and subsequent sections. Use different names if you like,
If you do not see this response, double-check all your steps up to this point: are you working in the correct path? Do you have the proper certificate? and so on. See Getting Support for information about how to contact Cloudera Support and to find out about other sources of help if you cannot successfully import the certificates. | https://docs.cloudera.com/documentation/enterprise/6/6.3/topics/how_to_obtain_server_certs_tls.html | 2020-03-29T00:14:01 | CC-MAIN-2020-16 | 1585370493121.36 | [] | docs.cloudera.com |
Eseutil /P Repair Mode
The Eseutil repair mode corrects database problems at the page and Extensible Storage Engine (ESE) table levels but not at the application level. After repairing a database using Eseutil, ISInteg should be run to repair the database at an application level. To understand what database page level, ESE table levels, and application levels mean, see Database Recovery Strategies. For more information about syntax and instructions for using Eseutil /P, see How to Run Eseutil /P (Repair) in Different Scenarios.
During repair, it may be necessary to discard rows from tables or even entire tables. After completing the ESE-level repairs, it is necessary to perform an application-level repair to correct problems that may now exist at the application level because of missing data. The Information Store Integrity (ISInteg) utility can be used to perform this Exchange application-level analysis and repair. The following example explains how Eseutil repair works.
For example, a table in the database stores messages for all mailboxes. A separate table is used for each user's Inbox folder. Suppose that a message is lost when using Eseutil to repair the message table. Eseutil will not correlate the message with the reference to it in each Inbox folder, because Eseutil does not understand the cross-table schema of the application. ISInteg is needed to compare the repaired message table with each Inbox folder, and to remove a lost message from the Inbox.
In short, Eseutil looks at each Exchange database page and table and ensures consistency and integrity within each table. ISInteg, recommended to be run after Eseutil, repairs a database at the application level and ensures the integrity of the relationships between tables.
Repairing databases involves the following three stages, in this order:
Eseutil is run in /P mode to perform a database page-level and table-level repair
Eseutil is run in /D mode to fully rebuild indexes and defragment the database
ISInteg is then run to repair the database at the application level
Note
A successful repair does not necessarily mean that a database will always be useable. The loss of system metadata may leave a database unmountable or empty. When a database is not repairable, you can restore data from backup or create a new database.
Placing a Repaired Database Back Into Production generate detailed repair log files that list the errors found and corrected. For more information about causes and consequences of specific errors, you can search the Microsoft Knowledge Base and see the topic on Reference for Common Eseutil Errors. Information from these areas can help you decide whether you wish to accept the risks of leaving a repaired database in production.
Eseutil /P Best Practice
Use Eseutil /P when you cannot restore a database from backup or when you cannot fully roll transaction logs forward.
Note
If you cannot roll forward transaction log files, a hybrid strategy is best to follow. You can restore a working version of the database from backup, repair the damaged database in the recovery storage group, and merge the two databases.
Microsoft recommends that you follow these best practices when repairing a database:
Do not allow a repaired database to remain in production for an extended period of time.
Do not use the Eseutil repair option when backup is available.
Do not use Eseutil repair mode to expunge a -1018 error. For more information about error -1018, see Microsoft Knowledge Base article 812531, "Support WebCast: Microsoft Exchange: Understanding and Resolving Error -1018" ().
Previous Exchange Versions
The table below explains how the Eseutil repair mode works in different versions of Exchange:
For More Information
For more information, see the following topics in the Exchange Server Database Utility Guide: | https://docs.microsoft.com/en-us/previous-versions/tn-archive/aa996773(v=exchg.65)?redirectedfrom=MSDN | 2020-03-29T01:04:31 | CC-MAIN-2020-16 | 1585370493121.36 | [] | docs.microsoft.com |
Production
Below is a series of questions and answers around production setups when running Moov services. The hosted Moov service will have different answers than self-hosted options.
Operating System¶
Moov offers two foramts for running services in production: Docker images or compiled binaries for Linux, macOS, and Windows. The supported versions of each OS are what is supported by Docker or Go and Moov makes no attempts to support older operating systems.
Database Backups¶
Backup and restore of database contents is a critical component of production deployments. This is how business operations continue after major system failure.
There are several secure and production-grade backup solutions such as Restic or Tarsnap if you are going to manage your own backups.
SQLite¶
SQLite is a file-based database and by default Moov services don't require auth to access the file. Instead we rely on machine-level restrictions to limit access to the database file and write-ahead log.
Typically backing up a database file would be a shell command followed by copying/encrypting that file to an external data store:
$ sqlite3 paygate.db .backup paygate_backup.sql
More Details:
MySQL¶
MySQL is a network-based database which requires username/password or certificate authentication to connect. The backup process for this database involves a
mysqldump command followed by copying that file to an external data store. | https://docs.moov.io/faq/production/ | 2020-03-29T00:41:33 | CC-MAIN-2020-16 | 1585370493121.36 | [] | docs.moov.io |
Please follow the below steps:
- Login your Ryviu dashboard > Settings > Review Widget.
- Turn on option Approving - Verify customer review before publish (green color).
- Save changes
Once customers write reviews on your store, it will be added to disable reviews. Ryviu will send you the notification via your email and Ryviu dashboard. Click on Review status (green color) to enable it on your store.
It is easy to do. If you need some more help, please feel free to contact us via the live chat widget.
Related articles: Setting section | https://docs.ryviu.com/en/articles/3355096-how-to-approve-reviews-before-publishing-on-your-site | 2020-03-29T00:33:43 | CC-MAIN-2020-16 | 1585370493121.36 | [array(['https://downloads.intercomcdn.com/i/o/150144214/fc851324c647569877029c9a/app.png',
None], dtype=object) ] | docs.ryviu.com |
- There are four possible ways to begin.
- Select the Child Resource Tag from the drop down box. This should be the tag name of the child in the dependency that you want to delete. Click Next to proceed to the next dialog box.
- The dialog then confirms that you have selected the appropriate parent and child resource tags for your dependency deletion. Click Delete | http://docs.us.sios.com/spslinux/9.3.2/ja/topic/deleting-a-resource-dependency | 2020-03-29T00:39:31 | CC-MAIN-2020-16 | 1585370493121.36 | [] | docs.us.sios.com |
Managing Multiple Kafka Versions
Kafka Feature Support in Cloudera Manager and CDH
Using the latest features of Kafka sometimes requires a newer version of Cloudera Manager and/or CDH that supports the feature. For the purpose of understanding upgrades, the table below includes several earlier versions where Kafka was known as CDK Powered by Apache Kafka (CDK in the table).
Client/Broker Compatibility Across Kafka Versions
Maintaining compatibility across different Kafka clients and brokers is a common issue. Mismatches among client and broker versions can occur as part of any of the following scenarios:
- Upgrading your Kafka cluster without upgrading your Kafka clients.
- Using a third-party application that produces to or consumes from your Kafka cluster.
- Having a client program communicating with two or more Kafka clusters (such as consuming from one cluster and producing to a different cluster).
- Using Flume or Spark as a Kafka consumer.
In these cases, it is important to understand client/broker compatibility across Kafka versions. Here are general rules that apply:
- Newer Kafka brokers can talk to older Kafka clients. The reverse is not true: older Kafka brokers cannot talk to newer Kafka clients.
- Changes in either the major part or the minor part of the upstream version major.minor determines whether the client and broker are compatible. Differences among the maintenance versions are not considered when determining compatibility.
As a result, the general pattern for upgrading Kafka from version A to version B is:
- Change Kafka server.properties to refer to version A.
- Upgrade the brokers to version B and restart the cluster.
- Upgrade the clients to version B.
- After all the clients are upgraded, change the properties from the first step to version B and restart the cluster.
Upgrading your Kafka Cluster
At some point, you will want to upgrade your Kafka cluster. Or in some cases, when you upgrade to a newer CDH distribution, then Kafka will be upgraded along with the full CDH upgrade. In either case, you may wish to read the sections below before upgrading.
General Upgrade Information
Previously, Cloudera distributed Kafka as a parcel (CDK) separate from CDH. Installation of the separate Kafka required Cloudera Manager 5.4 or higher. For a list of available parcels and packages, see CDK Powered By Apache Kafka® Version and Packaging Information Kafka bundled along with CDH.
As of CDH 6, Cloudera distributes Kafka bundled along with CDH. Kafka is in the parcel itself. Installation requires Cloudera Manager 6.0 or higher. For installation instructions for Kafka using Cloudera Manager, see Cloudera Installation Guide.
Cloudera recommends that you deploy Kafka on dedicated hosts that are not used for other cluster roles.
Upgrading Kafka from CDH 6.0.0 to other CDH 6 versions
To ensure there is no downtime during an upgrade, these instructions describe performing a rolling upgrade.
Before upgrading, ensure that you set inter.broker.protocol.version and log.message.format.version to the current Kafka version (see Kafka and CM/CDH Supported Versions Matrix), and then unset them after the upgrade. This is a good practice because the newer broker versions can write log entries that the older brokers cannot read. If you need to rollback to the older version, and you have not set inter.broker.protocol.version and log.message.format.version, data loss can occur.
From Cloudera Manager on the cluster to upgrade:
- Explicitly set the Kafka protocol version to match what's being used currently among the brokers and clients.
Update server.properties on all brokers as follows:
- Choose the Kafka service.
- Click Configuration.
- Use the Search field to find the Kafka Broker Advanced Configuration Snippet (Safety Valve) configuration property.
- Add the following properties to the safety valve:
inter.broker.protocol.version = current_Kafka_version log.message.format.version = current_Kafka_version
Make sure you enter full Kafka version numbers with three values, such as 0.10.0. Otherwise, you'll see an error similar to the following:
2018-06. The information is automatically copied to each broker.
- Upgrade CDH. See Upgrading the CDH Cluster.
Do not restart the Kafka service, select Activate Only and click OK.
- Perform a rolling restart.
Select Rolling Restart or Restart based on the downtime that you can afford.
At this point, the brokers are running in compatibility mode with older clients. It can run in this mode indefinitely. If you do upgrade clients, after all clients are upgraded, remove the Safety Valve properties and restart the cluster.
Upstream Upgrade Instructions
The table below links to the upstream Apache Kafka documentation for detailed upgrade instructions. It is recommended that you read the instructions for your specific upgrade to identify any additional steps that apply to your specific upgrade path. | https://docs.cloudera.com/documentation/enterprise/6/6.3/topics/kafka_multiple_versions.html | 2020-03-29T00:13:41 | CC-MAIN-2020-16 | 1585370493121.36 | [] | docs.cloudera.com |
Forum Ask a question Forum › Author "matias-milia"Filter:QuestionsSubscribesSort byViewsAnswersVotesTroubles with Map Explorer while accesing results from Network MappingOpenmatias.milia asked 10 months ago175 views0 answers0 votesHow to select specific text whitin a corpus.Openmatias.milia asked 10 months ago • Questions156 views0 answers0 votesDistant readingResolvedmatias.milia asked 10 months ago173 views0 answers0 votesTrouble parsing a PDF corpusOpenmatias.milia asked 1 year ago • Questions221 views0 answers0 votesHistorical network mapping: cluster info and different outputs.Answeredmatias.milia asked 1 year ago292 views1 answers0 votesNot being able to query a new corpus.Openmatias.milia asked 1 year ago • Error356 views0 answers0 votesStructural Analysis for a huge database (265k entries)Openmatias.milia asked 1 year ago238 views0 answers0 votesCould not access to CorText ManagerClosedmatias.milia asked 1 year ago • Error243 views0 answers0 votes12Next » Ask a question | https://docs.cortext.net/forum/?user=matias-milia | 2020-03-28T23:54:53 | CC-MAIN-2020-16 | 1585370493121.36 | [] | docs.cortext.net |
General Ledger settings - 'Do not allow manual entry' and 'Interrupt in case of error account' detailed explanations
We have been answering a lot of questions lately about what the setting ‘Do not allow manual entry’ really is and how it reacts in different posting scenarios.
The setting applies only when manually entering the account value using a ledger account type in the general journal. So if you have posting profiles set up that would post to this account that would be allowed.
However, depending on what is set in the General ledger parameters, you may experience different behavior when this account is used. See the below screenshot for the other parameter that will affect the behavior, “Interrupt in case of error account”.
The following table lays out the posting behavior when using “Interrupt in case of error account” and “Do not allow manual entry”.
We hope this helps with setting up your posting and accounting policies and understanding how the settings on the main account and GL parameters affects your daily processes. | https://docs.microsoft.com/en-us/archive/blogs/dynamicsaxse/general-ledger-settings-do-not-allow-manual-entry-and-interrupt-in-case-of-error-account-detailed-explanations | 2020-03-29T00:59:49 | CC-MAIN-2020-16 | 1585370493121.36 | [] | docs.microsoft.com |
.
Enable correlation searches
Enable correlation searches to start running adaptive response actions and receiving notable events. Splunk Enterprise Security installs with all correlation searches disabled so that you can choose the searches that are most relevant to your security use cases.
- From the Splunk ES menu bar, select Configure > Content Management.
- Filter the Content Management page by a Type of Correlation Search to view only correlation searches.
- Review the names and descriptions of the correlation searches to determine which ones to enable to support your security use cases.
For example, if compromised accounts are a concern, consider enabling the Concurrent Login Attempts Detected and Brute Force Access Behavior Detected correlation searches.
- In the Actions column, click Enable to enable the searches that you want to enable.
After you enable correlation searches, dashboards start to display notable events, risk scores, and other data.
Change correlation search scheduling
Change the default search type of a correlation search from real-time to scheduled. Splunk Enterprise Security uses indexed real-time searches by default.
- From the Content Management page, locate the correlation search you want to change.
- In the Actions column, click Change to scheduled..
For information on search schedule priority, see the Splunk platform documentation.
-.
Throttle the number of response actions generated by a correlation search
Set up throttling to limit the number of response actions generated by a correlation search. When a correlation search matches an event, it triggers a response action.
By default, every result returned by the correlation search generates a response action. Typically, you may only want one alert of a certain type. You can use throttling to prevent a correlation search from creating more than one alert within a set period. To change the types of results that generate a response action, define trigger conditions. Some response actions allow you to specify a maximum number of results in addition to throttling. See Set up adaptive response actions in Splunk Enterprise Security.
- Select Configure > Content Management.
- Click the title of the correlation search you want to edit.
- Type a Window duration. During this window, any additional event that matches any of the Fields to group by will not create a new alert. After the window ends, the next matching event will create a new alert and apply the throttle conditions again.
- Type the Fields to group by to specify which fields to use when matching similar events. If a field listed here matches a generated alert, the correlation search will not create a new alert. You can define multiple fields. Available fields depend on the search fields that the correlation search returns.
- Save the correlation search.
Throttling applies to any type of correlation search response action and occurs before notable event suppression. See Create and manage notable event suppressions for more on notable event suppression.! | https://docs.splunk.com/Documentation/ES/4.7.1/Admin/Configurecorrelationsearches | 2020-03-29T00:52:33 | CC-MAIN-2020-16 | 1585370493121.36 | [array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)] | docs.splunk.com |
Creating a new decoupled CMS Website¶
Introduction¶
This article goes through the process of creating a fully functional decoupled CMS website that lets you edit Blog Posts and render them.
Decoupled is a development model where the front-end and the back-end (administration) of the site are hosted in the same web application. Developers can then write their own ASP.NET Razor Pages or Controllers to have total control on what is generated by the website.
Prerequisites¶
You should:
- Be able to create a new ASP.NET Core project
- Be familiar with C# and HTML
- Have the .NET SDK installed
- Have Visual Studio .NET or Visual Studio Code
Setting up the project¶
Creating the Orchard Core CMS Web Application¶
Option 1 - From Visual Studio .NET¶
Follow this option if you want to use Visual Studio .NET.
- Open Visual Studio .NET.
- Create a new ASP.NET Core Web Application project.
- Enter a Project name and a Location. For this tutorial we'll use "OrchardSite" for the name. Then click Create.
- Select the Web Application template and click Create.
Option 2 - From the command line¶
From the folder where the project
- Type
dotnet new webapp -o OrchardSitewhere "OrchardSite" is the name of the project to create.
This creates a Web application using Razor Pages.
Testing the Website¶
- Start the project.
The newly created website should be able to run, and look like this:
Adding Orchard Core CMS to the Website¶
- Double-click or edit the .csproj file
- Modify the
<PropertyGroup>section like this:
<PropertyGroup> <TargetFramework>netcoreapp3.1</TargetFramework> <PreserveCompilationReferences>true</PreserveCompilationReferences> </PropertyGroup>
This will allow for the Razor Pages to be reloaded without the need to recompile them.
- Modify the
<ItemGroup>section like this:
<ItemGroup> <PackageReference Include="OrchardCore.Application.Cms.Targets" Version="1.0.0-rc1-10004" /> </ItemGroup>
Nightly builds
If you are using the nightly builds of Orchard Core (MyGet feed) then you should use the package
OrchardCore.Application.Cms.Core.Targets instead.
This will add the packages from Orchard Core CMS
- Edit the
Startup.csfile
ConfigureServicesmethod like this:
public void ConfigureServices(IServiceCollection services) { services.AddOrchardCms(); }
Razor Pages
AddRazorPages must not be called directly as
services.AddOrchardCms() already invokes it internally.
- Edit the
Startup.csfile
Configure
- Remove everything after
app.UseStaticFiles();and replace it by
app.UseOrchardCore();like this:
... app.UseHttpsRedirection(); app.UseStaticFiles(); app.UseOrchardCore(); }
Start the application, the Setup screen shows up:
Setting up a new site¶
The Setup screen expects some information in order to create a new database to store the content and user account.
- Enter a name for your site. In this example we'll use "My Site".
- In the Recipe drop-down, select Blank site which can be used for decoupled and headless modes.
- Select a time zone if the one detected is not correct. All date and times will be entered or rendered relatively to this time zone by default.
- Choose a database server. The easiest way to begin is by selecting Sqlite as it won't require any other step from you.
- In the Super User section, enter some accounts information or your choice. In this example we'll use
adminas the user name.
- Click on Finish Setup.
After a couple seconds the same site as the original template should be displayed, with a "Welcome" message.
If you chose Sqlite, all the state of the application is now stored in a folder named
App_Data inside your project's root folder.
If something went wrong, try deleting the
App_Datafolder if it exists and go through this section again.
Creating Blog Posts¶
This part covers the basic content management concepts of Orchard Core CMS, like Content Types and Content Items.
Content Modeling¶
In Orchard Core CMS most of the content that is managed is called a Content Item. A content item is a versioned document like a page, an article, a blog post, a news item, or anything you need to edit. Each of these documents are based on a Content Type that defines which properties it is made of. For instance any article will have a title and some text. A blog post might also have tags. Orchard Core CMS lets you model the content types the way you want, which is known as content modeling.
For developers
A Content Type is analogous to a class, where a Content Item can be seen as an instance of a Content Type.
Creating a Blog Post content type¶
Orchard comes pre-configured with a set of composable elements of data management called Content Parts that can be used to create custom types like a LEGO. A Title Part for instance will provide a nice editor to enter the title of a content item, and also set it to the text to display by default in the screens. Another important content part is the Markdown Body Part which provides a way to store and render Markdown as the main text of a content item. This is also useful for a Blog Post.
For developers
A Content Part is analogous to a partial class, where each Content Parts are then aggregated to define a Content Type. Content Fields are analogous to custom properties that are added to the Content Type.
Let's create a new content type named
Blog Post and add some necessary content parts to it:
- From the running website, open the url
/admin.
- In the login screen, enter the user credentials that were used during the setup.
- You are presented with the administrative side of the site.
- In the left menu, select Content Definition then Content Types.
- Click on Create new type in the top right corner
- In Display Name enter
Blog Post. The Technical Name will be generated automatically with the value
BlogPost, like this:
- Click Create
- A list of Content Parts is presented. Select Title and Markdown Body, then click on Save
- In the following screen, scroll to the bottom of the page and re-order the Parts like this:
- Then click Save
You can notice an Edit button in front of each content part. This lets us define some settings that might be available for each of them, only for this type.
- On the
MarkdownBodypart, click Edit.
Wysiwyg editoras the type of editor to use, then click Save:
The Blog Post content type is ready to use.
Creating blog posts¶
- In the left menu, select New then click on Blog Post to reveal an editor for the newly created
BlogPostcontent type.
- Fill in the Title and the MarkdownBody form elements with some content, then click on Publish. For the sake of this example we'll use
This is a new dayand some Lorem Ipsum text.
- In the menu, click on Content > Content Items to display all the available content items.
This shows that we now have a new blog post content item named
This is a new day. As we create more content items these will appear on this page.
Rendering content on the website¶
The next step is to create a custom Razor Page that will display any blog post with a custom url.
Creating a custom Razor Page¶
- In the editor, in the
Pagesfolder, create a new file named
BlogPost.cshtmlwith the following content:
@page "/blogpost/{id}" <h1>This is the blog post: @Id</h1> @functions { [FromRoute] public string Id { get; set; } }
- Open the url
/blogpost/1to display the previous page.
Accessing route values
In the route, url segment named
{id} is automatically assigned to the
Id property that is rendered with the
@Id syntax.
Each content item in Orchard Core has a unique and immutable Content Item Identifier. We can use it in our Razor Page to load a blog post.
- Edit the
BlogPost.cshtmlRazor Page like this:
@page "/blogpost/{id}" @inject OrchardCore.IOrchardHelper Orchard @{ var blogPost = await Orchard.GetContentItemByIdAsync(Id); } <h1>This is the blog post: @blogPost.DisplayText</h1> @functions { [FromRoute] public string Id { get; set; } }
- In the Content Items page, click on the blog post we created in the previous section.
- Find the part of the url after
/ContentItems/, which is
4tavbc16br9mx2htvyggzvzmd3in the following screenshot:
- Open the url
/blogpost/[YOUR_ID]by replacing the
- The page should display the actual title of the blog post.
Accessing the other properties of a Content Item¶
In the previous section the
DisplayText property is used to render the title of the blog post. This property is common to every content items, so is the
ContentItemId or
Author for instance. However each Content Type defines a unique set of dynamic properties, like the Markdown Part that we added in the Content Modeling section.
The dynamic properties of a content item are available in the
Content property, as a Json document.
- Edit the Razor Page by adding the following lines after the title:
... <h1>This is the blog post: @blogPost.DisplayText</h1> @Orchard.ConsoleLog(blogPost) ...
- Re-open the Blog Post page with the content item id, then press F12 to visualize the Debugging tools from the browser, then open the Console. The state of the content item should be displayed like this:
All the properties of the current content item are displayed, including the
Content property which contains all the dynamic parts we have configured for the Blog Post content type.
Expanding the
MarkdownBodyPart node reveals the
Markdown field with the content of the blog post.
- Edit the Razor Page to inject this code:
... <h1>@blogPost.DisplayText</h1> <p>@blogPost.Content.MarkdownBodyPart.Markdown</p> @Orchard.ConsoleLog(blogPost) ...
Release packages
If you are not using the latest MyGet packages then this
ConsoleLog method is not available and the project won't compile. You can then skip this line.
- Refresh the blog post page to reveal the Markdown text.
- Finally, we can process the Markdown content and convert it to HTML with this code:
<p>@await Orchard.MarkdownToHtmlAsync((string) blogPost.Content.MarkdownBodyPart.Markdown)</p>
Even though we can load blog posts from their Content Item Id, this is not user friendly and a good SEO optimization is to reuse the title in the URL.
In Orchard Core CMS the Alias Part allows to provide a custom user friendly text to identify a content item.
- In the admin section of the site, open Content Definition > Content Types > Blog Post
- At the bottom of the page, select Add Parts
- Select Alias and click Save
- Move Alias under Title and save
- Edit the blog post, the Alias text box is now displayed, in which you can enter some text. In this example we'll use
new-day
We can now update the Razor Page to use the alias instead of the content item id, in both the URL and in the way we load the content item.
- Change the Razor Page with the following code:
@page "/blogpost/{slug}" @inject OrchardCore.IOrchardHelper Orchard @{ var blogPost = await Orchard.GetContentItemByHandleAsync($"alias:{Slug}"); } ... @functions { [FromRoute] public string Slug { get; set; } }
Release packages
If you are not using the latest MyGet packages then this method is called
GetContentItemByAliasAsync(string alias).
The changes consist in using the
slug name in both the route and the local property, and also use a new method to load a content item with an alias.
- Open the page
/blogpost/new-daywhich should display the exact same result, but using a more SEO and user friendly url.
Generating the slug using a custom pattern¶
Skip on dev
This step is unnecessary if you use the packages from the MyGet feed, or tge source code from the dev branch. If you still follow these steps you'll notice the configuration is already defined.
The Alias Part provides some custom settings in order to let it be generated automatically. In our case we want it to be generated from the Title, automatically. To provide such patterns the CMS uses a templating language named Liquid, together with some custom functions to manipulate content items properties.
- Edit the content definition of Blog Post, and for the Alias Part click on Edit.
- In the Pattern textbox enter
{{ ContentItem.DisplayText | slugify }}, then click Save.
This is dynamically extract the
DisplayText property of a content item, in our case the Title, and call the
slugify on this values, which will turn the title in a value that can be used in slugs.
- Edit the blog post content item
- Clear the Alias textbox. This will allow the system to generate it using the custom pattern we defined.
- Click Publish (and continue).
The alias is now
this-is-a-new-day:
- Open the URL
/blogpost/this-is-a-new-dayto confirm that the route still works with this auto-generated alias.
Assignment
Create a new Blog Post with a Title and verify that the alias is auto-generated, and that it can be displayed using its own custom url.
Configuring the Preview feature for Blog Posts¶
One very useful feature for the users who will have to edit the content is called Preview. If you try to edit a blog post and click on the Preview button, a new window will open with a live preview of the currently edited values.
- While editing an existing blog post, click on Preview, and snap the new windows on the side.
- Edit the Title while the preview windows is visible, and notice how the result updated automatically.
The CMS doesn't know what Razor Page to use when rendering a content item, and will use a generic one instead. However, the same way we provided a pattern for generating an alias, we can provide a pattern to invoke a specific page for previewing a content item.
- Edit the content definition of Blog Post, click Add Parts, then select Preview. Click Save.
- In the list of parts, for Preview, click on Edit to change its settings for this content type.
- In the Pattern textbox, enter
/blogpost/{{ ContentItem.Content.AliasPart.Alias }}which is the way to generate the same URL as the route which is configured in the Razor page.
- Click Save and open the preview of a Blog Post while editing it.
As you can see the preview is now using the specific route we set up for displaying a Blog Post, and editors have a full fidelity experience when editing the content.
Suggestion
A dedicated template can also be used for previews, which would provide hints for editors, or detect mistakes, and render them in the preview window. Users can also change the size of the window to test the rendering on different clients.
Summary¶
In this tutorial we have learned how to
- Start a new Orchard Core CMS project
- Create custom content types
- Edit content items
- Create Razor Pages with custom routes to render then content
- Load content items with different identifiers
- Render wysiwyg preview screens while editing the content | https://orchardcore.readthedocs.io/en/dev/docs/guides/decoupled-cms/ | 2020-03-28T23:07:43 | CC-MAIN-2020-16 | 1585370493121.36 | [array(['images/custom-preview.jpg', 'Final Result'], dtype=object)
array(['images/new-project.jpg', 'New project'], dtype=object)
array(['images/home.jpg', 'Setup'], dtype=object)
array(['images/setup.jpg', 'Setup'], dtype=object)
array(['images/new-content-type.jpg', 'New Content Type'], dtype=object)
array(['images/add-content-parts.jpg', 'Add Content Parts'], dtype=object)
array(['images/edit-content-type.jpg', 'Edit Content Type'], dtype=object)
array(['images/edit-markdownbody.jpg', 'Edit Markdown Body Type'],
dtype=object)
array(['images/edit-blogpost.jpg', 'Edit Blog Post'], dtype=object)
array(['images/content-items-1.jpg', 'Content Items'], dtype=object)
array(['images/content-item-id.jpg', 'Content Item id'], dtype=object)
array(['images/blogpost-id.jpg', 'Blog Post by Id'], dtype=object)
array(['images/console-log.jpg', 'Console Log'], dtype=object)
array(['images/edit-alias.jpg', 'Edit Alias'], dtype=object)
array(['images/alias-pattern.jpg', 'Edit Alias Pattern'], dtype=object)
array(['images/generated-alias.jpg', 'Generated Alias'], dtype=object)
array(['images/this-is-a-new-day.jpg', 'This Is A New Day'], dtype=object)
array(['images/preview-editor.jpg', 'Preview Editor'], dtype=object)
array(['images/preview-pattern.jpg', 'Edit Preview Pattern'], dtype=object)
array(['images/custom-preview.jpg', 'Custom Preview'], dtype=object)] | orchardcore.readthedocs.io |
The SELECT and FROM Clauses
An introduction to the
SELECT and
FROM clauses and their related keywords
SELECT and
FROM
The first clause in any query is the
SELECT clause. The
SELECT clause contains either a list of the columns you want returned from the query separated by a comma, or the wildcard
*. The second clause in the query is the
FROM clause. The
FROM clause indicates against which table to run the query.
The wildcard
* when used after
SELECT means that all the columns in the table should be returned and they are presented in the order in which they are found in the original table.
The query to see everything in the Intakes table would look like:
SELECT * FROM austin_animal_center_intakes
And the first seven rows of the result would look like this:
SELECTand
FROMare both capitalized. This is a convention, not a requirement. The SQL parser is case-insensitive so both upper and lower cases work for keywords. The clauses are also on different lines–another convention. SQL has a lot of conventions. You, however, don’t need to worry about them as data.world autocompletes your keywords and source names for you, and auto-formats the entire query in a very readable, industry-standard format when you run it–an extremely handy feature as you’ll see next when you go to enter specific column names.
Okay, we got a report with all of the data in the table by running a
SELECT * query. But what if we didn’t want all of that information? What if we only wanted to see what type of animal, which sex, what age, and what condition they were in on intake? That query would look like this:
SELECT animal_type, sex_upon_intake, age_upon_intake, intake_condition FROM austin_animal_center_intakes
And the first several rows of the results would render as:
SELECT *query). This feature is very handy for presenting information in order of importance for your purposes, which may or may not be the way in which it was originally captured.
SELECT AS
Another way you can change the presentation of the data–gussy it up, if you will–is to replace the column names in the dataset with something more readable or meaningful to your purpose. In the last example, our query returned data on the type, sex, age and condition of the animals. If we wanted to rename the columns to match those names in our results we would select the same columns, but we would use the keyword
AS after each column name followed by the column name we would like to see.
AS introduces the column name you would like to see in the results of a query.
To get the column names written the way we want, we would write the query thusly:
SELECT animal_type AS Animal, sex_upon_intake AS Sex, age_upon_intake AS Age, intake_condition AS Condition FROM austin_animal_center_intakes
And the table we’d get back would look like:
ASwith nonnumeric characters in the replacement column name (spaces, e.g.,) you have to surround the replacement text with backticks. The backtick (`) is the keyboard character on the same key as the tilda (~)–not to be confused with the single quote character.
If instead of replacing the column names with one-word titles we wanted to just take out the underscores, we’d need to write the query using backticks:
SELECT animal_type AS `animal type`, sex_upon_intake AS `sex upon intake`, age_upon_intake AS `age upon intake`, intake_condition AS `intake condition` FROM austin_animal_center_intakes
Resulting in:
Adding capitalization to our changes would yield:
SELECT animal_type AS `Animal Type`, sex_upon_intake AS `Sex Upon Intake`, age_upon_intake AS `Age Upon Intake`, intake_condition AS `Intake Condition` FROM austin_animal_center_intakes
And the resulting table would look like:
SELECT DISTINCT
The dataset we are using for our examples is very large. There were 54,724 animals taken in by the Austin Texas Animal Center in the almost three years covered by the dataset. If you wanted to know the different types of animals that were taken in (were they all dogs and cats, e.g.,) you could scan through all the data, or you could use the nifty little modifier for the select clause called
DISTINCT.
DISTINCT is used in a
SELECT clause to return unique combinations of data across all the columns returned by the query.
The query for distinct animal types would be written like this:
SELECT DISTINCT animal_type FROM austin_animal_center_intakes
Returning:
DISTINCTcan be a bit tricky to use, but if you keep in mind that what is distinct is the combination of data for a single row across all of the columns displayed, then you are less likely to get tripped up.
If you wanted to show only the unique animal types, but you want to see the columns for age and sex as well you might construct a query like this:
SELECT DISTINCT animal_type, sex_upon_intake, age_upon_intake FROM austin_animal_center_intakes
It is a legitimate query so you wouldn’t get an error, but the resulting table would not have distinct values for just animal type. Instead the values considered to be distinct would be each combination of type, sex, and age. The
DISTINCT qualifier refers to a unique row of data, not a unique column. This is what the first few rows of the results would look like:
For this dataset the number of records (rows) returned was reduced to 539 of the original 75,947.
An introduction to the
LIMIT clause. | https://docs.data.world/documentation/sql/concepts/basic/SELECT_and_FROM.html | 2018-06-18T05:55:07 | CC-MAIN-2018-26 | 1529267860089.11 | [] | docs.data.world |
Many.
Prerequisites
Verify that you have theprivilege on the virtual machine.
Verify that you are familiar with storage controller behavior and limitations. See SCSI and SATA Storage Controller Conditions, Limitations, and Compatibility.
Procedure
- Right-click a virtual machine in the inventory and select Edit Settings.
- On the Virtual Hardware tab, select SCSI Controller from the New device drop-down menu and click Add.
The controller appears in the Virtual Hardware devices list.
- On the Virtual Hardware tab, expand New SCSI Controller, and select the type of sharing in the SCSI Bus Sharing drop-down menu.
- Select the controller type from the drop-down menu.
Do not select a BusLogic Parallel controller for virtual machines with disks larger than 2TB. This controller does not support large capacity hard disks.
- Click OK.
What to do next
You can add a hard disk or other SCSI device to the virtual machine and assign it to the new SCSI controller. | https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.vm_admin.doc/GUID-9D15FD68-6CA1-4CBA-A451-D326BAAB07C9.html | 2018-06-18T05:56:48 | CC-MAIN-2018-26 | 1529267860089.11 | [] | docs.vmware.com |
Retrain a New Resource Manager based web service using the Machine Learning Management PowerShell cmdlets
When you retrain a New web service, you update the predictive web service definition to reference the new trained model.
Prerequisites
You must set up a training experiment and a predictive experiment as shown in Retrain Machine Learning models programmatically.
Important
The predictive experiment must be deployed as an Azure Resource Manager (New) based machine learning web service. To deploy a New web service you must have sufficient permissions in the subscription to which you deploying the web service. For more information, see Manage a Web service using the Azure Machine Learning Web Services portal.
For additional information on Deploying web services, see Deploy an Azure Machine Learning web service.
This process requires that you have installed the Azure Machine Learning Cmdlets. For information installing the Machine Learning cmdlets, see the Azure Machine Learning Cmdlets reference on MSDN.
Copied the following information from the retraining output:
- BaseLocation
- RelativeLocation
The steps you take are:
- Get the web service definition
- Export the Web Service Definition as JSON
- Update the reference to the ilearner blob in the JSON.
- Import the JSON into a Web Service Definition
- Update the web service with new Web Service Definition
You must first sign in to your Azure account from within the PowerShell environment using the Connect-AzureRmAccount cmdlet.
Get the Web Service Definition
Next, get the Web Service by calling the Get-AzureRmMlWebService cmdlet. The Web Service Definition is an internal representation of the trained model of the web service and is not directly modifiable. Make sure that you are retrieving the Web Service Definition for your Predictive experiment and not your training experiment.
$wsd = Get-AzureRmMlWebService -Name 'RetrainSamplePre.2016.8.17.0.3.51.237' -ResourceGroupName 'Default-MachineLearning-SouthCentralUS'
To determine the resource group name of an existing web service, run the Get-AzureRmMl, log on to the Microsoft Azure Machine Learning Web Services portal. Select the web service. The resource group name is the fifth element of the URL of the web service, just after the resourceGroups element. In the following example, the resource group name is Default-MachineLearning-SouthCentralUS.<subcription ID>/resourceGroups/Default-MachineLearning-SouthCentralUS/providers/Microsoft.MachineLearning/webServices/RetrainSamplePre.2016.8.17.0.3.51.237
Export the Web Service Definition as JSON
To modify the definition to the trained model to use the newly Trained Model, you must first use the Export-AzureRmMlWebService cmdlet to export it to a JSON format file.
Export-AzureRmMlWebService -WebService $wsd -OutputFile "C:\temp\mlservice_export.json"
Update the reference to the ilearner blob in the JSON.. This updates the path to reference the new trained model.
"asset3": { "name": "Retrain Samp.le [trained model]", "type": "Resource", "locationInfo": { "uri": "" }, "outputPorts": { "Results dataset": { "type": "Dataset" } } },
Import the JSON into a Web Service Definition
You must use the Import-AzureRmMlWebService cmdlet to convert the modified JSON file back into a Web Service Definition that you can use to update the Web Service Definition.
$wsd = Import-AzureRmMlWebService -InputFile "C:\temp\mlservice_export.json"
Update the web service with new Web Service Definition
Finally, you use Update-AzureRmMlWebService cmdlet to update the Web Service Definition.
Update-AzureRmMlWebService -Name 'RetrainSamplePre.2016.8.17.0.3.51.237' -ResourceGroupName 'Default-MachineLearning-SouthCentralUS' -ServiceUpdates $wsd
Summary
Using the Machine Learning PowerShell management cmdlets, you can update the trained model of a predictive Web Service enabling scenarios such as:
- Periodic model retraining with new data.
- Distribution of a model to customers with the goal of letting them retrain the model using their own data. | https://docs.microsoft.com/en-us/azure/machine-learning/studio/retrain-new-web-service-using-powershell | 2018-06-18T05:52:26 | CC-MAIN-2018-26 | 1529267860089.11 | [] | docs.microsoft.com |
NoLineBreaksAfterKinsoku Class
Custom Set of Characters Which Cannot End a Line.When the object is serialized out as xml, its qualified name is w:noLineBreaksAfter.
Inheritance Hierarchy
System.Object
DocumentFormat.OpenXml.OpenXmlElement
DocumentFormat.OpenXml.OpenXmlLeafElement
DocumentFormat.OpenXml.Wordprocessing.NoLineBreaksAfterKinsoku
Namespace: DocumentFormat.OpenXml.Wordprocessing
Assembly: DocumentFormat.OpenXml (in DocumentFormat.OpenXml.dll)
Syntax
'Declaration Public Class NoLineBreaksAfterKinsoku _ Inherits OpenXmlLeafElement 'Usage Dim instance As NoLineBreaksAfterKinsoku
public class NoLineBreaksAfterKinsoku : OpenXmlLeafElement
Remarks
[ISO/IEC 29500-1 1st Edition]
17.15.1.58 noLineBreaksAfter (Custom Set of Characters Which Cannot End a Line)
This element specifies the set of characters which shall be restricted from ending a line for runs of text which shall be subject to custom line breaking logic using the kinsoku element (§17.3.1.16) when the contents of the document are displayed. This constraint shall only apply to text which has been flagged in the language of this rule via the lang element (§17.3.2.20) or automatic detection methods outside the scope of ISO/IEC 29500.
If this element is omitted, then no custom set of characters shall be used to restrict the characters which can end a line when using the kinsoku element.
[Example: Consider a paragraph of WordprocessingML text displayed as follows, with the dollar symbol $ was flagged as Japanese content using the following WordprocessingML in the run properties:
<w:r> <w:rPr> <w:lang w: </w:rPr> <w:t>$</w:t> </w:r>
This text is displayed and the resulting first line ends with the dollar sign symbol. If this character must not be used to end a line, that requirement would be specified as follows in the document settings:
<w:noLineBreaksAfter w:
The noLineBreaksAfter element's val attribute has a value of ja-JP, specifying that all dollar signs in this document which are marked as Japanese text must not be allowed to end a line. This means that the dollar sign character must therefore be moved to the next line as it can no longer be the last character on a line:
end example]
[Note: The W3C XML Schema definition of this element’s content model (CT_Kinsoku) is located in §A.1. end note]
© ISO/IEC29500: 2008.
Thread Safety
Any public static (Shared in Visual Basic) members of this type are thread safe. Any instance members are not guaranteed to be thread safe.
See Also
Reference
NoLineBreaksAfterKinsoku Members
DocumentFormat.OpenXml.Wordprocessing Namespace | https://docs.microsoft.com/en-us/previous-versions/office/developer/office-2010/cc880907(v=office.14) | 2018-06-18T06:10:09 | CC-MAIN-2018-26 | 1529267860089.11 | [array(['images/cc880907.documentformatopenxmlwordprocessingnolinebreaksafterkinsoku-image001%28en-us%2coffice.14%29.jpg',
'DocumentFormat.OpenXml.Wordprocessing.NoLineBreaks DocumentFormat.OpenXml.Wordprocessing.NoLineBreaks'],
dtype=object)
array(['images/cc880907.documentformatopenxmlwordprocessingnolinebreaksafterkinsoku-image002%28en-us%2coffice.14%29.jpg',
'DocumentFormat.OpenXml.Wordprocessing.NoLineBreaks DocumentFormat.OpenXml.Wordprocessing.NoLineBreaks'],
dtype=object) ] | docs.microsoft.com |
You are viewing an older version of this topic. To go to a different version, use the version menu at the upper-right.
Jetty SSL Transport
The Jetty SSL transport works exactly the same way as the HTTPS Transport Reference.
For example, the following configuration specifies the Jetty-SSL connectors in the first flow and the HTTPS connector in the second:
If you do not need this level of security, you can use the Jetty Transport Reference instead. | https://docs.mulesoft.com/mule-user-guide/v/3.7/jetty-ssl-transport | 2018-06-18T05:21:18 | CC-MAIN-2018-26 | 1529267860089.11 | [] | docs.mulesoft.com |
About
Trigger List
Each item in the list represents an individual trigger. You can add or remove them by using the
+ and
- buttons on the bottom of the pane. The columns are as follows:
- You may use the checkbox to turn a trigger on or off
-
- The Type column has an image indicating what type of trigger it is
- Finally, the Trigger column has a description of the trigger required to activate the command, be it a keystroke, mouse event, etc.
Some triggers (such as mouse triggers) require use of the inspector drawer to set its properties. To access the drawer click the
i button in the bottom-right corner of the pane. | http://docs.blacktree.com/quicksilver/preferences/trigger_preferences | 2009-07-04T01:41:32 | crawl-002 | crawl-002-008 | [] | docs.blacktree.com |
Trace: » extra_scripts
About
The plug-in enables a set of system shell/AppleScripts that emulate some OS functions like restarting and process actions.
Usage and Detailed Documentation
The following scripts and executables are available from the plug-in.
- Classic Shutdown Shuts down Classic environment
- Close Disk Tray Applicable for non slot-loading optical drive
- Eject Eject a disc from the optical drive
- Empty Trash Empties the trash
- Fast Logout
- Force Restart Restart the computer without prompting the user
- Force Shutdown Shut down the computer without prompting the user
- Get External IP Displays all external IPs of the computer if the computer is behind a router
- Get IP Displays all local IPs of the computer
- Hide Others Hide all other application except for the currently active application
- Lock Screen
- Max Volume (60%)
- Mid Volume (40%)
- Min Volume (20%)
- Mute Volume
- Quit Visible Apps
- Restart Restart the computer
- Show All Display all currently active applications on the desktop
- Show Character Palette
- Show Keyboard Viewer
- Shut Down Shut down the computer
- Sleep Puts the computer to sleep
- Switch to Root
- Sync Now
- Toggle Audio Input
- Toggle Audio Output
- Type Clipboard
- top 10 Displays the top 10 processes in terms of CPU usage
Invoke Quicksilver, type the name of the command and set the appropriate action. The default action should be sufficient.
Requirements
- Quicksilver | http://docs.blacktree.com/quicksilver/plug-ins/extra_scripts | 2009-07-04T01:43:03 | crawl-002 | crawl-002-008 | [array(['http://blacktree.com/style/images/sectionicons/docs-icon.png',
None], dtype=object) ] | docs.blacktree.com |
Quicksilver Manual > Plug-ins > Shelf plug-in
The plug-in provides a shelf for temporary storage. The shelf acts like a stack. Newly added items are placed right at the top of the shelf so you can tell in what order items were placed on the shelf. The shortcut key for invoking the shelf is
⌥⌘S.
Usage and Detailed Documentation
When the shelf is initially empty for the first time, it won't be listed in the Catalog preference pane. The moment items are placed on the shelf and the
Quicksilver > Shelf & Clipboard (Catalog) has been rescanned, the shelf would be listed as a scanned object. The catalog rescanning interval is dependent on the value set for
Rescan catalog in the Catalog preference pane via Quicksilver's preferences. To rescan it manually, select
Shelf & Clipboard (Catalog) as the direct object in the first pane and set the action to
Rescan Catalog Entry in the second pane. Press
enter/return to execute the action.
- Put on Shelf Place the selected item on the shelf.
- Invoke Quicksilver
- Select an item as the direct object in the first pane
- Set the action to
Put on Shelfin the second pane
enter/returnto execute the action
- Show Display the shelf on the screen.
- Invoke Quicksilver
Shelfas the direct object in the first pane
- Set the action to
Showin the second pane
enter/returnto execute the action
- To view a list of all items on the shelf:
- Invoke Quicksilver
Shelfas the direct object in the first pane and press the right-arrow key
Requirements
- Quicksilver
Known Issues
- An excessively large shelf can result in sluggish system performance and RAM gobbling. Refer to this thread on the QS forums. | http://docs.blacktree.com/quicksilver/plug-ins/shelf | 2009-07-04T01:45:36 | crawl-002 | crawl-002-008 | [array(['http://blacktree.com/style/images/sectionicons/docs-icon.png',
None], dtype=object) ] | docs.blacktree.com |
Alternative resources for jQuery Documentation / API Helpers
The API listing has all JQuery methods documented in an alphabetic list. If you remember that there's an "is()" method, but not that it's in the Traversing section, this is for you. Similar to the following, but simpler.
Cheat sheets are available as a quick reference to the jQuery API, which can then be printed off. There is one available in HTML format (jQuery 1.1) or a more detailed one (which contains code samples) in Excel format or Google Spreadsheets format.
Browse the jQuery API, even when offline. For Mac OS X v10.4 (Tiger).
Lookup any jQuery function in the API Browser from any application with Sean O's jQueryHelp (Windows only)
Click jQuery 1.0.3 under 'AJAX and Frameworks'. When you visit the site again, it will remember that you have selected it.
The documentation is taken from and a table of contents has been added automatically (using python). | http://docs.jquery.com/Alternative_Resources | 2009-07-04T03:58:15 | crawl-002 | crawl-002-008 | [] | docs.jquery.com |
Revert the most recent 'destructive' operation, changing the set of matched elements to its previous state (right before the destructive operation).
If there was no destructive operation before, an empty set is returned. A 'destructive' operation is any operation that changes the set of matched jQuery elements, which means any Traversing function that returns a jQuery object - including add, andSelf, children, filter, find, map, next, nextAll, not, parent, parents, prev, prevAll, siblings, and slice - plus the clone, appendTo, prependTo, insertBefore, insertAfter, and replaceAll functions (from Manipulation). | http://docs.jquery.com/Traversing/end | 2009-07-04T04:00:14 | crawl-002 | crawl-002-008 | [] | docs.jquery.com |
TrackProductKey
FlexNet Manager Suite 2022 R1 (On-Premises)
Command line | Registry
TrackProductKey determines whether the inventory component
reports product keys from Microsoft Installer (MSI) packages as part of software inventory.
Normal behavior is to include the MSI keys; but this preference is set false in the tracker
command line used for high-frequency hardware inventory checks to support subcapacity
calculations for IBM PVU licenses.
Values
Command line
Registry
FlexNet Manager Suite (On-Premises)
2022 R1 | https://docs.flexera.com/FlexNetManagerSuite2022R1/EN/GatherFNInv/SysRef/FlexNetInventoryAgent/topics/PMD-TrackProductKey.html | 2022-08-07T22:53:52 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.flexera.com |
Software Download Directory
frevvo Latest - This documentation is for frevvo v10.3. v10.3 is a Cloud Only release. Not for you? Earlier documentation is available too.
frevvo makes it easy to select layouts and styling for your projects, portals and forms/workflows. The designer can pick from four different layouts and four global styles where you can specify colors, font name and other properties. You cannot customize layouts but you can create custom styles.
This Expense Report uses the Compact Layout and Aqua Style.
See the CSS Class property for more form:.
This image shows the same panel when the Tight layout is selected for the form. Notice the changes in the vertical and horizontal spacing and the rounded input control corners are replaced by square corners.
This image shows the same panel when the Nouveau Square layout is selected for the form. The Nouveau Square layout has the vertical and horizontal spacing of the Nouveau layout with square corners for input controls. and user., workflow or project. Styles must be manually moved from your test/development server to your production environment. You do this by selecting Download from the menu for each style you want to move from your test/development server and then selecting Upload from the Add menu in the styles tab on your production server. Remember to upload the style at the same level on your production server as you had on your test/development server. You can also sort Styles alphabetically (A-Z or Z-A).
frevvo provides four styles that can be chosen from the Style dropdown choices on the Style tab of the Form Properties panel. You will not see Global Styles on the frevvo Style tab . Only styles created by designers and the initial tenant admin will display there.
The four Global Styles are: Blue, Neutral, Green, Aqua. You can use them for your forms, workflows and portals.
You can override the background gradient start and end colors for the dropdown, link, message, section, table header, trigger and required or optional upload controls.
Decorator colors for the link, trigger and upload controls can be overridden. Tab control background and border colors can also be changed.
There are three buttons at the bottom of the page:
The default font size for forms/workflows is 14. This cannot be changed.
Applying the Style below to a form displays the control colors as shown.
Control Override Section of a Style
Form with the Style containing the Control Override Colors shown applied to it/workflow. edit, download or delete, click on the style file will be downloaded. It is a properties file that can be edited using any text editor.Menu and select Download. Styles project, form, workflow and portal levels. If no style is selected, the default global style Blue is used. If no style is set for a form/workflow, it will inherit the style selected for the project. If a form/workflow is embedded in a portal, the portal style will take over even for forms/workflows where you selected a specific style. A style applied to a workflow will be inherited by all the forms contained within it.
Layouts can be selected for projects, forms and workflows but not portals., then select the Settings editing mode. portal using the URL parameter _styleId. This parameter take precedence over a style selected at the project or form level. Append _styleId=<style name> to the share link of your form/workflow or portal.. | https://docs.frevvo.com/d/display/frevvo103/Layouts+and+Styles | 2022-08-07T22:13:10 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.frevvo.com |
Profiling with TensorFlow¶
In compliance with the original guide Optimize TensorFlow performance using the Profiler, you can choose any one of the Profiling APIs to perform profiling. Using TensorBoard Keras Callback is the recommended API.
Add the text below to your training script:
#])
Note that the TensorBoard object was passed to the fit method. Make sure to specify the sequence of steps (batches) you want to profile while taking the limited capacity of your buffer (which collects the data in the Synapse Profiling Subsystem) into consideration.
Start the TensorBoard server in a dedicated terminal window:
$ tensorboard --logdir logs --bind_all --port=5990
In the example above, the listening port is set to 5990.
Open new window tab in your browser and check out your TensorBoard website:
Now you are ready to go and start your training.
The TensorBoard generates two kinds of information.
While your workload is being processed step by step (batch by batch), on the dashboard, you can monitor (online) the training process by tracking your model cost (loss) and accuracy.
Right after the last requested step was completed, the whole bunch of collected profiling data is analyzed (by TensorFlow) and submitted to your browser. No need to wait for the end of the training process.
Note
Carefully consider the number of steps you really need to profile and think of limited buffer size.
If needed, for buffer extension consult SynapseAI Profiler User Guide.
For vast majority of use cases, default settings are just good enough so that no special internal parameter adjustment is needed.
An error:
Unknown device vendormight appear. This error appears due to TensorFlow not recognizing Gaudi. It does not affect the performance and will be removed in future releases. | https://docs.habana.ai/en/latest/Profiling/Profiling_with_TensorFlow.html | 2022-08-07T21:28:29 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.habana.ai |
Cell¶
- class
pygsheets.
Cell(pos, val='', worksheet=None, cell_data=None)[source]¶
Represents a single cell of a sheet.
Each cell is either a simple local value or directly linked to a specific cell of a sheet. When linked any changes to the cell will update the
Worksheetimmediately.
borders= None¶
Border Properties as dictionary. Reference: api object.
parse_value= None¶
Determines how values are interpreted by Google Sheets (True: USER_ENTERED; False: RAW).
Reference: sheets api
horizontal_alignment¶
Horizontal alignment of the value in this cell. possible vlaues:
HorizontalAlignment
vertical_alignment¶
Vertical alignment of the value in this cell. possible vlaues:
VerticalAlignment
wrap_strategy¶
How to wrap text in this cell. Possible wrap strategies: ‘OVERFLOW_CELL’, ‘LEGACY_WRAP’, ‘CLIP’, ‘WRAP’. Reference: api docs
set_text_format(attribute, value)[source]¶
Set a text format property of this cell.
Each format property must be set individually. Any format property which is not set will be considered unspecified.
- Attribute:
- foregroundColor: Sets the texts color. (tuple as (red, green, blue, alpha))
- fontFamily: Sets the texts font. (string)
- fontSize: Sets the text size. (integer)
- bold: Set/remove bold format. (boolean)
- italic: Set/remove italic format. (boolean)
- strikethrough: Set/remove strike through format. (boolean)
- underline: Set/remove underline format. (boolean)
set_text_rotation(attribute, value)[source]¶
The rotation applied to text in this cell.
Can be defined as “angle” or as “vertical”. May not define both!
- angle:
[number] The angle between the standard orientation and the desired orientation. Measured in degrees. Valid values are between -90 and 90. Positive angles are angled upwards, negative are angled downwards.
Note: For LTR text direction positive angles are in the counterclockwise direction, whereas for RTL they are in the clockwise direction.
- vertical:
- [boolean] If true, text reads top to bottom, but the orientation of individual characters is unchanged.
Reference: api_docs <>__
unlink()[source]¶
Unlink this cell from its worksheet.
Unlinked cells will no longer automatically update the sheet when changed. Use update or link to update the sheet.
link(worksheet=None, update=False)[source]¶
Link cell with the specified worksheet.
Linked cells will synchronize any changes with the sheet as they happen.
refresh()[source]¶
Refresh the value and properties in this cell from the linked worksheet. Same as fetch.
update(force=False, get_request=False, worksheet_id=None)[source]¶
Update the cell of the linked sheet or the worksheet given as parameter. | https://pygsheets.readthedocs.io/en/stable/cell.html | 2022-08-07T21:28:32 | CC-MAIN-2022-33 | 1659882570730.59 | [] | pygsheets.readthedocs.io |
Date: Sun, 7 Aug 2022 23:11:08 +0000 (UTC) Message-ID: <[email protected]> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_12812_218983248.1659913868959" ------=_Part_12812_218983248.1659913868959 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location:
First navigate to the Flows Home Page inside one of your applications. S= ee the Flows Home Page = for help navigating here.
We recommend that tenant admin users not create or edit flows, nor= have roles assigned to them. These users should restrict themselves to adm= inistrative tasks.
Once you are in a flows home page, start by clicking New
to add your first applic=
ation. Then click the name of the application or the Edit
icon and clic=
k New to create your first flow.
A unique and arbitrary flow name will be generated automatically--Flow 3=
1 for example. You'll change this name when you begin working on your flow =
to something meaningful like Employee On Boarding shown below.
The Flow Designer has several components. Each is described below.
On This Page:
The palette contains all of the forms in your application's form tab, a = new form, an HTTP and a Summary step. Click on the form you want and drag i= t into your flow. See adding steps to your flow for more information.
The Properties area shows the properties that are available for the enti= re flow or those available for flow steps.
Some properties, such as Task Information and Pending Message offe= r the designer the opportunity to define them on the flow level or customiz= e them for individual steps.
The right side of the Flow Designer is the work area for the flows you c= reate. At the top of this area is the toolbar.
The toolbar at the top of the flow work area is visible when you are des= igning your flows but it is not visible to you when you test your flows or = to your users when they access your flows. Notice the icons on the toolbar.= Hovering over the icons displays a tool tip which describes it"s function.= The toolbar includes the following icons:
The current version of a workflow is shown to the right of the flow titl=
e. When you create your flow, the version number will be zero. It will incr=
ease each time the flow is saved by clicking on the
save an=
d exit or save and test icons. This fea=
ture can help designers ensure they are working with the latest v=
ersion of the flow. | https://docs.frevvo.com/d/exportword?pageId=22448728 | 2022-08-07T23:11:08 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.frevvo.com |
Overview
In this document we will go over the additional features and modules that you are able to add onto your websites to improve their functionality. Modules are additional features (sometimes sold separately or sometimes parts of a bundle) that will add extra power to the engine of your website or give you new tools that help you manage your business more efficiently. Adding these modules will add an amount to your monthly fee but can assist you in increasing your sales.
Why would I use modules?
The base CMS tools that you get with the standard Getlocal Portal package are fantastic for starting your online sales journey. You will get a fantastic looking website that is very fast, mobile friendly and most importantly, able to sell directly to your customers. It is perfect for small companies or experience providers with around 10 products and offers them a way to shine. But as businesses grow, so do their needs. For example as your product portfolio grows you may find that pages with grouped product lists start to become confusing and so you want to offer a search engine tool to help customers find what they want. We have a module for that! You may find over time that you do great one off sales but are not getting much repeat business so you would like a way to send receipts to your guests with cross-selling options and promotions. We have a module for that.
We have used our knowledge and experience to group and price these offerings in a way that you only need to pay for the things you need for your business to grow and at a price that won't hold that growth back. | https://docs.getlocal.travel/concepts/modules-introduction | 2022-08-07T21:52:40 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.getlocal.travel |
.. include:: /Includes.rst.txt ====================================================================== Breaking: #81775 - suffix form identifier with the content element uid ====================================================================== See :issue:`81775` Description =========== If a form is rendered through the "form" content element, the identifier of the form is modified with a suffix. The form identifier will be suffixed with "-$contentElementUid" (e.g. "myForm-65"). Impact ====== All form element names within the frontend will change from e.g. .. code-block:: html to .. code-block:: html if the form is rendered through the "form" content element. Affected Installations ====================== All instances, that render forms through the "form" content element. .. index:: Frontend, ext:form, NotScanned | https://docs.typo3.org/c/typo3/cms-core/10.4/en-us/_sources/Changelog/9.0/Breaking-81775-ExtFormSuffixFormIdentifierWithContentElementUid.rst.txt | 2022-08-07T23:01:45 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.typo3.org |
Launch consoles from the Infrastructure page
A browser console permits you to access a remote server using a web browser. To launch a browser console from the Infrastructure page, perform the following steps:
Go to Infrastructure > Search.
In the Search field, type the OpsQL query. Example query:
$agent.installed IS "true"
From the list of resources displayed, select the target device, and click.
In the slide-out screen that appears, click the Actions and then, the Launch Remote Session buttons. In the Launch Remote Session window > Consoles field:
- Select a console from the existing list. (or)
- Add a new console by clicking the Add button and enter the following information:
- Name: Enter the name of the console.
- Connector: Enter the element (Agent or Gateway) managing the target device.
- Protocol: Protocol that you want to use to communicate with the target device.
The following protocols are supported:
- RDP
- SSH: The Password or Public Key option is available. Select the Public key.
- TELNET
- Username: Login username of the target device.
- Password: Login password of the target device.
- Confirm Password: Reenter the password.
- Port: Port that transmits and receives data. Enter 22 as the port value.
- IP address: IP Address of the target device.
Click Add after entering the information. A new console is added.
Select the newly added console. Enter the following information on the Launch Remote Session window:
- Activity Log Notes: Enter the activity log notes information. This is a mandatory field.
- Enable upload or download: Select to upload or download a file to the console.
- In the Credentials field: Select the applicable option for credentials.
- Use Credentials: Select the existing credentials or click Add to create new credentials. In the Add Credential window that appears, enter the following information:
- Name: Enter the name of the credentials.
- Description: Provide a description.
- Type: Select the protocol type.
- Authentication Type (mandatory field): The Password or Keypair option is available. Select the keypair and Upload via file option.
For the keypair option, uploading a file through the Upload via file option is mandatory.
For the password option, the username, password, and confirm password fields are mandatory.
- Pass Phrase: Enter the Pass Phrase.
- Username: Enter the username. (Mandatory, only if you select the password as the Authentication Type)
- Password: Enter the password. (Visible, only if you select the password as the Authentication Type)
- Confirm Password: Reenter the password. (Visible, only if you select the password as the Authentication Type)
- Port: Enter the port number - 22.
- Connection Timeout (ms): Enter a value. Click Add. A new set of credentials is added.
- I have Credentials: If you have the credentials, select the option, upload the public key, and enter a Pass Phrase. Private keys are required for SSH keypair-based authentication.
Click Launch.
The console will be launched. Enter your credentials in the browser console.
The target remote device is now connected, enabling activities to be performed on the device. | https://jpdemopod2.docs.opsramp.com/platform-features/feature-guides/remote-consoles/launching-browser-consoles-from-infrastructure/ | 2022-08-07T22:56:57 | CC-MAIN-2022-33 | 1659882570730.59 | [] | jpdemopod2.docs.opsramp.com |
Getting Updates¶
Back us on Patreon or GitHub Sponsors. Your continued support helps us provide regular updates and services like world maps. Thank you!
Docker Compose¶
Open a terminal and change to the folder where the
docker-compose.yml file is located.1
Now run the following commands to download the most recent image from Docker Hub and
restart your instance in the background:
docker-compose pull docker-compose stop docker-compose up -d
Pulling a new version can take several minutes, depending on your internet connection speed.
Advanced users can add this to a
Makefile so that they only have to type a single
command like
make update. See Command-Line Interface
to learn more about terminal commands.
Even when you use an image with the
:latest tag, Docker does not automatically download new images for you. You can either manually upgrade as shown above, or set up a service like Watchtower to get automatic updates.
Config Examples¶
We recommend that you compare your own
docker-compose.yml with our latest examples from time to time, as they may include new config options or other improvements relevant to you.
MariaDB Server¶
Our config examples are generally based on the latest stable release to take advantage of performance improvements. This does not mean older versions are no longer supported and you have to upgrade immediately.
If MariaDB fails to start after upgrading from an earlier version (or migrating from MySQL), the internal management schema may be outdated. See Troubleshooting MariaDB Problems for instructions on how to fix this.
Development Preview¶
You can test upcoming features and improvements by changing the image from
photoprism/photoprism:latest
to
photoprism/photoprism:preview in your
docker-compose.yml.
Then pull the most recent image and restart your instance as shown above.
Raspberry Pi¶
Our stable version and development preview have been built into a single multi-arch Docker image for 64-bit AMD, Intel, and ARM processors.
That means, Raspberry Pi 3 / 4, Apple Silicon, and other ARM64-based devices can pull from the same repository, enjoy the exact same functionality, and can follow the regular Installation Instructions after going through a short list of System Requirements and Architecture Specific Notes.
Try explicitly pulling the ARM64 version if you've booted your device with the
arm_64bit=1 flag
and you see the "no matching manifest" error on Raspberry Pi OS (Raspbian):
docker pull --platform=arm64 photoprism/photoprism:latest
If you do not have legacy software, we recommend choosing a standard 64-bit Linux distribution as this requires less experience. Alternative 32-bit Docker images are provided for ARMv7-based devices.
Darktable is not included in the ARMv7 version because it is not 32-bit compatible.
Face Recognition¶
Existing users may index faces without performing a complete rescan:
docker-compose exec photoprism photoprism faces index
Remove existing people and faces for a clean start e.g. after upgrading from our development preview:
docker-compose exec photoprism photoprism faces reset -f
Watchtower¶
Adding Watchtower as a service to your
docker-compose.yml will
automatically keep images up-to-date:
services: watchtower: image: containrrr/watchtower restart: unless-stopped volumes: - "/var/run/docker.sock:/var/run/docker.sock"
Users of our DigitalOcean 1-Click App have Watchtower pre-installed.
Caution
Automatic updates may interrupt indexing and import operations. Only enable Watchtower if you are comfortable with this.
Pure Docker¶
Open a terminal on your server, and run the following command to pull the most recent container image:
docker pull photoprism/photoprism:latest
See Running PhotoPrism with Docker for a command reference.
The default Docker Compose config filename is
docker-compose.yml. For simplicity, it doesn't need to be specified when running the
docker-composecommand in the same directory. Config files for other apps or instances should be placed in separate folders. ↩ | https://docs.photoprism.app/getting-started/updates/ | 2022-08-07T21:17:37 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.photoprism.app |
Shared Code¶
By default, PlatformIO does not build the main source code from the src_dir folder in pair with a test source code. If you have a shared/common code between your “main” and “test” programs, you have 2 options:
We recommend splitting the source code into multiple components and placing them into the lib_dir (project’s private libraries and components). Library Dependency Finder (LDF) will find and include these libraries automatically in the build process. You can include any library/component header file in your test or main application source code using
#include <MyComponent.h>.
See Local & Embedded: Calculator for an example, where we have a “calculator” component in the lib_dir folder and include it in the tests and the main application using
#include <calculator.h>.
NOT RECOMMENDED. Manually instruct PlatformIO to build the main source code from the src_dir folder in pair with a test source code using the test_build_src option in “platformio.ini” (Project Configuration File):
[env:myenv] platform = ... test_build_src = true
This is very useful if you unit test independent libraries where you can’t split source code.
Warning
Please note that you will need to use
#ifndef PIO_UNIT_TESTINGand
#endifguard to hide non-test related source code. For example, own
main(),
setup() / loop(), or
app_main()functions. | https://docs.platformio.org/en/stable/advanced/unit-testing/structure/shared-code.html | 2022-08-07T23:04:42 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.platformio.org |
For better security, you can use valid SSL certificate to access the gateway. When you access the gateway Web User interface, you receive a message about invalid certificate, but can still ignore and proceed to access.
The gateway web user interface runs on the Nginx web server and is shipped with the self-signed SSL certificate by default.
Because the certificate is issued to
gateway.opsramp.com and your domain is not part of
*.opsramp.com, the invalid certificate message is displayed.
To avoid the message, you need to upload a valid SSL certificate.
Upload SSL certificates
If your organization has a valid SSL certificate, you can upload your valid SSL certificate. Verify that the SSL certificate file is in
.pem format and contains the certificate and private key.
- Log into the gateway interface.
- From the left pane, click the Nginx SSL Configuration option.
- Upload your certificate on the SSL Certificate Configuration screen.
- Click Restart to apply the changes.
If you want to use the default gateway SSL certificate, click Reset.
Access gateways with SSL certificates
If your organization has a valid certificate issued to
*.yourcompany.com and your gateway is part of the domain and the hostname of the gateway is
host1, then you can access the gateway directly without using the IP address.
For example: Use to access the gateway. | https://jpdemopod2.docs.opsramp.com/platform-features/gateways/gateway-apply-ssl-certificate/ | 2022-08-07T21:20:59 | CC-MAIN-2022-33 | 1659882570730.59 | [] | jpdemopod2.docs.opsramp.com |
View source for Global configuration/uk
← J3.x:Global configuration/uk
You do not have permission to edit this page, for the following reasons below:
- Editing of this page is limited to registered doc users in the group: Email Confirmed.
- This page cannot be updated manually. This page is a translation of the page J3.x:Global configuration and the translation can be updated using the translation tool.
- You must confirm your email address before editing pages. Please set and validate your email address through your user preferences.
You can view and copy the source of this page.
Templates used on this page:
- Template:- (view source)
- Template:AmboxNew (view source) (protected)
- Template:CatInclude (view source) (protected)
- Template:Incomplete (view source)
- Template:Joomla version (view source)
- Template:Joomla version/layout (view source)
- Template:Last edited by (view source)
- Template:Max (view source)
- Template:Max/2 (view source)
- Template:Pagetype (view source)
- Template:Plural (view source)
- Template:Rarr (view source)
- Template:Time ago (view source)
- Template:Time ago/core (view source)
- Template:Toolbar (view source)
- Template:Translation language (view source)
- Template:Version-msg-latest-tooltip/uk (view source)
Return to J3.x:Global configuration/uk. | https://docs.joomla.org/index.php?title=J3.x:Global_configuration/uk&action=edit&oldid=305410 | 2022-08-07T22:06:08 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.joomla.org |
Part of every online store are of course the products that shall be sold. The configuration of those products and the management of those in the category tree structure are explained in this part of the documentation.
Also you find all information regarding special product forms like variants or esd products as well as productspecific basics like ratings, filters or suppliers here. | https://docs.shopware.com/en/shopware-5-en/products-and-categories | 2022-08-07T21:53:00 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.shopware.com |
You can start, stop, and restart the collectors from the Data Collectors tab.
To go the Collector Management page, navigate to Administration > Data Collector.
The My Collectors page lists the collector containers that you created.
Operations on Collectors
You can perform operations such as Start, Stop, Restart, Edit, and Delete on a collector by clicking the ellipsis (⋮) icon. When you click any of these buttons, the status of the operation is displayed and the list of collectors is refreshed. When you select the Delete operation, you must confirm your action for the collector to be deleted.
You can modify the configuration of a collector by selecting the Edit operation. The collector configuration page appears. After making the required changes, click the Update button. This action updates the configuration and then redirects you to the My Collectors page.
Collector Store
To view the list of available collector types, click Collector Store at the top-right corner of the My Collectors page. You are redirected to the Collector Store page where the list of all available collector types is available.
Collectors Grouped on Type
To list all the grouped collectors under a collector type, click the collector type.
When you click a specific collector type, you are redirected to the Create Collector page.
To create a collector, enter the required information and click Create. The My collectors page displays the collector that you have created. | https://docs.vmware.com/en/VMware-Telco-Cloud-Operations/1.0.1/vmware-tco-101-user-guide/GUID-EC14B3BF-305B-43EB-BB00-1902E5A7DBF8.html | 2022-08-07T23:11:53 | CC-MAIN-2022-33 | 1659882570730.59 | [array(['images/GUID-B7A26AA1-61A4-4E08-A6BF-8CBA11D648A1-low.png', None],
dtype=object)
array(['images/GUID-24019980-D51D-47A9-AD88-360FAD28F76A-low.png', None],
dtype=object)
array(['images/GUID-EB8D42C6-DA7E-4D2B-9437-BB354BF55201-low.png', None],
dtype=object)
array(['images/GUID-91100BCF-8C81-4577-9E31-DD842522E121-low.png', None],
dtype=object) ] | docs.vmware.com |
Autodiscovery can be used to discover SNMPv1 or v2c systems in IPv4 networks. Autodiscovery requires discovery filters. For systems that match a discovery filter and are added to the modeled topology, they too are probed for IP addresses of their neighbors. The autodiscovery cycle continues until no more new IP addresses match the discovery filters. The discovery filters, which are configurable, ensure that the IP Manager sweeps only the networks that the customer systems are configured to access. Preparing for autodiscovery consists of the following tasks as listed in Tasks to prepare for autodiscovery:
In the discussions, you will see several references to the Pending Devices list, Discovery Progress window, pending discovery, and full discovery for the IP Manager.
Chapter 3, Discovery, in the IP Manager Concepts Guide provides information about the Pending Devices list that appears in the Discovery Progress window, and “Scheduling automatic discovery” on page 85 provides information about pending discovery and full discovery. | https://docs.vmware.com/en/VMware-Telco-Cloud-Service-Assurance/2.0.0/ip-manager-user-guide/GUID-56D26349-E0F8-4BD2-A4E9-9A39951C7965.html | 2022-08-07T23:29:46 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.vmware.com |
Setting Cache Security Thresholds
When you implement data key caching, you need to configure the security thresholds that the caching CMM enforces.
The security thresholds help you to limit how long each cached data key is used and how much data is protected under each data key. The caching CMM returns cached data keys only when the cache entry conforms to all of the security thresholds. If the cache entry exceeds any threshold, the entry is not used for the current operation and it is evicted from the cache.
As a rule, use the minimum amount of caching that is required to meet your cost and performance goals.
The AWS Encryption SDK only caches data keys that are encrypted by using a key derivation function. Also, it establishes upper limits for the threshold values. These restrictions ensure that data keys are not reused beyond their cryptographic limits. However, because your plaintext data keys are cached (in memory, by default), try to minimize the time that the keys are saved . Also, try to limit the data that might be exposed if a key is compromised.
For examples of setting cache security thresholds, see AWS Encryption SDK: How to Decide if Data Key Caching is Right for Your Application in the AWS Security Blog.
Note
The caching CMM enforces all of the following thresholds. If you do not specify an optional value, the caching CMM uses the default value.
To disable data key caching temporarily, do not set the cache capacity or security thresholds to 0. Instead, use the null cryptographic materials cache (NullCryptoMaterialsCache) that the AWS Encryption SDK provides. The NullCryptoMaterialsCache returns a miss for every get request and does not respond to put requests. For more information, see the SDK for your programming language.
- Maximum age (required)
Determines how long a cached entry can be used, beginning when it was added. This value is required. Enter a value greater than 0. There is no maximum value.
The LocalCryptoMaterialsCache tries to evict cache entries as soon as possible after they reach the maximum age value. Other conforming caches might perform differently.
Use the shortest interval that still allows your application to benefit from the cache. You can use the maximum age threshold like a key rotation policy. Use it to limit reuse of data keys, minimize exposure of cryptographic materials, and evict data keys whose policies might have changed while they were cached.
- Maximum messages encrypted (optional)
Specifies the maximum number of messages that a cached data key can encrypt. This value is optional. Enter a value between 1 and 2^32 messages. The default value is 2^32 messages.
Set the number of messages protected by each cached key to be large enough to get value from reuse, but small enough to limit the number of messages that might be exposed if a key is compromised.
- Maximum bytes encrypted (optional)
Specifies the maximum number of bytes that a cached data key can encrypt. This value is optional. Enter a value between 0 and 2^63 - 1. The default value is 2^63 - 1. A value of 0 lets you encrypt empty message strings.
The first use of each data key (before caching) is exempt from this threshold. Also, to enforce this threshold, requests to encrypt data of unknown size, such as streamed data with no length specifier, do not use the data key cache.
The bytes in the current request are included when evaluating this threshold. If the bytes processed, plus current bytes, exceed the threshold, the cached data key is evicted from the cache, even though it might have been used on a smaller request. | https://docs.aws.amazon.com/encryption-sdk/latest/developer-guide/thresholds.html | 2019-01-16T06:15:08 | CC-MAIN-2019-04 | 1547583656897.10 | [] | docs.aws.amazon.com |
Agent Desktop
Agent Desktop lets contact center agents communicate with customers and team members through channels such as calls, chats, and email.
You can
- respond to or contact customers through the channels assigned to you
- get help from team members
- find standard responses to customer questions
- make sure that you are meeting your center's expectations
Ready? Watch the video for a quick tour of Agent Desktop, and then get started.
Looking for answers to specific questions? Try these topics:
Lost? See Navigating Agent Desktop.
Tip
The pictures and videos in this Help document show native Genesys Agent Desktop. Your company might have customized many features including corporate logos and the name of the product. This document uses the name Agent Desktop to mean the application that you use to handle calls and other interactions, and to manage your work and your contacts.
This page was last modified on June 5, 2018, at 07:15.
Feedback
Comment on this article: | https://docs.genesys.com/Documentation/PSAAS/latest/Agent/AgentDesktop | 2019-01-16T05:59:04 | CC-MAIN-2019-04 | 1547583656897.10 | [] | docs.genesys.com |
Use the datastore file browser to move files or folders to a new location, either on the same datastore or on a different datastore.
About this task
Note:
Virtual disk files are moved and copied without format conversion. If you move a virtual disk to a datastore on a type of host that is different from the type of the source host, you might need to convert the virtual disks before you can use them.
Prerequisites
Required privilege:
Procedure
- Click Storage in the VMware Host Client inventory and click Datastores.
- Click File browser.
- Select the target datastore.
- Select the file or folder that you want to move to another location and click Move.
- Select your target destination and click Move.
- Click Close to exit the file browser. | https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.html.hostclient.doc/GUID-04353CD9-8F7D-49BC-9FEC-121603ABBD84.html | 2019-01-16T06:02:38 | CC-MAIN-2019-04 | 1547583656897.10 | [] | docs.vmware.com |
Magento Commerce only
The default email reminder template can be customized, and additional templates created for different promotions. Email reminders have a selection of specific variables that can be incorporated into the message. The information in these variables is determined by the email reminder rule that you set up, and by the cart price rule that is associated with the coupon. The Insert Variable button can be used to insert the markup tag with the variable into the template. To learn more, see Email.
Preview of Promotion Reminder
Customize an Email Reminder Template
On the Admin sidebar, go to Marketing > Communications > Email Templates.
Click Add New Template.
In the Template list under Magento_Reminder, choose the Promotion Notification/Reminder template.
Click Load Template.
Follow the standard instructions to customize the template. | https://docs.magento.com/user-guide/marketing/email-reminder-templates.html | 2020-07-02T19:18:53 | CC-MAIN-2020-29 | 1593655879738.16 | [] | docs.magento.com |
The Knowledge Base (KB) is a compiled collection of files, such as event processing rules, class definitions, and executables, organized in a directory structure. The KB files are loaded by a cell at start time. The KB instructs the cell how to format incoming event data, process received events, and display events in a console. Although many KBs can exist within a distributed environment, each cell can be associated with only one KB at a time.The KB is similar to a script, and the cell is the engine that runs the script.
Managing cells
Developing | https://docs.bmc.com/docs/display/tsim107/Cell+KB+Reference+Guide | 2020-07-02T19:49:02 | CC-MAIN-2020-29 | 1593655879738.16 | [] | docs.bmc.com |
Msmq
Binding Base. Max Received Message Size Property
Definition
Gets or sets the maximum size, in bytes, for a message that is processed this binding. The default value is 65,536 bytes.
Remarks
This bound on message size is intended to limit exposure to Denial of Service (DoS) attacks. | https://docs.microsoft.com/en-us/dotnet/api/system.servicemodel.msmqbindingbase.maxreceivedmessagesize?view=netframework-4.8 | 2020-07-02T19:43:52 | CC-MAIN-2020-29 | 1593655879738.16 | [] | docs.microsoft.com |
Code
Pages Encoding Provider Class
Definition
Provides access to an encoding provider for code pages that otherwise are available only in the desktop .NET Framework.
public ref class CodePagesEncodingProvider sealed : System::Text::EncodingProvider
public ref class CodePagesEncodingProvider sealed
public sealed class CodePagesEncodingProvider : System.Text.EncodingProvider
[System.Security.SecurityCritical] public sealed class CodePagesEncodingProvider
type CodePagesEncodingProvider = class inherit EncodingProvider
type CodePagesEncodingProvider = class
Public NotInheritable Class CodePagesEncodingProvider Inherits EncodingProvider
Public NotInheritable Class CodePagesEncodingProvider
- Inheritance
- CodePagesEncodingProvider
- Inheritance
-
- Attributes
-
Remarks
The .NET Framework for the Windows desktop supports a large set of Unicode and code page encodings. .NET Core, on the other hand, supports only the following encodings:.
Other than code page 20127, code page encodings are not supported.
The CodePagesEncodingProvider class extends EncodingProvider to make these code pages available to .NET Core. To use these additional code pages, you do the following:
Add a reference to the System.Text.Encoding.CodePages.dll assembly to your project.
Retrieve a CodePagesEncodingProvider object from the static CodePagesEncodingProvider.Instance property.
Pass the CodePagesEncodingProvider object to the Encoding.RegisterProvider method.
After an EncodingProvider object is registered, the encodings that it supports are available by calling the overloads of Encoding.GetEncoding; you should not call the EncodingProvider.GetEncoding overloads. | https://docs.microsoft.com/en-us/dotnet/api/system.text.codepagesencodingprovider?view=netcore-3.1 | 2020-07-02T19:34:32 | CC-MAIN-2020-29 | 1593655879738.16 | [] | docs.microsoft.com |
In most cases handlers are meant to modify the internal state of an application based on the received message. In a messaging system it is critical to make sure the state change is persisted exactly once. The scenarios below discuss in detail how NServiceBus transaction and persistence settings affect the way business data is stored.
Synchronized storage session
Synchronized storage session is NServiceBus's built-in implementation of Unit of Work pattern. It provides a data access context that is shared by all handlers that process a given message. The state change is committed after the execution of the last handler, provided that there were no exceptions during processing. The synchronized storage session is accessible via
IMessageHandlerContext:
public Task Handle(MyMessage message, IMessageHandlerContext context) { var session = context.SynchronizedStorageSession .MyPersistenceSession(); //Business logic return Task.CompletedTask; }
The synchronized storage session feature is supported by most NServiceBus persistence packages:
Synchronized storage session by itself only guarantees that there will be no partial failures i.e. cases where one of the handlers has modified its state while another has not. This guarantee extends to sagas as they are persisted using the synchronized storage session.
However, the synchronized storage session does not guarantee that each state change is persisted exactly once. To ensure exactly-once message processing, synchronized storage session has to support a de-duplication strategy.
Message de-duplication strategies
NServiceBus supports multiple message de-duplication strategies that suit a wide range of message processing and data storage technologies.
Local transactions
SQL Server transport is unique among NServiceBus transports as it allows using a single SQL Server transaction to modify the application state and send/receive messages. Both NHibernate and SQL persistence automatically detect if the message processing context contains an open transaction. If this transaction points to a database that the persister is configured to use, the synchronized storage session wraps that transport transaction. As a result, the state changes requested by the handlers are committed atomically when consuming the incoming message and sending all outgoing messages. This guarantees that no duplicate messages are created in the system.
SQL persistence can be used in the shared local transaction mode in
SendsAtomicWithReceive and
TransactionScope transaction modes. NHibernate persistence can only shared the transaction context with the transport if configured to use
TransactionScope transaction mode. In both cases, however, the transaction will not be escalated to a distributed transaction.
Distributed transactions
Distributed transactions are atomic and durable transactions that span multiple transactional resources (like databases or queues). By enlisting both transport and persistence into the same distributed transaction NServiceBus can guarantee exactly-once message processing by preventing duplicate messages from being created.
Distributed transactions are supported by the following transport and persistence components:
- SQL persistence
- NHibernate persistence
- SQL Server transport
- MSMQ transport through distributed transactions
In order to use this mode, the transport must be configured to use
TransactionScope mode. When using the SQL Server transport, the distributed transaction mode allows using separate SQL Server instances for message stores (queues) and for business data.
Outbox
The outbox is a pattern that provides exactly-once message processing experience even when dealing with transports and databases that don't support distributed transactions, such as RabbitMQ and MongoDB. This is done by storing the incoming message ID and the outgoing messages in the same transaction as the business state change.
The outbox can be used with any transport and with any persistence component that supports synchronized storage sessions.
Instead of preventing the duplicates, the outbox detects them and ensures that the effects of processing duplicate messages are ignored and not persisted.
Manual de-duplication
In situations where neither of the built-in de-duplication strategies can be applied, the de-duplication of messages must be handled at the application level, in the message handler itself. In these cases the synchronized storage session should not be used and each handler should guarantee the idempotence of its behavior.
Idempotence caveats
Message-processing logic is idempotent if it can be applied multiple times and the outcome is the same as if it were applied once. The outcome includes both the application state changes and the potential outgoing messages sent. Consider the following pseudocode that demonstrates how not to implement idempotent message handling:
public async Task Handle(MyMessage message, IMessageHandlerContext context) { if (IsDuplicate(message)) { return; } await context.Send(new MyOutgoingMessage()); await ModifyState(); }
and think about the behavior of the message processing:
- NServiceBus by default defers sending messages until the message handler has finished so the behavior of the code above is as if the call to
Sendwas after the call to
ModifyState
- If outgoing messages are sent before the state change is committed (e.g if the code above used immediate dispatch) there is a risk of creating ghost messages -- messages that carry the state change that has never been made durable
- If outgoing messages are sent after the state change is committed there is risk of message loss if the send operation fails. To prevent this, the outgoing messages must be re-sent even if it appears to be a duplicate
- If re-sending messages is implemented, multiple copies of the same message may be sent to the downstream endpoints
- If message identity is used for de-duplication, message IDs must be generated in a deterministic manner
- If outgoing messages depend on the application state, the code above is incorrect when messages can get re-ordered (e.g. by infrastructure failures, recoverability or competing consumers) | https://particular-docs.azurewebsites.net/nservicebus/handlers/accessing-data?version=core_6 | 2020-07-02T18:59:04 | CC-MAIN-2020-29 | 1593655879738.16 | [] | particular-docs.azurewebsites.net |
Directory Structure of WSO2 Products¶
All WSO2 products are built on top of the Carbon platform. The directory structure described below is the structure that is inherited by all Carbon-based WSO2 products. However, note that each product may contain folders and files that are specific to the product, in addition to what is described below.
Tip
Top
<PRODUCT_HOME> refers to the root folder of the WSO2 product distribution.
<PROFILE_HOME> refers to the root directory of other profiles that are shipped as separate runtimes with a product. | https://apim.docs.wso2.com/en/3.0.0/reference/guides/directory-structure-of-wso2-products/ | 2020-07-02T19:39:48 | CC-MAIN-2020-29 | 1593655879738.16 | [] | apim.docs.wso2.com |
.js. After showing you how to deploy your application in a Kubernetes cluster using the Bitnami Node.js Helm chart (which includes MongoDB by default), this guide also explores how to modify the source code and publish a new application release in Kubernetes using the Helm CLI.
Assumptions and prerequisites
This guide focuses on deploying a custom MEAN application in a Kubernetes cluster. This guide makes the following assumptions:
- You have already built a custom MEAN application and have the application source code in a GitHub repository.
- You have a Kubernetes cluster running with Helm v3.x installed.
- You have the kubectl command line (kubectl CLI) installed.
This guide assumes that you already have your MEAN application code in GitHub but, if you don't, you can fork this example application. This example application uses MEAN and is referenced in the Create a MEAN Stack Google Map App tutorial.
Step 1: Adapt the application source code
As a first step, you must adapt your application's source code to expose the MongoDB parameters as environment variables so that you can later connect the application with the Bitnami Node.js Helm chart. The steps below explain how to do this for the example application which you should have forked into your own repository.
Edit the app/config.js file so that it looks like this and then save your changes:
module.exports = { bitnami: { name: "MongoDB Service", url: "mongodb://" + process.env.DATABASE_USER + ":" + process.env.DATABASE_PASSWORD + "@" + process.env.DATABASE_HOST + "/" + process.env.DATABASE_NAME, port: process.env.DATABASE_PORT } };
Change the MongoDB connection string in your custom application in a similar manner.
Next, edit the server.js file and update the connection URL as follows:
... // Express Configuration // ---
Change the connection URL in your custom application in a similar manner if needed.
Check and update application dependencies to ensure it is using the latest and most stable version of each package by editing the package.json file. For the example application, update the dependencies as below:
"dependencies": { "body-parser": "^1.19.0", "express": "^4.17.1", "jsonwebtoken": "^5.0.2", "method-override": "^3.0.0", "mongoose": "^5.6.1", "morgan": "^1.9.1" }
Step 2: Deploy the example application in Kubernetes
Before performing the following steps, make sure you have a Kubernetes cluster running, and that Helm and Tiller are installed correctly. For detailed instructions, refer to our starter tutorial.
Once your application code has been adapted, the next step is to deploy it on your Kubernetes cluster. Bitnami's Node.js Helm chart makes it very easy to do this, because it has built-in support for Git repositories and also installs MongoDB by default. So, to deploy the example application using the current Helm chart, follow these steps:
Make sure that you can to connect to your Kubernetes cluster by executing the command below:
kubectl cluster-info
Deploy the application by executing the following (Replace the my-mean-app example name with the name you want to give your application and the GitHub repository URL with the correct URL for your application repository):
helm install my-mean-app --set repository= bitnami/node
This will create two pods within the cluster, one for the MongoDB service and one for the application itself.
Once the chart has been installed, you will see a lot of useful information about the deployment. The application won't be available until database configuration is complete. Run the kubectl get pods command to see pod status and get a list of running pods:
kubectl get pods
To obtain the application URL, run the commands shown in the "Notes" section. For example, the command below will allow you to access the application by browsing to the URL on your host:
kubectl port-forward --namespace default svc/my-mean-app-node 8080:80
Here is an example of what you should see:
Congratulations! You've successfully deployed your MEAN application on Kubernetes.
HTML5 Geolocation is a technology that offers location services for web applications. It's included by default on all major browsers, but for security reasons, it may require user action to work. Select the option "Always allow on this site" in Google Chrome or click "Allow Location Access" in Mozilla Firefox.
Step 3: Update the source code and re-deploy the application
To re-deploy a modified version of the application, you need to carry out a few basic steps: change your application source code, commit your changes and update the chart.
Edit the file public/index.html and in the Header-Title section, change the page title
<title>The Scotch MEAN MapApp</title>
to
<title>My new MEAN MapApp</title>
Execute the helm upgrade command to update your Helm chart:
helm upgrade my-mean-app bitnami/node --recreate-pods
Refresh your browser and you will see the modified title on the application welcome page:
Follow these steps every time you want to update and re-release your application.
Useful links
To learn more about the topics discussed in this guide, use the links below: | https://docs.bitnami.com/tutorials/deploy-mean-application-kubernetes-helm/ | 2020-07-02T20:16:59 | CC-MAIN-2020-29 | 1593655879738.16 | [array(['/tutorials/_next/static/images/node-mean-3-65baddeab195fe0ff849c65ab84a0fa1.png',
'Access application URL'], dtype=object)
array(['/tutorials/_next/static/images/node-mean-4-8f519e364320084d59a690d7d3a01521.png',
'Access application release'], dtype=object) ] | docs.bitnami.com |
August 2012 Recommendations
Each.
- Join the East Region partner call on Friday, August 10 at 9:00am Eastern Time to learn how to get ready for new partner opportunities.
-.
- Starting September 5 at 11:00am Eastern Time, participate in Microsoft Community Connections Office Hours. Learn how you can take part in MCC to increase your local presence and sales.
-. | https://docs.microsoft.com/en-us/archive/blogs/uspartner_eastregion/august-2012-recommendations | 2020-07-02T20:06:23 | CC-MAIN-2020-29 | 1593655879738.16 | [] | docs.microsoft.com |
Recipys¶
Welcome to Recipys, the project of two CS students and friends who enjoy cooking!
This project is on github and is hosted by readthedocs.org.
If you have suggestions for a recipie and are familiar with git, use the template to get started.
If you are unfamiliar with git, feel free to open an issue on the issue tracker with the details of the recipe and one of us will add it to the repo.
Also feel free to make note of any typographical errors, dead links or suggestions for existing recipes!
Enjoy! | https://recipys.readthedocs.io/en/latest/ | 2020-07-02T18:23:54 | CC-MAIN-2020-29 | 1593655879738.16 | [] | recipys.readthedocs.io |
Using Parent Organizations
You can optionally set up your multi-tenant environment with an organizational hierarchy that uses parent organizations at the top level and has regular organizations grouped as children under those parent organizations.
Although organizations don't have to be associated with parent organizations, implementing parent organizations can be useful because Commander administrators and Service Portal users can perform analysis and reporting tasks on the aggregated data of all of the VMs and services owned by the child organizations that belong to a parent organization. (For information on how to assign ownership to VMs and services, see Assigning Service Ownership.)
Note: Unlike regular organizations, parent organizations can't be assigned quota, ownership, or workflow assignments.
Parent Organization view in Service Portal
Administrators can use parent organizations to delegate analysis and reporting tasks for the aggregated data of children organizations to Service Portal users without providing them administrative powers. The Parent Organization view provides a Service Portal user a limited view for searching and reporting purposes only (for example, a user can't access the Dashboard, Service Catalog, or Service Request pages). Also for some pages that the user has access to, no administrative actions are available (for example, the My Resources tab doesn't offer Recommendations, Actions, Reset OS drop downs and buttons). However, with this view, a user does have sufficient access to run some reports, search for VMs owned by the child organizations or by themselves, and run some cost analytics.
In this topic:
Parent organization example
For an example of how to use parent organizations, you could create an "Engineering" parent organization and add "Development", "QA", and "IT" organizations as children to it. Then you could create a "Service" parent organization and add "Sales", "Support" and "Training" organizations as children to it.
Note: When you use parent organizations, you may only assign a child organization to one parent organization. For example, "Support" can't be a child of both "Engineering" and "Service". Also, parent organizations can only have regular organizations as children — you can't assign parent organizations to other parent organizations.
With this organizational structure, if the user "devmanager" belongs to the "Engineering" parent organization, "devmanager" can sign in to the Service Portal. With the "Engineering" parent view, "devmanager" will be able to see cost analytics and perform searches for the VMs and services owned by the child organizations "Development", "QA", and "IT" organizations. The data that's returned is aggregated across all child organizations that belong to the "Engineering" parent in this example.
Note: If a user belongs to a parent organization but not a child organization, they won't be able to switch to a child organization, but they will still see the aggregate of its data.
Users may belong to one or more parent organizations (for example, the user "devmanager" may belong to both "Engineering" and "Service"). If a user belongs to multiple parent organizations, when they sign in to the Service Portal, those users will be able to switch between the assigned parent organizations. In the example below, the "devmanager" is currently viewing data for the "Engineering" parent organization but can switch to the view for the "Service" parent organization.
Notice that the views that parent organizations provide are in addition to those users have as members of other organizations. That is, users that are members of other organizations will also be able to view their specific services and all organization services depending on the Service Portal permissions they are granted.
Adding parent organizations and assigning children
You can add new parent organizations and assign child organizations and/or members to them.
To add a new parent organization or edit an existing parent organization:
- Click the Parent Organizations tab.
- On the Parent Organizations page, click Add.
- In the Configure Parent Organization dialog, provide a descriptive name for the organization (for example, "Engineering").
Notes:
- It's possible for a parent organization to have the same name as a child organization because they are separate objects. However, to avoid confusion, you should use distinct names.
- Although you can click Finish at this point to create a parent organization with no child organizations or members, you'll typically add them at this point. To add child organizations and/or new users or groups as members of the parent organization, continue following the steps below.
- To configure which organizations you want to belong to the parent organization, do the following:
- To add child organizations to the parent organization, click Add. Then in the Add Child Organization dialog, select one or more existing organizations, and click Add.
Child organizations may only be assigned to one parent. Therefore, if any of the organizations you want to add already belong to a parent organization, you'll be prompted to confirm that you want to change the current parent organization assignment.
- To remove child organizations already associated with the parent organization, select one or more organizations, and click Remove.
- To add groups or users that have already been added to Commander, click Add Existing User. In the displayed dialog, select one or more groups and users, then click Add.
Note: You can enable the Primary contact of this organization option for a group or user that's added to the parent organization. When this option is enabled, you can configure workflows to automatically send emails to the organization member. The most common reason for emailing a primary contact of a parent organization is for a service request approval. If there are multiple primary contacts for each parent organization, multiple individuals will automatically receive approval emails.
- To remove groups and/or users that have been added to a parent organization, select one or more users and groups, click Delete User. In the Remove User dialog, click Yes.
Caution: If this user doesn't have another role, the user will be completely deleted from the system. (It's also possible to delete your own account.)
If the user is a member of another organization or has an individual role outside of this organization, the user will be removed from this organization but won't be deleted from the system.
Deleting parent organizations
To delete an organization:
- Click the Parent Organizations tab, then select a listed parent organization and click Delete.
Caution: Deleting a parent organization also deletes any of its members who don't have another role. To prevent this, do one of the following before deleting the parent organization: Assign these members an individual role from Configuration > Identity and Access, or add these members to another organization.
Creating reports that filter on parent organization
When you add parent organizations, a Parent Organization advanced filter will become available for searches and reports in Commander. This filter allows you to create reports that will provide information on the aggregated data of all the organizations that belong to the parent organization.
Some of the reports that you can use the parent organization filter on include:
- Cloud Billing Report
See Example: Generating a Cloud Billing report for parent organizations for information on how to use the parent organization filter with this report.
- VM Billing Report
- Software Summary Report
- Guest OS Disk Usage Report
- Guest OS Summary Report
- Offline VM Aging Costs Report
- Offline VM Aging Savings Report
- Over Provisioned Disk Summary Report
- Reclamation Report
- Snapshot Summary Report
- VM Comparative Economics Report
- VM Population Trending Report
Example: Generating a Cloud Billing report for parent organizations
One of the reasons for implementing parent organizations is to be able to generate billing record reports for parent organizations that include all of the billing data for the parent's child organizations. This can be especially useful if you add a large number of child organizations under a parent, because you won't have to generate billing reports for each of the child organizations and then manually combine the data.
The following example shows how a parent organization filter can be used in a Cloud Billing Report to view the aggregated billing data for all of the parent organization's child organizations.
- Go to Reports > Chargeback and IT Costing > Cloud Billing.
- On the Report Generator pane, for Cloud Type, select whether you want to report on public and private clouds, just public clouds, or just private clouds.
Leave the default value of All Cloud Types because the child organizations that the report will cover own both public and private clouds. The resulting report will show the billing for all of the VMs.
- On the Report Generator pane, for Organization, select All VMs/Services.
When this option is enabled, the report won't be limited to a specific organization. Instead, it will cover all of the VMs and services owned by the child organizations for the parent organization.
- Click Add Advanced Filters.
- From Select a Property, select Parent Organization Name, leave the default operator equals, then select the parent organization that you want to generate a report for.
In the following Report Generator setup, the "Engineering" parent organization contains the "Development", "QA", and "IT" child organizations.
- Click Generate.
A billing report is generated that displays the aggregated billing data for the "Development", "QA", and "IT" child organizations of the parent organization "Engineering".
- To view the report, select it from the Generated Reports list, and click View. | https://docs.embotics.com/commander/using-parent-organizations.htm | 2020-07-02T18:22:11 | CC-MAIN-2020-29 | 1593655879738.16 | [array(['Images/parent-org-roles.png', 'Parent Org Views'], dtype=object)] | docs.embotics.com |
Neutron multiple SDNs
Blueprint: neutron-multiple-sdns
Problem Description
Currently OpenStack-Helm supports OpenVSwitch as a network virtualization engine. In order to support many possible backends (SDNs), changes are required in the Neutron chart and in deployment techniques. OpenStack-Helm can support every SDN solution that has a Neutron plugin, provided either as a core_plugin or as a mechanism_driver.
The Neutron reference architecture provides the OpenVSwitch (OVS) and linuxbridge (LB) mechanism_drivers with the ML2 core_plugin framework.
Other networking services provided by Neutron are:
L3 routing - creation of routers
DHCP - auto-assignment of IP addresses and DNS info
Metadata - proxy for the Nova metadata service
When introducing a new SDN solution, consider how the above services are provided. It may be necessary to disable the built-in Neutron functionality.
Proposed Change
To be able to install Neutron with any of multiple possible SDNs as the networking plugin, the Neutron chart should be modified to enable installation of base services with a decomposable approach. This means that the operator can define which components from the base Neutron chart should be installed and which should not. This, plus proper configuration of the Neutron chart, would enable the operator to flexibly provision OpenStack with the chosen SDN.
Every Kubernetes manifest inside the Neutron chart can be enabled or disabled. That provides the flexibility for the operator to choose which Neutron components are reusable with different types of SDNs. For example, neutron-server, which serves the API and configures the database, can be used with different types of SDN plugins, and the provider of that SDN chart would not need to copy all the logic from the base Neutron chart to manage the API and database.
The proposed change is to add a new section to neutron/values.yaml with boolean values describing which of Neutron's Kubernetes resources should be enabled:
manifests:
  configmap_bin: true
  configmap_etc: true
  daemonset_dhcp_agent: true
  daemonset_l3_agent: true
  daemonset_metadata_agent: true
  daemonset_ovs_agent: true
  daemonset_ovs_db: true
  daemonset_ovs_vswitchd: true
Then, inside the Kubernetes manifests, add a global if statement that decides whether the given manifest should be declared on the Kubernetes API, for example in neutron/templates/daemonset-ovs-agent.yaml:
{{- if .Values.manifests.daemonset_ovs_agent }}
# Licensed under the Apache License, Version 2.0 (the "License");
...
        - name: libmodules
          hostPath:
            path: /lib/modules
        - name: run
          hostPath:
            path: /run
{{- end }}
If .Values.manifests.daemonset_ovs_agent is set to false, the Neutron OVS agent will not be launched. That way, another type of L2 or L3 agent can be run on the compute node.
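As a concrete illustration, a deployer pairing the Neutron chart with an SDN that ships its own per-node agent might apply a values override such as the following. This is a minimal sketch: it uses only the manifests keys proposed above and assumes the SDN chart itself supplies the replacement agent daemonset.

manifests:
  # do not launch the default Neutron OVS L2 agent
  daemonset_ovs_agent: false
  # skip OVS entirely if the SDN does not rely on it
  daemonset_ovs_db: false
  daemonset_ovs_vswitchd: false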
To enable a new SDN solution, a separate chart should be created, which would handle the deployment of the service, setting up the database, and any related networking functionality that the SDN provides.
Use case
Let's consider how a new SDN can take advantage of the disaggregated Neutron services architecture. The first assumption is that the neutron-server functionality is common to all SDNs, as it provides the networking API, database management, and Keystone interaction. The required modifications are:
Configuration in neutron.conf and ml2_conf.ini
Providing the neutron plugin code.
The code can be supplied as a modified neutron-server image, or the plugin can be mounted into the original image. The manifests section in neutron/values.yaml should enable the components below:
manifests:
  # neutron-server components:
  configmap_bin: true
  configmap_etc: true
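The plugin registration itself is done through the configuration files listed above. Assuming the chart exposes neutron.conf and ml2_conf.ini overrides through its values (the exact key layout shown here, and the example_sdn driver name, are illustrative assumptions rather than part of this spec), the override for an out-of-tree mechanism driver might look like:

conf:
  neutron:
    DEFAULT:
      core_plugin: ml2
  plugins:
    ml2_conf:
      ml2:
        # hypothetical out-of-tree driver shipped with the SDN chart
        mechanism_drivers: example_sdn
        type_drivers: flat,vlan,vxlan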
Next, Neutron services such as L3 routing, DHCP, and metadata serving should be considered. If the SDN provides its own implementation, Neutron's default one should be disabled:
manifests:
  daemonset_dhcp_agent: false
  daemonset_l3_agent: false
  daemonset_metadata_agent: false
Provisioning of those services should then be included inside the SDN chart.
The last thing to be considered is VM network virtualization. What engine does the SDN use? Is it OpenVSwitch, Linux bridges, or L3 routing (no L2 connectivity)? If the SDN uses OpenVSwitch, it can take advantage of the existing OVS daemonsets. Any modification required to the OVS manifests can be included in the base Neutron chart as a configurable option. In that way, the features of OVS can be shared between different SDNs. When using OVS, the default Neutron L2 agent should be disabled, but OVS-DB and OVS-vswitchd can be left enabled.
manifests:
  # Neutron L2 agent:
  daemonset_ovs_agent: false
  # OVS tools:
  daemonset_ovs_db: true
  daemonset_ovs_vswitchd: true
Alternatives
An alternative to the decomposable Neutron chart would be to copy the whole Neutron chart and create spin-offs with the new SDN enabled. This approach has the drawback of maintaining the whole Neutron chart in many places, and the copies of standard services may fall out of sync with OSH improvements. This implies a constant maintenance effort to keep them up to date.
Implementation
Testing
A first reasonable test in the gates would be to set up Linux Bridge and check whether VM network connectivity is working.
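A hedged sketch of what such a gate values override might look like is shown below. The manifests keys follow the section proposed above; the daemonset_lb_agent key and the conf key layout are assumptions for illustration, since neither is defined by this spec.

# gate values override enabling the Linux Bridge reference path
manifests:
  daemonset_ovs_agent: false
  daemonset_ovs_db: false
  daemonset_ovs_vswitchd: false
  daemonset_lb_agent: true           # hypothetical manifest for an LB agent
conf:
  plugins:
    ml2_conf:
      ml2:
        mechanism_drivers: linuxbridge   # assumed key layout

The gate job would then boot two VMs on the same tenant network and verify connectivity between them (for example, with a simple ping).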
Documentation Impact
Documentation of how a new SDN can be enabled and how Neutron should be configured. Also, for each new SDN that is incorporated, an architecture overview should be provided.