content
stringlengths 0
557k
| url
stringlengths 16
1.78k
| timestamp
timestamp[ms] | dump
stringlengths 9
15
| segment
stringlengths 13
17
| image_urls
stringlengths 2
55.5k
| netloc
stringlengths 7
77
|
---|---|---|---|---|---|---|
Voronoi treemap
Overview
In a Voronoi treemap, you can see the values of the columns you add, portrayed as a tessellation of polygons whose proportions depend on a numeric column you choose. These polygons may be subdivided into smaller polygons and constitute a hierarchical structure with as many levels as those into which the data is divided (there is a legend above the chart explaining the hierarchy of data).
What data do I need for this widget?
The option to create this chart will be disabled unless your query contains at least two columns, one of them with numeric values. Furthermore, to show meaningful content on the chart, you must group your data by at least two keys using a no-time option. Also, it is highly advisable to add some aggregation functions to provide mathematical significance to the variables you want to analyze.
If you grouped using a time option, the diagram will only show the data for the latest period available for the time range specified.
Creating a Voronoi treemap
Here we describe how to create this chart using examples. Let's go step by step through the process:
Go to Data Search and open the required table.
Perform the required operations to get the data you want to use in the chart.
- Click the gear icon on the toolbar and select Charts → Diagram → Voronoi treemap.
Click and drag the column headers to the corresponding fields.
The Voronoi treemap is displayed. This is a visual depiction of the average response length of the connections to Devo in each city and with each response time over one day. the average response length (avg_responseLength column) per city over the last day and comparison of three of the cities.
- Press G and select Knoxville and Alcorcon by pressing CTRL + clicking the cells. Move the mouse over the Madrid cell to compare it to the other ones.
The cells selected will appear at the lower part of the right panel while the cell over which you hover will appear at the top, showing in both cases their information and aggregated values. The one at the top will be compared to those at the bottom, showing their differences in red or green (fewer or worse).
- Hit the G key again when you finish to go back and remove the comparison panel to the right.
Example 2
Visualization of the average response length (avg_responseLength column) per city over the last day, ordered by size and colored using a spectrum based on the average response length (avg_responseLength column).
- Press D to apply the Squarified Ordered visualization.
- Press N to calculate cell size using their weight.
- Select avg_responseLength in the Color by field.
The spectrum of colors shows the average response length (maximum is red, minimum is green) and the cell size represents also the average response length (the larger they are, the longer the response). With this visualization, we establish a correlation between size and color, making it easier to spot potential problems by looking at big red cells.
Query example
You can recreate the example explained above with the data from the following query and mapping the fields as follows:
from siem.logtrust.web.activity group by city, responseTime every - select avg(responseLength) as avg_responseLength, count() as count
The following video shows how to create and use a Voronoi chart to analyze your data. It does so by comparing the two different versions available in Devo (Data Search and Activeboards) so that you can choose the best for you. | https://docs.devo.com/confluence/ndt/v7.7.0/searching-data/working-in-the-search-window/generate-charts/voronoi-treemap | 2022-05-16T18:41:24 | CC-MAIN-2022-21 | 1652662512229.26 | [] | docs.devo.com |
Security
Password and certificate management are central to Jamf Connect's functionality. Jamf Connect uses standards-based technologies to connect to Active Directory or single sign-on (SSO). These interactions are completed with Kerberos, LDAP, and secure URL session connections with your cloud identity provider (IdP). Jamf Connect uses the versions of these tools included with macOS.
To obtain a better understanding of these interactions and ensure your information is secure, review the all security measures in Jamf Connect.
Disclaimer
Jamf Connect may allow users with the same username and password to log in to the incorrect local account. To ensure users can only log in to their account, a multifactor authentication (MFA) method is recommended. Jamf does not accept any responsibility or liability for any damages or security exploitations due to identically provisioned account credentials
Passwords
When a user logs in with Jamf Connect, their password is entered in a secure text field and never written to a disk outside the macOS keychain.
When Kerberos is used, the password is used with the
gss_aapl_initial_cred() API call, which authenticates the user and obtains a ticket granting ticket (TGT). When changing passwords, the same process is followed using the
gss_aapl_change_password() API call. Both API calls leverage Apple's implementation of Heimdal Kerberos.
All Kerberos actions are performed with Apple's APIs. The password is never cached with a "kinit" or other Kerberos command line interface (CLI) tools.
When integrated with Okta as an IdP, Jamf Connect uses the Okta Authentication API.
When integrated with other IdP providers, such as Azure AD, Jamf Connect uses the OpenID Connect (OIDC) protocol to communicate with the IdP.
All network connections are made using the macOS URL-loading API,
URLSession. All communications are secured with TLS to ensure they are not corrupted.
When Jamf Connect is finished with your password, the value is overwritten and deallocated.
For more information about OpenID Connect, see the following FAQ documentation from the OpenID Foundation.
For more information about the Okta Authentication API, see the Authentication API developer documentation from Okta.
Keychain Usage
If instructed, Jamf Connect stores the user’s password in their local keychain by using
SecKeychainAddGenericPassword()and other
SecKeychain API calls. The password is stored in the user's default keychain.
If a password is stored in the user's keychain and Kerberos is enabled, Jamf Connect will use that password on launch. If the password cannot authenticate the user, Jamf Connect deletes the password from the keychain to prevent lockouts. The user is then prompted to re-enter a valid password.
Active Directory Security
Jamf Connect does not require any security settings to be changed in Active Directory. Jamf Connect only uses SASL-authenticated binds when interacting with Active Directory. By default, Jamf Connect uses the user’s Kerberos ticket to encrypt any LDAP traffic with Active Directory. Jamf Connect can be configured to use SSL in addition to LDAP connections to Active Directory.
Certificates
Jamf Connect does not send user secrets to any service. The private key that generates your certificate signing request (CSR) never leaves the keychain. The process is only completed within Jamf Connect using Apple-provided
SecKeychain and
SecCertificate API calls.
Private keys are marked as non-exportable by default; however, you can use a preference key to change this setting.
When sending the CSR to a Windows certificate authority (CA), Kerberos authentication is used, and the CSR is sent via SSL. The resulting signed public key is retrieved using Kerberos and SSL and then matched with the private key in the keychain.
Network Connections
Jamf Connect does not search for incoming connections. All communications from Jamf Connect are outbound and use the highest available level of transport security.
User Space
Consider the following:
Jamf Connect runs in user space and not as root. Therefore, it has no more privileges than the currently logged in user.
Jamf Connect's functionality is the same for local administrator and and standard macOS users. | https://docs.jamf.com/jamf-connect/2.4.1/documentation/Security.html | 2022-05-16T19:15:43 | CC-MAIN-2022-21 | 1652662512229.26 | [] | docs.jamf.com |
Storage creation with SnapDrive for UNIX
You can use SnapDrive for UNIX to create LUNs, a file system directly on a LUN, disk groups, host volumes, and file systems created on LUNs.
SnapDrive for UNIX automatically handles all the tasks needed to set up LUNs associated with these entities, including preparing the host, performing discovery mapping, creating the entity, and connecting to the entity you create. You can also specify which LUNs SnapDrive for UNIX uses to provide storage for the entity you request.
You do not need to create the LUNs and the storage entity at the same time. If you create the LUNs separately, you can create the storage entity later using the existing LUNs. | https://docs.netapp.com/us-en/snapdrive-unix/aix/concept_storagecreation_with_snapdrive_forunix.html | 2022-05-16T18:10:52 | CC-MAIN-2022-21 | 1652662512229.26 | [] | docs.netapp.com |
[−][src]Crate rsevents
rsevents is an implementation of WIN32's auto- and manual-reset events for the rust world.
Events are synchronization primitives (i.e. not implemented atop of mutexes) used to either
create other synchronization primitives with or for implementing signalling between threads.
Events come in two different flavors:
AutoResetEvent and
ManualResetEvent. Internally,
both are implemented with the unsafe [
RawEvent] and use the
parking_lot_core crate to take
care of efficiently suspending (parking) threads while they wait for an event to become
signalled.
An event is a synchronization primitive that is functionally the equivalent of an (optionally gated) waitable boolean that allows for synchronization between threads. Unlike mutexes and condition variables which are most often used to restrict access to a critical section, events are more appropriate for efficiently signalling remote threads or waiting on a remote thread to change state. | https://docs.rs/rsevents/latest/rsevents/ | 2022-05-16T17:40:56 | CC-MAIN-2022-21 | 1652662512229.26 | [] | docs.rs |
Installing it unnecessary to reboot your macOS devices after updating or uninstalling the Collector.
Applies to macOS platform primary Appliance:
Log in to the Web Console of the primary Appliance as admin.
Select the Appliance tab at the top of the Web Console.
Click Collector management in the left-hand side menu. Engine (as specified as External DNS name of the Engine in the Web Console).
TCP port number for the connection of the Collector with the Appliances (default 443).
Optional: UDP port number where the Engine is listening for the Collector (default 999, but prefer TCP-only). (recommended) Data over TCP: Tick to send all Collector data through the TCP channel. Ticked by default.
TCP port: Type in the port number on which the Appliance listens for TCP connections from the Collector. Default is 443. A custom TCP port must be in the non-privileged range (port number above 1024).
UDP port: Type in the port number on which the Appliance listens for UDP connections from the Collector. Only enabled if Data over TCP is not checked.
Still on Personalization, configure the proxy settings of the Collector:
Tick Automatic proxy for the Collector to take its configuration from a proxy auto-configuration (PAC) file.
In PAC address, type in the URL of the file that determines the proxy to use.
Tick Manual proxy for the Collector to use the following proxy settings:
Address: Type in the FQDN or IP address of the proxy.
Port: Type in the port number where the proxy is listening for connections.
In a second step, configure the other settings of the Collector:
Customer Key: Copy and paste the contents of the the file that holds the Customer Key of the primary Appliance.
Root CA: Copy and paste the contents of the file that holds the default Root Certificate of the primary Appliance. If you leave this field empty, the Collector assumes that you replaced the server certificates in the Engine and falls back to using the Keychain Access for verifying the certificates presented by the Appliance. You must replace the certificates to communicate via the default TCP port 443.
Optional Collector tag: Type in an integer number (0 to 2147483647) that identifies a group of Collectors. The Collector tag is visible in the Finder and is useful for defining the entities to build up hierarchies.
Optional Collector string tag: Type in a label (max 2048 characters) that identifies a group of Collectors. The Collector string tag is visible in the Finder and is useful for defining the entities to build up hierarchies.
Optional: Tick Assignment service if you activated the rule-based assignment of Collectors.
Optional: Tick Nexthink Engage to activate the features that let you engage with the end user via campaigns (requires the purchase of the Nexthink Engage product).
Optional: Select the execution policy of scripts included in remote actions:
Disabled (default): the Collector runs no remote action on the device.
Unrestricted: the Collector runs any remote action on the device, regardless of the digital signature of its script.
Trusted publisher: the Collector runs on the device only those remote actions with a Bash script that is signed by a Mac identified developer.
Trusted publisher or Nexthink: the Collector runs on the device only those remote actions with a Bash script that is signed either by Nexthink or by a Mac identified developer.:
address: the FQDN or IP address of the Appliance.
port: the port number of the UDP channel in the Appliance.
tcp_port: the port number of the TCP channel in the Appliance.
rootca: the path to the Root Certificate.
key: the path to the Customer Key file
(Optional) engage: whether to enable the Engage campaigns or not (default is disable).
(Optional) data_over_tcp: whether to enable the sending of all data over the TCP channel (default is enable).
(Optional) use_assignment: whether to enable automatic collector assignment (default is disable).
(Optional) ra_execution_policy: whether to enable the Act remote actions or not with the following options:
disabled (default)
The Collector runs no remote action on the device.
unrestricted
The Collector runs any remote action on the device, regardless of the digital signature of its script.
signed_trusted
The Collector runs on the device only those remote actions with a Bash script that is signed by a Mac identified developer.
signed_trusted_or_nexthink
The Collector runs on the device only those remote actions with a Bash script that is signed either by Nexthink or by a Mac identified developer.
(Optional) proxy_pac_address: provide the URL of a PAC address for automatic configuration of proxy settings.
(Optional) proxy_address: provide the FQDN or IP address of a proxy for manual configuration of proxy settings.
(Optional) proxy_port: provide the port number where a proxy is listening for connections for manual configuration of proxy settings.
(Optional) tag: integer number (0 to 2147483647) to identify an individual or batch installation of Collectors.
(Optional) string_tag: label (max 2048 characters) to identify an individual or batch installation of Collectors.
For instance:
sudo ./csi -address <appliance_address>
-port <appliance_udp_port> -tcp_port <appliance_tcp_port>
-rootca <root_certificate_file> -key <customer_key_file>
-engage enable -data_over_tcp disable
-proxy_pac_address <pac_URL>
-proxy_address <proxy_FQDN_or_IP> -proxy_port <port_number>
-use_assignment enable -tag 1000 -string_tag Preproduction
The operations described in this article should only be performed by a Nexthink Engineer or a Nexthink Certified Partner.
If you need help or assistance, please contact your Nexthink Certified Partner. | https://docs-v6.nexthink.com/V6/6.30/Installing-the-Collector-on-macOS.330337339.html | 2022-05-16T18:38:08 | CC-MAIN-2022-21 | 1652662512229.26 | [] | docs-v6.nexthink.com |
Built-in Functions → month
month() is a conversion function that returns the month value of the current month.
Signature
month()
Returns
int representing the value of the current month.
Example
Identify the value of the current month, for the given date, 07/05/2021.
month()
The following table illustrates the behavior of the month() conversion function:
Use the following steps for detailed instructions on how to use the month function,
month(), to add the formula to the editor.
- Select Validate & Save.
- In the Properties panel, for Column Label, enter month().
- Name the insight Revenue Per Category.
- In the Action bar, select Save. | https://docs.incorta.com/5.1/references-built-in-functions-conversion-month/ | 2022-05-16T19:16:33 | CC-MAIN-2022-21 | 1652662512229.26 | [] | docs.incorta.com |
Section: Configure jAcl2.db
− Table of content
Before using jAcl2 API and its "db" driver (or the "dbcache" driver), you have to setup a database and fill it with elements composing rights.
Installation ¶
jAcl2.db driver (or jAcl2.dbcache) requires a database to work. You have to create it with the needed tables and setup a connection profile.
Connection configuration ¶
See the documentation about jDb setup.
If jAcl2 tables are not located in your default db profile, you should setup a
profile called
jacl2_profile, or an alias
jacl2_profile to an
existing profile. An example
profiles.ini.php:
[jdb:default] driver="mysql" database="jelix" host= "localhost" user= "jelix" password= "jelix" persistent= on force_encoding=true [jdb:jacl2_profile] driver="mysql" database="rights" host= "localhost" user= "jelix" password= "xilej" persistent= on force_encoding=true
jAcl2.db tables ¶
To create and initialise tables needed by the driver, you should install the module jacl2db.
# launch the configuration php dev.php module:configure jacl2db
If you want to initialize rights for a first user/group named "admin":
php dev.php module:configure -p defaultuser jacl2db
Then launch
php install/installer.php.
Once created, you can start configuring rights.
Rights configuration ¶
Now you configure jacl2db with its dedicated commands. They are prefixed by
acl2:,
acl2group: or
acl2user:.
In the following examples, with take "myapp" as the name of the application. Change it of course by the name of your application.
Note that you have a module, jacl2db_admin, which allow you to do everything described below with an interface, except the creation of rights.
Rights creation ¶
In jAcl2 rights, you define a right or a possible action on some data.
Imagine a CMS where you want to define rights about articles. You could define right names for some actions like reading, listing, creating, deleting, updating.
Concretely, you would define these rights:
- "cms.articles.read",
- "cms.articles.list",
- "cms.articles.create",
- "cms.articles.delete",
- "cms.articles.update"
Note that all right names here begin with a prefix, allowing to identify precisely what about it is. Using only "read" is not really explicit and may cause conflicts with rights defined for some modules. So in the right name, always add some words indicating what it is about precisely. Your code is then more readable.
Let's start by listing already existing rights:
php console.php acl2:rights-list
You should have an empty list:
+-------------------+----------+--------------------------------------+ | Rights Group | Right id | label key | +-------------------+----------+--------------------------------------+
A right record is a pair of an identifier and a label key. Label keys should be existing locale key identifiers.
Let's create our rights:
php console.php acl2:right-create "cms.articles.create" "cms~acl2.articles.create" php console.php acl2:right-create "cms.articles.update" "cms~acl2.articles.update" php console.php acl2:right-create "cms.articles.delete" "cms~acl2.articles.delete" php console.php acl2:right-create "cms.articles.list" "cms~acl2.articles.list" php console.php acl2:right-create "cms.articles.read" "cms~acl2.articles.read"
If you don't use a module allowing to manage rights with jAcl2 (like jacl2db_admin) then the locale key selector is not required. Just put any string of yours.
If the command fails, you have an error message, else the output is empty.
Now list again the rights:
$ php console.php acl2:rights-list +---------------+---------------------+--------------------------+ | Rights Group | Right id | label key | +---------------+---------------------+--------------------------+ | | cms.articles.create | cms~acl2.articles.create | | | cms.articles.delete | cms~acl2.articles.delete | | | cms.articles.list | cms~acl2.articles.list | | | cms.articles.read | cms~acl2.articles.read | | | cms.articles.update | cms~acl2.articles.update | +---------------+---------------------+--------------------------+
You can delete a right with the following command:
$ php console.php acl2:right-delete <right name>
User group creation ¶
A jAcl2.db right is a combination of a right name and a user group. So you have to
create user groups. Use the
acl2group: commmands type.
Let's create a writers group for our users. You should indicate an key and optionally a label.
$ php console.php acl2group:create "writers" "Writers"
Let's create a second group and make it the default one with
--default.
A default group is a group where every new user will be added to.
$ php console.php acl2group:create --default "readers" "Readers"
You can now list your groups with
acl2group:list:
$ php console.php acl2group:list +---------+---------+---------+ | Id | label | default | +---------+---------+---------+ | readers | Readers | yes | | writers | Writers | | +---------+---------+---------+
You can switch the "default" group with the
acl2group:default command:
$ php console.php acl2group:default readers # or $ php console.php acl2group:default --no-default readers
You can change a group name with
acl2group:name:
$ php console.php acl2group:name writers "Authors"
Or delete a group with
acl2group:delete (it doesn't delete users):
$ php console.php acl2group:delete writers
Managing users into groups ¶
In groups, you should add users. To add a user, you should declare him:
$ php console.php acl2user:register laurent
Note that it doesn't create the user into jAuth, just in jAcl2. A private group is created.
Then you can add him to a group. You should use the command
acl2user:addgroup
bye indicating the group name and the user.
$ php console.php acl2user:addgroup readers laurent
To remove a user from a group:
$ php console.php acl2user:removegroup laurent readers
To see the list of users of a group:
$ php console.php acl2user:list readers
To see the list of all users:
$ php console.php acl2user:list
Rights configuration ¶
You have every needed elements to assign a right. Let's go and execute some
acl2: commands.
You want to add readers the right to read and list articles. Let's associate
rights
cms.articles.list and
cms.articles.read to the readers group:
$ php console.php acl2:add readers "cms.articles.list" $ php console.php acl2:add readers "cms.articles.read"
Check rights list with
cl2:list command:
$ php console.php acl2:list +----------+------------+-------------------+----------+ | Group id | Group name | Right | Resource | +----------+------------+-------------------+----------+ | readers | Readers | cms.articles.list | - | | readers | Readers | cms.articles.read | - | +----------+------------+-------------------+----------+
The value
-for a resource means "no resource". So the indicated right is a right that is applied on any resource.
Now, you want to deal with writers and give them all rights on
cms.articles.
$ php console.php acl2:add writers "cms.articles.list" $ php console.php acl2:add writers "cms.articles.read" $ php console.php acl2:add writers "cms.articles.create" $ php console.php acl2:add writers "cms.articles.delete" $ php console.php acl2:add writers "cms.articles.update"
Again, let's list all rights:
$ php console.php acl2:list +----------+------------+---------------------+----------+ | Group id | Group name | Right | Resource | +----------+------------+---------------------+----------+ | readers | Readers | cms.articles.list | - | | readers | Readers | cms.articles.read | - | | writers | Writers | cms.articles.create | - | | writers | Writers | cms.articles.delete | - | | writers | Writers | cms.articles.list | - | | writers | Writers | cms.articles.read | - | | writers | Writers | cms.articles.update | - | +----------+------------+---------------------+----------+
However in your CMS you have an "advices" article which you want your readers
to edit. You should add the right to update this specific article to readers
group. Let's create a right on the resource "advices" with
acl2:add command:
$ php console.php acl2:add readers "cms.articles.update" "advices"
checking of rights list:
$ php console.php acl2:list +----------+------------+---------------------+----------+ | Group id | Group name | Right | Resource | +----------+------------+---------------------+----------+ | readers | Readers | cms.articles.list | - | | readers | Readers | cms.articles.read | - | | readers | Readers | cms.articles.update | advices | | writers | Writers | cms.articles.create | - | | writers | Writers | cms.articles.delete | - | | writers | Writers | cms.articles.list | - | | writers | Writers | cms.articles.read | - | | writers | Writers | cms.articles.update | - | +----------+------------+---------------------+----------+
You can also remove a right with
acl2:remove, by passing a user group and a right name
similarly to
acl2:add (and optionally a resource if one is involved).
Say you change your mind about the "advices" article, because there is too much crap ;-):
$ php console.php acl2:remove readers "cms.articles.update" "advices"
Once all rights are injected, your application is able to work following your rights rules.
You can change rights with a user interface like the one provided by the jacl2db_admin module. | https://docs.jelix.org/en/manual/components/rights/configuration | 2022-05-16T18:26:43 | CC-MAIN-2022-21 | 1652662512229.26 | [] | docs.jelix.org |
Running Plesk Behind a Router with NAT
If your Plesk server is behind NAT, you can match private IP addresses on the server to the corresponding public IP addresses via the Plesk interface. one or more public IP addresses to private ones, all A records in the DNS zones of all existing domains will be automatically re-mapped to the corresponding public IP addresses, and A records in the DNS zones of newly created domains will point to the corresponding (public) IP addresses after the domain creation. | https://docs.plesk.com/en-US/onyx/administrator-guide/plesk-administration/running-plesk-behind-a-router-with-nat.64949/ | 2022-05-16T19:00:26 | CC-MAIN-2022-21 | 1652662512229.26 | [] | docs.plesk.com |
Welcome to TapPay for Apple Pay!
Please register Apple developer account before integrate TapPay.
Register an account and log in to TapPay Portal. Follow the Getting Started section to set up the environment.
Follow the Frontend document and Backend document to set up your environment.
After you are done testing in the Sandbox environment and are ready to go live, submit your application or website in Portal by clicking the little paper airplane.Your production key will be provided upon approval.
If you have any questions, do not hesitate to send us an email: [email protected].
We hope you have as much fun integrating as we have developing.
Previous version Document download
Getting Started
Frontend
Backend | https://docs.tappaysdk.com/apple-pay/en/home.html | 2022-05-16T19:37:51 | CC-MAIN-2022-21 | 1652662512229.26 | [] | docs.tappaysdk.com |
!
Multi-region deployment
An SD-WAN appliance configured as Master Control Node (MCN) supports multi-region deployment. The MCN manages multiple Regional Control Nodes (RCNs). Each RCN, in turn, manages multiple client sites. The MCN can also be used to manage some of the client sites directly.
With MCN as the control node of the network and RCNs as the control nodes of the regions, SD-WAN can manage up to 6000 sites.
Multi-region deployment enables you to fragment a network into regions and set up a tiered network; such as branch (client) > RCN > MCN.
An MCN with a single region can be configured with a maximum of 550 sites. You can keep the existing sites in the default region and add new regions with RCNs and their sites for multi-region deployment.
The following table provides the list of platforms supported for configuring primary and secondary MCN/RCN.
NOTE
-
The Premium Edition (PE) appliance is formerly known as the Enterprise Edition (EE).
-
Use the Citrix SD-WAN 210 SE appliance as an MCN only in the SD-WAN Orchestrator managed networks.
To configure multi-region deployment for an SD-WAN network:
Navigate to the Global tab in the Configuration Editor. Select Regions. The default region configuration options are displayed.
You can change the name and description for the default region by editing it.
Click + Add to add a new region.
Enter a Name and Description for the region.
Enable Internal. Choose a routing domain.
Enter a Network address. Click Add. The network address is the IP address and mask for the subnet. The newly created region is added to the existing list of regions.
You can select the Default check box to use a desired region as the Default.
Note
You can clone MCN to a GEO or client site.
SD-WAN Center supports multi-region deployment. For more information, see SD-WAN Center Multi-Region Deployment and Reporting.
Change management summary view
When you perform the Change Management process for appliances configured in multi-region deployment, the change management summary table is displayed in the SD-WAN appliance GUI.
The Region column displays a list of regions currently configured in the network. You can view the change management summary for a specific region by selecting it in the summary table.
Default region summary:
Region Summary:
Note
In some instances, the Total Sites value displayed in the Global Multi-Region Summary table is less than the sum of the remaining columns.
For example, when a branch node is not connected, it is possible that the branch is counted twice; once as “Not Connected” and once as “Preparing/Staging.”. | https://docs.citrix.com/en-us/citrix-sd-wan/11-1/use-cases-sd-wan-virtual-routing/multi_region_deployment.html | 2022-05-16T18:22:44 | CC-MAIN-2022-21 | 1652662512229.26 | [] | docs.citrix.com |
Date: Mon, 16 May 2022 18:49:54 +0000 (GMT) Message-ID: <1445817340.20690.1652726994935@c7585db71e40> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_20689_302398457.1652726994935" ------=_Part_20689_302398457.1652726994935 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location:
This example describes how to generate random array (list) data = and then to apply the following math functions to your arrays.
LISTSUM- Sum all values in the array. See LISTSUM Function.
LISTMIN- Minimum value of all values in the array. S= ee LISTMIN Function.
LISTMAX- Maximum value of all values in the array. S= ee LISTMAX Function.
LISTAVERAGE- Average value of all values in the arra= y. See LISTAVERAGE Function<= /a>.
LISTVAR- Variance of all values in the array. S= ee LISTVAR Function.
LISTSTDEV- Standard deviation of all values in the&n= bsp;array. See LISTSTDEV Funct= ion.
LISTMODE- Most common value of all values in the&nbs= p;array. See LISTMODE Function<= /a>.
Source:
For this example, you can generate some randomized data using the follow= ing steps. First, you need to seed an array with a range of values using th= e RANGE function:
Then, unpack this array, so you can add a random factor:
Add the randomizing factor. Here, you are adding randomization around in= dividual values: x-1 < x < x+4.
To make the numbers easier to manipulate, you can round them to two deci= mal places:
Renest these columns into an array:
Delete the unused columns:
Your data should look similar to the following:
Transformation:
These steps demonstrate the individual math functions that you can apply= to your list data without unnesting it:
NOTE: The NUMFORMAT function has been wrapped around ea= ch list function to account for any floating-point errors or additional dig= its. | https://docs.trifacta.com/exportword?pageId=184204251 | 2022-05-16T18:49:55 | CC-MAIN-2022-21 | 1652662512229.26 | [] | docs.trifacta.com |
Outdated Version
You are viewing an older version of this section. View current production version.
SingleStore Client Overview
The
singlestore-client package contains is a lightweight client application that allows you to run SQL queries against your database from a terminal window. After you have installed
singlestore-client, use the
singlestore application as you would use the
mysql client to access your database. For more connection options, help is available through
singlestore --help.
singlestore -h <Master-or-Child-Aggregator-host-IP-address> -P <port> -u <user> -p<secure-password> **** Welcome to the MySQL monitor. Commands end with ; or \g. Your MySQL connection id is 12 Server version: 5.5.58 MemSQL source distribution (compatible; MySQL Enterprise & MySQL Commercial). memsql> | https://archived.docs.singlestore.com/v7.1/tools/memsql-client/memsql-client/ | 2022-05-16T18:44:57 | CC-MAIN-2022-21 | 1652662512229.26 | [] | archived.docs.singlestore.com |
Getting Started with OpenEBS
Documentation for OpenEBS v0.5 is no longer actively maintained. The version you are currently viewing is a static snapshot. Click here for the latest version., manageability of volumes and integrated ChatOps experience with Slack
As an Application Developer:As an Application Developer:
Create a PVC specification with the right storage class and use it in the application YAML file. Example PVC spec is shown below CAS, you will observe that new PODs (one volume controller pod and as many volume replica PODs as the number of replicas configured in the storage class) are created | https://v05-docs.openebs.io/ | 2019-09-15T12:05:14 | CC-MAIN-2019-39 | 1568514571360.41 | [] | v05-docs.openebs.io |
Interface Swift_Mime_HeaderFactory
Creates MIME headers.
Public Methods
Method Details
Create a new Date header using $timestamp (UNIX time).
Create a new ID header for Message-ID or Content-ID.
Create a new Mailbox Header with a list of $addresses.
Create a new ParameterizedHeader with $name, $value and $params.
Create a new Path header with an address (path) in it.
Create a new basic text header with $name and $value. | http://docs.phundament.com/4.0-ee/swift_mime_headerfactory.html | 2017-09-19T18:51:10 | CC-MAIN-2017-39 | 1505818685993.12 | [] | docs.phundament.com |
[[word or phrase][wikiPage]]Here are some examples:
- When the text is a wiki word, the link to the wiki word takes precedence, so you cannot override one wiki word to link to another.
- When the link is a wiki word then, as usual, it may include a relative or an absolute path with respect to the parent page using < . > characters, or is a sibling if no path is specified. (See SubWiki[?] for more details.)
- When the link is to an anchor on a different page, append the anchor's name. (See Links Within | http://docs.fitnesse.org/FitNesse.FullReferenceGuide.UserGuide.FitNesseWiki.MarkupLanguageReference.MarkupAliasLink | 2017-09-19T18:44:28 | CC-MAIN-2017-39 | 1505818685993.12 | [] | docs.fitnesse.org |
Registration Deadline: June 20, 2015
A portion of your fee is tax deductible.
Complete this form and mail payment to the address listed below.
Mail and Make Checks Payable to:
STC Alumni Association * 420 West 5th Street #203 * Hastings, NE 68901
Phone: 402‐462‐6566 Fax: 402‐462‐6567
Payment will also be accepted day of the tournament.
Included in Fee: Catered Sack Lunch, Green Fees, Cart, Flag Prizes, Door Prizes, Post Golf Party including free beer, appetizers, and awards.
Post Golf Party: HK Sports Bar & Grill
Sponsored by: STC Alumni Association to Benefit the Hastings Catholic Schools | https://www.docs.google.com/forms/d/e/1FAIpQLSdB5l6wsGOVRi3EzFy3-oISwClPguJsCB9X0Li4rzbtrMPpOg/viewform | 2017-09-19T18:46:43 | CC-MAIN-2017-39 | 1505818685993.12 | [] | www.docs.google.com |
.
Note
Redshift Spectrum queries incur additional charges. The cost of running the sample queries in this tutorial is nominal. For more information about pricing, see Redshift Spectrum Pricing.
Prerequisites
To use Redshift Spectrum, you need an Amazon Redshift cluster and a SQL client that's connected to your cluster so that you can execute SQL commands. The cluster and the data files in Amazon S3 must be in the same.
If you already have a cluster, your cluster needs to be version 1.0.1294 or later to use Amazon Redshift Spectrum. To find the version number for your cluster, run the following command.
select version();
To force your cluster to update to the latest cluster version, adjust your maintenance window.
Steps
To get started using Amazon Redshift Spectrum, follow these steps: | http://docs.aws.amazon.com/redshift/latest/dg/c-getting-started-using-spectrum.html | 2017-09-19T19:06:34 | CC-MAIN-2017-39 | 1505818685993.12 | [] | docs.aws.amazon.com |
Previous page: Import Parent page: Basic FIT fixtures
SummaryFixtureSummaryFixture is a way to display a little bit of extra data on the page. Add a fit.SummaryFixture table at the bottom of your page and it will add a three-row table to the results, giving the standard counts for the page (right, wrong, ignored, and exceptions) as well as the run date and the run elapsed time for the fixture. Not terribly useful by itself, but handy when looking at a build report in order to measure the relative efficiency and running time of each individual page.
Previous page: Import | http://docs.fitnesse.org/FitNesse.FullReferenceGuide.UserGuide.FixtureGallery.BasicFitFixtures.SummaryFixture | 2017-09-19T18:42:46 | CC-MAIN-2017-39 | 1505818685993.12 | [] | docs.fitnesse.org |
Crispy filter lets you render a form or formset using django-crispy-forms elegantly div based fields. Let’s see a usage example:
{% load crispy_forms_tags %} <form method="post" class="uniForm"> {{ my_formset|crispy }} </form>
As handy as the |crispy filter is, think of it as the built-in methods: as_table, as_ul and as_p. You cannot tune up the output. The best way to make your forms crisp is using the {% crispy %} tag. It will change how you do forms in Django. | http://django-crispy-forms.readthedocs.io/en/d-0/filters.html | 2017-09-19T18:39:54 | CC-MAIN-2017-39 | 1505818685993.12 | [] | django-crispy-forms.readthedocs.io |
bool OESpicoliIsLicensed(const char *feature=0, unsigned int *expdate=0)
Determine whether a valid license file is present. This function may be called without a legitimate run-time license to determine whether it is safe to call any of Spicoli’s functionality.
The features argument can be used to check for a valid license to Spicoli along with that feature. For example, to verify that Spicoli can be used from Python:
if (!OESpicoliIsLicensed("python")) OEThrow.Warning("OESpicoli is not licensed for the python feature");
The second argument can be used to get the expiration date of the license. This is an array of size three with the date returned as {day, month, year}. Even if the function returns false due to an expired license, the expdate will show that expiration date. The array full of zeroes implies that no license or date was found.
unsigned int expdate[3]; if (OESpicoliIsLicensed(0, expdate)) { OEThrow.Info("License expires: day: %d month: %d year: %d", expdate[0], expdate[1], expdate[2]); } | https://docs.eyesopen.com/toolkits/cpp/spicolitk/OESpicoliFunctions/OESpicoliIsLicensed.html | 2017-09-19T19:06:13 | CC-MAIN-2017-39 | 1505818685993.12 | [] | docs.eyesopen.com |
vRealize Automation Designer enables you to edit the customizable workflows and update workflows in the Model Manager.
Prerequisites
Launch the vRealize Automation Designer.
Procedure
- Click Load.
- Select the workflow that you want to customize.
- Click OK.
The workflow displays in the Designer pane.
- Customize the workflow by dragging activities from the Toolbox to the Designer pane and configuring their arguments.
- When you are finished editing the workflow, update the workflow in the Model Manager by clicking Send.
The workflow is saved and appears as a new revision in the list the next time you load a workflow. You can access an earlier version of a workflow at any time. See Revert to a Previous Revision of a Workflow. | https://docs.vmware.com/en/vRealize-Automation/7.3/com.vmware.vra.prepare.use.doc/GUID-B9A6940A-1349-45E2-8A8B-D3ACFCF5A1BF.html | 2017-09-19T19:11:11 | CC-MAIN-2017-39 | 1505818685993.12 | [] | docs.vmware.com |
Uploading a File
This article or section is in the process of an expansion or major restructuring. You are welcome to assist in its construction by editing it as well. If this article or section has not been edited in several days, please remove this template.
This page was last edited by Tom Hutchison (talk| contribs) 2 years ago. (Purge)
Use Special:Upload to upload files, to view or search previously uploaded images go to the list of uploaded files, uploads and deletions are also logged in the upload log.
Advertisement | https://docs.joomla.org/JHelp:Uploading_a_File | 2017-09-19T18:57:02 | CC-MAIN-2017-39 | 1505818685993.12 | [] | docs.joomla.org |
Documentation¶
Jmbo is a lightweight and unobtrusive CMS geared towards the publishing industry. Jmbo introduces an extensible base model and provides segmented templates that are easy to customize.
This documentation covers version 2.0.6 of |jmbo|.
Note
|jmbo| 2.0.6 requires Django 1.6. More on Versions and Compatibility.
Overview¶.
Installation¶
Jmbo itself is just a Django product and can be installed as such. An easier approach is to follow to get a fully working environment. | http://jmbo.readthedocs.io/en/latest/ | 2017-09-19T18:36:12 | CC-MAIN-2017-39 | 1505818685993.12 | [] | jmbo.readthedocs.io |
New in version 3.4.0.
In analogy (in the module filebrowser.actions).
Custom actions are simple functions of the form:
def foo(request, fileobjects): # Do something with the fileobjects
the first parameter is an HttpRequest object (representing the submitted form in which a user selected the action) and the second parameter is a list of FileObjects to which the action should be applied.
In the current FileBrowser version, the list contains exactly one instance of FileObject (representing the file from the detail view), but this may change in the future, as custom actions may become available also in browse views (similar to admin actions applied to a list of checked objects).
In order to make your action visible, you need to register it at a FileBrowser site (see also FileBrowser Sites): any short description for your action, the function name will be used instead and FileBrowser will replace any underscores in the function name with spaces.
Each custom action can be associated with a specific file type (e.g., images, audio file, etc) to which it applies. In order to do that, you need to define a predicate/filter function, which takes a single argument – a.
You can provide a feedback to a user about or successful or failed execution of an action by registering a message at the request object. an HttpResponse object from your action. Good practice for intermediate pages is to implement a confirm view and have your action return an HttpResponseRedirect object redirecting a user to that view:
def crop_image(request, fileobjects): files = '&f='.join([f.path_relative for f in fileobjects]) return HttpResponseRedirect('/confirm/?action=crop_image&f=%s' % files) | https://django-filebrowser.readthedocs.io/en/3.5.2/actions.html | 2017-09-19T18:44:00 | CC-MAIN-2017-39 | 1505818685993.12 | [] | django-filebrowser.readthedocs.io |
. There are various aspects to data privacy. First, you can segregate your network into channels, where each channel represents a subset of participants that are authorized to see the data for the chaincodes that are deployed to that channel. Second, within a channel you can restrict the input data to chaincode to the set of endorsers only, by using visibility settings. The visibility setting will determine whether input and output chaincode data is included in the submitted transaction, versus just output data. Third, you?
A. No, the orderers only order transactions, they do not open the transactions. If you do not want the data to go through the orderers at all, and you are only concerned about the input data, then you can use visibility settings. The visibility setting will determine whether input and output chaincode data is included in the submitted transaction, versus just output data. Therefore, the input data can be private to the endorsers only. If you do not want the orderers to see chaincode output, then you can hash or encrypt the data before calling chaincode. If you hash the data then you will need to provide a meansto share the source data. If you encrypt the data then you will need to provide a means to share the decryption keys.
Application-side Programming Model¶
Transaction execution result:
- How do application clients know the outcome of a transaction?
A..
Ledger queries:
- How do I query the ledger data?
A.?
A.?
A.?
A. Chaincode can be written in any programming language and executed in containers. The first fully supported chaincode language is Golang.
Support for additional languages and the development of a templating language have been discussed, and more details will be released in the near future.
It is also possible to build Hyperledger Fabric applications using Hyperledger Composer.
- Does the Hyperledger Fabric have native currency?
A.. | http://hyperledger-fabric.readthedocs.io/en/latest/Fabric-FAQ.html | 2017-09-19T18:41:17 | CC-MAIN-2017-39 | 1505818685993.12 | [] | hyperledger-fabric.readthedocs.io |
Difference between revisions of "Users User Notes Edit"
From Joomla! Documentation
Revision as of 19:28, 30 November 2013
Edit a note or comment on a specific user
Contents
Details
- Subject: The subject line for the note
- Category: (Uncategorised). The category that this item is assigned to.
- Status: (Published/Unpublished/Archived/Trashed). Set publication status.
- Review time: Review time is a manually entered time you can use as fits in your workflow. Examples would be to put in a date that you want to review a user or the last date you reviewed the user
- Version Note: Enter an optional note for this version of the item.
- Note:
Toolbar
At the top left you will see the toolbar for a Edit Item or New Item
Notes (edit>
Notes (add. | https://docs.joomla.org/index.php?title=Help33:Users_User_Notes_Edit&diff=106048&oldid=105356 | 2015-06-30T09:16:09 | CC-MAIN-2015-27 | 1435375091925.14 | [] | docs.joomla.org |
Difference between revisions of "Known upgrading issues and solutions"
From Joomla! Documentation
Revision as of 22:39, 29) 18 months ago. (Purge)
This page lists334 | Help34. | https://docs.joomla.org/index.php?title=J3.2:Known_upgrading_issues_and_solutions&diff=105910&oldid=102231 | 2015-06-30T08:22:30 | CC-MAIN-2015-27 | 1435375091925.14 | [] | docs.joomla.org |
Setting up your workstation for Joomla.
- Subversion
- Zend debugger on Eclipse
- ...
Installing a build mechanism
- Ant
- Phing
- ... | https://docs.joomla.org/index.php?title=Setting_up_your_workstation_for_Joomla_development&oldid=71176 | 2015-06-30T09:15:27 | CC-MAIN-2015-27 | 1435375091925.14 | [] | docs.joomla.org |
Difference between revisions of "Who runs Open Source Matters?"
From Joomla! Documentation
Latest revision as of 16:25, 1 September 2012
OSM is run by a board of directors. You can find out more about the current OSM board members by visiting the Open Source Matters website or by clicking here:. | https://docs.joomla.org/index.php?title=Who_runs_Open_Source_Matters%3F&diff=73656&oldid=11926 | 2015-06-30T09:42:12 | CC-MAIN-2015-27 | 1435375091925.14 | [] | docs.joomla.org |
Information for "Favicon" Basic information Display titleFavicon Default sort keyFavicon Page length (in bytes)1,002 Page ID296 creatorMroswell (Talk | contribs) Date of page creation23:40, 18 January 2008 Latest editorMATsxm (Talk | contribs) Date of latest edit18:19, 2 December 2014 Total number of edits29 Total number of distinct authors7 Recent number of edits (within past 30 days)0 Recent number of distinct authors0 Retrieved from ‘’ | https://docs.joomla.org/index.php?title=Favicon&action=info | 2015-06-30T08:43:19 | CC-MAIN-2015-27 | 1435375091925.14 | [] | docs.joomla.org |
What's new in Joomla 2.5
From Joomla! Documentation
The principal changes introduced in Joomla 1.6 are:
- A completely reworked access control system (ACL)
- A single, nested category system that replaces the old sections and categories
- Improved multi-language support, including a front-end language switcher
- A redesigned extension installer and improved template management
The following is a more complete list of changes.
Contents
- 1 Administrators
- 1.1 Introduction
- 1.2 Administrator
- 1.3 Module Enhancements
- 1.4 New Templates
- 1.5 SEO Improvements
- 1.6 Global Configuration
- 1.7 User Manager
- 1.8 Media Manager
- 1.9 Menu Manager
- 1.10 Articles Manager
- 1.11 Categories
- 1.12 Banners
- 1.13 Extensions Manager
- 1.14 Templates Manager
- 1.15 Misc
- 2 Access Controls
- 3 Developers
- 3.1 Introduction
- 3.2 Removed features
- 3.3 Deprecated Features
- 3.4 Database
- 3.5 Improved MVC
- 3.6 Form API
- 3.7 Translation and Language Support
- 3.8 Extension management
- 3.9 Events
- 3.10 Categories
- 3.11 Access Controls
- 3.12 Debug Plugin
- 3.13 JavaScript
- 4 Cache changes
- 5 Performance
- 6 Known Issues
- 7 Notes
- 8 Infographic
Administrators
Introduction
- Consistent UI
- Consistent Features
- Richer sample data
- IE 7+, Firefox 3.x, Safari 4.x
- PHP 5.2.4+
- MySQL 5.0.4 (allows for wide varchars)
Administrator
- Menu Changes
- Submenu consistency
Toolbar Features
- Save
- Save & Close
- Save & New
- Save as Copy
- Expired session will return to the page you were on when you got logged out (can be hit and miss)
- Most search filters allow you to search for a record id via [ id:123 ]
- "Parameters" are now referred to as "Options"
- Template Styles
- Integrated Trash Management
- Consistent archive support for most content
- Extension Installer Improvements
Module Enhancements
- Publish up and down
- Add option to display on all pages "except" selected
- Expanded Category System
- 404 Page Redirection
- Better Menu Management
- Alternative layouts for content, modules and menus (taken from the home template)
New Templates
- Atomic
- Beez2
- (Administrator) Bluestork (replaces Khephri)
- (Administrator) Hathor
- Legacy layer in Milkyway
- Backend supports layout overrides
- New Modules
- New Plugins
- Content Languages
- User Login Permissions
- Activate selected users from the user list now (and filter)
- Administrator registration approval
- Polls component removed
- New CodeMirror editor
SEO Improvements
- Meta description and keywords for categories
- Articles can change the page title and page header separately
Global Configuration
- Add site name to titles
- Default Access Level
- Set Metadata Language (buggy)
- Unicode Aliases
- Cookie domain and path
- User Setting moved to User Manager -> Options
- Media Settings moved to Media Manager -> Options
- Server Timezone now a location, not an integer offset
- New Global Permissions tab
User Manager
- Can activate a user from the list now
- User can be assigned to multiple groups
- Manage user groups
- Manage content access levels
Media Manager
- Flash uploader fixed
Menu Manager
Menus List
- Rebuild button to repair the menu tree if it becomes corrupted
- Clicking the menu name brings up the menu items list rather than going into Edit Menu. To edit the menu, click on the check box next to the name and click on the Edit icon in the toolbar.
Items List
- Menus support the language filter
- Default now called Home
- Home is now clickable in the menu item list
- A separate home can be set for different languages
New batch operations
- Set access level
- Copy or move to another part of this or another menu
Edit Item
- Improved "Type" selector with human readable view and layout names
- Note field added
- New window target
- New Language assignment
- New Template style
- Ability to add & edit Module assignments from this page
- Ability to create alternative menu items with customised parameters
- Ability to create multiple alternative layouts for one template
- Link title attribute
- Link CSS style
- Menu image is changed to a modal selector
- CSS class for page heading
- Page meta description
- Page meta keywords
- Robots options
Articles Manager
- Frontpage is now referred to as Featured
- Article Manager uses a submenu to quickly skip between articles, categories and featured articles
- Sections and categories are now merged.
Articles List
- "Missing move and copy; filter by author"
- New column to show language
- Filtering by language available
Article Edit
- Created by user now selected by modal popup
- New ability to set the page title from the article
- Define create, delete, edit and publishing permissions
Archived Articles
- In 1.5, Archived Articles had to first be changed to Published or Unpublished before update.
- In 1.6, an Article with an Archived Status *can* be changed without changing the State first.
Categories
Category List
- Nested view
- Filtering on language
Edit Category
- New note field
- Section replaced with ability to assign a parent category
- Ability to assign content language
New Options (not previously available in 1.5)
- Assign alternate layout
- Define create, delete, edit and publishing permissions
- Meta description
- Meta keywords
- Alternative page title
- Meta author
- Meta robots
Banners
Banners list
- Missing copy toolbar button
- New archive toolbar button
- New columns to show meta keywords, purchase type and language
- New filtering by client and language
Edit Banner
- New type toggle for Image or Custom (dynamically changes the available form fields)
- New alt text field for image
- New language field
New Options
- Ability to set the created date
- Ability to set start and finish publishing times
- Ability to set the purchase type
- Ability to track impressions
- Ability to track clicks
- Use own prefix ?
- Tags renamed to meta
- Contacts
- Messaging
- Newsfeeds
- Search
- Weblinks
- Redirect
Extensions Manager
Discover
- Module Manager
- Plugin Manager
- Template Manager
- Language Manager
Templates Manager
- Options - allows you to enable/disable the tp=1 feature
Misc
- Auto create linked contact when creating new user ??
Access Controls
- Introduction
- User Groups
- Access Levels
- Permission Layers
- How Permissions are Inherited
- How to debrick your site
Developers
Introduction
- PHP 5.2.4+
- MySQL 5.0.4 (allows for wide varchars)
- IE7+, Firefox 3+, Safari 4+
- Focus on code consistency
- Focus on code reduction
Usage of PHP Native Functions where possible, for example:
- JXMLElement extends the native SimpleXML class
- JDate extends the DateTime class
- Native INI parser to load languages
Removed features
- ADODB compatibility methods in database classes
- DOMIT (unsupported XML library)
- Legacy mode (includes global $mainframe, etc)
- JTemplate (based on patTemplate)
- patTemplate (templating engine)
- PDF support
- PEAR libraries (due to license incompatibilities)
- phpgacl
- PHP 4.0 and 5.0 compatibility files
- XStandard Editor
Deprecated Features
- JController::_acoSection
- JController::_acoSectionValue
- JController::authorize()
- JController::setAccessControl()
- JDatabase::stderr()
- JDate::offest
- JDate::setOffset()
- JDate::toFormat() - use JDate::format() instead
- JException::toString()
- JFactory::getXMLParser()
- JHtmlBehavior::mootools() - use JHtmlBehavior::framework instead
- JHtmlGird::access()
- JHtmlImage::administrator - use JHtml::image instead
- JHtmlImage::site - use JHtml::image instead
- JHtmlList::accesslevel()
- JHtmlList::specificordering() - use JHtml::_('list.ordering')
- JHtmlList::category()
- JHtmlSelect::optgroup() - see JHtmlSelect::groupedList()
- JLanguage::_parseLanguageFiles - renamed to parseLanguageFiles
- JLanguage::_parseXMLLanguageFile - renamed to parseXMLLanguageFile
- JLanguage::_parseXMLLanguageFiles - renamed to parseXMLLanguageFiles
- JObject::toString - replaced with magic method
- JRegistry::getNameSpaces()
- JRegistry::getValue()
- JRegistry::makeNameSpace()
- JRegistry::setValue()
- JPane - See JHtmlSliders
- JParameter - replaced by JForm
- JSimpleXML, JSimpleXMLElement - Use JXMLElement instead, based on the native SimpleXMLElement
- JTable::canDelete() - models or controllers should be doing the access checks
- JTable::toXML()
- JToolbarHelper customX(), addNewX(), editListX(), editHtmlX(), editCssX(), deleteListX()
- JUser::authorize() - Use JUser::authorise()
- JUtility::dump()
- JUtility::array_unshift_ref() - Not needed in PHP 5
- JUtility::getHash() - Use JApplication::getHash()
- JUtility::getToken() - Use JFactory::getSession()->getFormToken()
- JUtility::isWinOS() - Use JApplication::isWinOS()
- JUtility::return_bytes() - See InstallerModelWarnings::return_bytes()
- JUtility::sendMail() - Use JFactory::getMailer()->sendMail()
- JUtility::sendAdminMail() - Use JFactory::getMailer()->sendMail()
- JXMLElement::data() - Provided for backward compatibility
- JXMLElement::getAttribute() - Provided for backward compatibility
Database
- JTable now automatically looks up the fields from the database schema
- New JDatabaseQuery - A chained CRUD query builder
- New JDatabase::getNextRow
- New JDatabase::getNextObject
- JDatabase::loadAssocList - Now takes a second argument to just return the value of a column
- JDatabase::setQuery - Added chaining support
Important Schema Changes
- New jos_extensions table to list all extensions
- Components table information moved and split between jos_extensions and jos_menu (special menu called _adminmenu)
The old phpgacl (jos_core_acl*) and jos_groups tables have been reworked into:
- jos_assets
- jos_user_usergroup_map
- jos_usergroups
- jos_viewlevels
- Archived state changed from a value of -1 to +2
Improved MVC
Models
- JModelList
- JModelForm
- JModelAdmin
- Format handling (eg JSON)
- Sub-controller handling
Controllers
- JControllerForm
- JControllerAdmin
- JController::setMessage takes second arg to set the message type
- JController can set the default view
- Added chaining support to several JController methods
Views
- Semantic core output
- Milkyway legacy layouts
- The old component parameters are not automatically added to the menu anymore. You need to explicitly put them in the layout XML files.
- The menu manager will now detect additional layouts for a given view in the default template.
Form API
Event manipulation
- onContentPrepareForm
- onContentPrepareData
Translation and Language Support
See International Enhancements for Version 1.6 for a more complete list of changes in internationalisation features.
- Support for unicode slugs, eg, SEF URL's with Greek characters
- 3-letter languages now supported, xxx-XX
- All existing language keys have been refactored
INI files must validate
- Upper case key with no spaces, alphanumeric characters and underscores
- Quoted values
- Double quotes within literal strings must be escaped with "_QQ_", for example: KEY="A link <a href="_QQ_"#"_QQ_">Click</a>"
Javascript translation layer
- See the flash uploader script for an example
Local extension language files
Language file API
- Pluralisation support
- Transliteration support for ASCII or Unicode slugs
- Ignore Search Words
- Minimum search word length
- Custom language overrides
- System language file to support administrator menu and installation (.sys.ini)
Language switcher
- Language Filter plugin enables language switching
- Sets the automatic filtering via JFactory::getApplication()->setLanguageFilter(true)
- A frontend component with language support would test JFactory::getApplication()->getLanguageFilter(), which returns the selected language code from the Languages Module
- The language field can be a language code for a single language, or "*" to be displayed for all languages
- Community extension for language maintenance com_localise
Extension management
See /tests/_data/installer_packages/ for complete examples of all extensions and manifests.
New installation types
Libraries
- Must include an XML manifest where type="library"
- Can only be installed into a sub-folder of /libraries/
- Can extend parts of an existing library, eg /libraries/joomla/database/database/oracle.php
Packages multi-installer
- Must include an XML manifest where type="package"
- A package is a zip of zip's
New installation script
All install / uninstall scripts should be created as methods in new installation script. The script file name is set using the <scriptfile> tag in the extension manifest file. This new install script can be provided with 5 methods:
preflight
- Runs before anything is run and while the extracted files are in the uploaded temp folder
Could allow for:
- secondary extraction of custom zip's
- version checks to be performed
- halting the installer on an error
Note: preflight is run for install, update and discover_install, but not for uninstall
install / update
- Runs after the database scripts are executed
- If the extension is new, the install method is run
- If the extension exists then update method is run if method="upgrade", otherwise assumes that the extension is not meant to be upgradable
uninstall
- Runs before any other action is taken (files removal or database processing)
postflight
- Runs after the extension is registered in the database
- Is not run for the uninstall process (nothing left to do obviously)
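As an illustration, a minimal script file might look like the sketch below. The class name convention (componentnameInstallerScript) and the parameter lists shown here follow common Joomla 1.6 practice but are assumptions; adjust them to the extension being installed:
<?php
// Example script.php for a component named com_foobar (illustrative only)
class com_foobarInstallerScript
{
    // Runs before anything else, while the extracted files are still in the temp folder;
    // $type is 'install', 'update' or 'discover_install'
    public function preflight($type, $parent)
    {
        return true; // returning false halts the installer
    }

    // Runs after the database scripts when the extension is new
    public function install($parent)
    {
        return true;
    }

    // Runs after the database scripts when method="upgrade" and the extension already exists
    public function update($parent)
    {
        return true;
    }

    // Runs before any files are removed or database processing is done
    public function uninstall($parent)
    {
    }

    // Runs after the extension is registered in the database (not run on uninstall)
    public function postflight($type, $parent)
    {
    }
}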
Discover
- Does not do any file copying, only works with what it finds
- Performs preflight, install and postflight
- Developer of installer has two language files??
Update site
- Can publish an XML manifest on your site that can include individual extensions and extension sets.
XML Manifest Changes
- <install> is deprecated - use <extension>
- New <update> tag. Takes a <schemas> tag which can define <schemapath>
- <params> and <param> tags are deprecated, use <fields>, <fieldsets> and <field> instead
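For illustration, a manifest fragment using the new tags might look like the following; the paths, version numbers and field definitions are examples only:
<extension type="component" version="1.6.0" method="upgrade">
    <name>com_foobar</name>
    <scriptfile>script.php</scriptfile>
    <update>
        <schemas>
            <schemapath type="mysql">sql/updates/mysql</schemapath>
        </schemas>
    </update>
    <config>
        <fields name="params">
            <fieldset name="basic">
                <field name="show_title" type="list" default="1" label="Show title">
                    <option value="0">Hide</option>
                    <option value="1">Show</option>
                </field>
            </fieldset>
        </fields>
    </config>
</extension>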
File changes
- Installation manifest must have the same name as the extension, e.g. com_foobar/foobar.xml. This helps with discovery (otherwise the installer has to go through all the files in the extension folder).
- Plugins are now in folders like modules and components
- See SVN/tests/_data/installer_packages/ for complete examples of all extensions and manifests.
- The method="upgrade" will compare individual files in the original and incoming manifests and will remove files as appropriate. However, it will not remove differences in the <folder> tags.
- Future support for rollback
Events
New Events
- onBeforeCompileHead
- onBeforeRender
- onContentBefore
- onContentAfter
- onContentChangeState
- onContentPrepare
- onContentPrepareData
- onContentPrepareForm
- onExtensionBeforeInstall
- onExtensionBeforeUpdate
- onExtensionBeforeUninstall
- onExtensionAfterInstall
- onExtensionAfterUpdate
- onExtensionAfterUninstall.
Categories
- Component can provide custom options for its own categories via optional category.xml
- Supported via JTableNested
Access Controls
- A thing that can be controlled by permissions is registered in the assets table
- JTable handles this transparently via asset_id field
- For view permissions, support is as simple as adding
$user = JFactory::getUser();
$groups = implode(',', $user->authorisedLevels());
$query->where('a.access IN (' . $groups . ')');
For action permissions, same format as in 1.5:
$user->authorise($actionName, $assetName)
- OpenID library has been removed
- Geshi library moved to plugin folder
- JRegistry notes defaults to JSON (new format), dynamically converting existing data in INI format
- New JStream
- New JApplicationHelper::getComponentName
- Core icons moved to /media/
- Backward incompatible change to JEditor::display
- Added chaining support to JMail
- JFilterInput can no longer be called statically
- JHtml::image now supports relative paths
- All system images are overridable in the default template
- New JHtmlString
- Added wincache session handler for IIS
- New JFilterOutput::stripImages
- JPath::check takes second arg for separator (to pass to JPath::clean)
- Expanded configuration support through config.xml (multiple tabs)
Core actions
- core.login.site
- core.login.admin
- core.admin
- core.manage
- core.create
- core.edit
- core.edit.state
- edit.own
- core.delete
- Miscellaneous changes
Debug Plugin
- More tools for assisting with translation
JavaScript.
Deprecated
- submitform() - use Joomla.submitform() instead
- submitbutton() - use Joomla.submitbutton() instead
Mootools 1.3
Mootools has been upgraded to Version 1.3 including the compatibility layer. Code that works with Mootools 1.2 should continue to work just fine.
Cache changes
API changes relevant to 3rd party developers
Component view cache.
Module cache:
- static - one cache file for all pages with the same module parameters
- oldstatic - the 1.5 definition of module caching: one cache file for all pages with the same module id and user aid. Default for backwards compatibility
- itemid - changes on itemid change:
- safeuri - id is created from $cacheparams->modeparams array, the same as in component view cache
- id - the module sets its own cache IDs
Functional changes.
CMS and framework level functional changes
Caching is implemented in all components and modules that can potentially gain from caching. Caching has also been added to some of the most expensive and frequent framework calls: JComponentHelper::_load(), JModuleHelper::_load(), JMenuSite::load().
Cache library changes
Cache library has been completely refactored.
- Cache handlers have been renamed to controllers to better reflect their role and avoid confusion with cache storage handlers (referred to as drivers in following text).
- New JCacheController parent class has been added and inheritance has been changed to prevent bugs occurring from controller's and storage handler's get method clashes.
JCache
- getAll() method was added to JCache, JCacheStorage and all drivers, and it returns all cached items (this was previously possible only with file driver and hardcoded in administration)
- New lock and unlock methods were added to JCache, JCacheStorage and drivers. They enable cache item locking and unlocking to prevent errors on parallel accesses and double saves. This functionality was also implemented in controllers.
- Workarounds are now consolidated in new JCache getWorkarounds and setWorkarounds methods, are now used by all controllers and their use has been made optional.
- New makeId() method in JCache that creates cache id from registered url parameters set by components and system plugins
JCacheController.
JCacheStorage
- _getCacheId method was moved from drivers to their parent JCacheStorage and all drives now use the same method by default
- CacheItem was moved from cache admin to framework JCacheStorageHelper, it is used by getAll
- There are new cachelite and wincache drivers. All other drivers have been fixed with missing functions (gc, clean) added, and their code has been cleaned and tested; they should now be working properly.
- Replaced separate _expire files in the file cache driver with timestamps (this should amount to approximately a 40% speed gain). The same was done in all drivers that had this.
- Numerous bugfixes on all levels, most important is proper use of options defaulting to configuration parameters settings and correctly passing from level to level.
Other framework level changes
- Safe URL parameter registration added to the JController view method.
- New ModuleCache method in JModuleHelper that performs the module caching described above (in 5 modes) for both modules and the module renderer.
- JFactory::getFeedParser has been changed to use Joomla caching instead of simplepie's.
Performance
Eliminate the usage of JTable for general browsing on the frontend
- Session drops the usage of JTable
- Item views use a dedicated query, not JTable->load
Known Issues
- Scaling issues to address
Notes
Releases
- Alpha 1: 22 June 2009
- Alpha 2: 25 October 2009
- Beta 01: 18 May 2010
- Beta 02: 31 May 2010
- Beta 03: 14 June 2010
- Beta 04: 28 June 2010
- Beta 05: 12 July 2010
- Beta 06: 26 July 2010
- Beta 07: 09 August 2010
- Beta 08: 23 August 2010
- Beta 09: 06 September 2010
- Beta 10: 20 September 2010
- Beta 11: 03 October 2010
- Beta 12: 17 October 2010
- Beta 13: 31 October 2010
- Beta 14: 14 November 2010
- Beta 15: 29 November 2010
- Release Candidate 1: 13 December 2010
- Version 1.6.0 released 10 January 2011
Infographic
There's an official Joomla 1.6 infographic available, which shows 10 of the new functions with screenshots. | https://docs.joomla.org/index.php?title=What's_new_in_Joomla_1.6&diff=64750&oldid=34179 | 2015-06-30T09:39:25 | CC-MAIN-2015-27 | 1435375091925.14 | [] | docs.joomla.org |
How to create a custom button (Joomla! 1.5 documentation)
This Namespace has been archived - Please Do Not Edit or Create Pages in this namespace. Pages contain information for a Joomla! version which is no longer supported. It exists only as a historical reference, will not be improved and its content may be incomplete.
The scripts below are used for working with the back-end toolbar when developing components or modules.
Add this code to your component or module to create a custom button:
JToolBarHelper::custom("task", "icon", "icon over", "alt", boolean, boolean);
For a back button you can use this one:
JToolBarHelper::back();
Cancel button:
JToolBarHelper::cancel();
Apply button:
JToolBarHelper::apply();
Upload Button
You can add an upload button in two ways:
- First:
// Add an upload button that opens a popup screen 550 wide and 400 high
$alt = "Upload";
$bar =& JToolBar::getInstance("toolbar");
$bar->appendButton( "Popup", "upload", $alt, "index.php", 550, 400 );
- Second:
// Add a button that opens the Media Manager in a popup
JToolBarHelper::media_manager('/');
List of Joomla! 1.5 core CSS classes
The following table lists the core CSS classes and the core files that either generate or use these classes in Joomla! 1.5. The list was generated using the Linux grep command to search the core files for the pattern "class=".
Coding style and standards
- Coding standards auto formatter settings are available for Eclipse, PHPStorm and Zend Studio.
- The older document is kept here for historical reference.
- Line length: 75-85 characters is a good target. Use your judgement based on the nature of the line and readability. Line lengths in mixed HTML+PHP code will often be blown out so that the generated source remains readable.
Category parameter type (Joomla! 1.5 documentation)
With the BlackBerry World storefront, you can search for and download apps for your BlackBerry smartphone. You might also be able to download apps from a webpage (try visiting mobile.blackberry.com from your BlackBerry smartphone), or through your wireless service provider. Data charges might apply when you add or use an app over the wireless network. For more information, contact your wireless service provider.
Getting Started
Most of the samples in this book are part of the Samples-BI sample. InterSystems recommends that you create a dedicated namespace called SAMPLES (for example) and load samples into that namespace. For the general process, see Downloading Samples for Use with InterSystems IRIS®.
One part of the sample is the BI.Study.Patient class and related classes. This sample is meant for use as the basis of a Business Intelligence model. It does not initially contain any data. The BI.Model package includes sample cubes, subject areas, KPIs, pivot tables, and dashboards, for use as reference during this tutorial.
This sample is intended as a flexible starting point for working with Business Intelligence. You use this sample to generate as much data or as little data as needed, and then you use the Architect to create a Business Intelligence model that explores this data. You can then create Business Intelligence pivot tables, KPIs, and dashboards based on this model. The sample contains enough complexity to enable you to use the central Business Intelligence features and to test many typical real-life scenarios. This book presents hands-on exercises that use this sample.
The system uses SQL to access data while building the cube, and also when executing detail listings. If your model refers to any class properties that are SQL reserved words, you must enable support for delimited identifiers so that Business Intelligence can escape the property names. For a list of reserved words, see the “Reserved Words” section in the InterSystems SQL Reference. For information on enabling support for delimited identifiers, see the chapter “Identifiers” in Using InterSystems SQL.
Be sure to consult the online InterSystems Supported Platforms document for this release for information on system requirements for Business Intelligence.
Getting Started
Most of the tools that you will use are contained in the Management Portal.
To log on:
Click the InterSystems Launcher and then click Management Portal.
Depending on your security, you may be prompted to log in with an InterSystems IRIS username and password.
Switch to the namespace into which you loaded the samples, as follows:
Click Switch.
Click the namespace name.
Click OK.
Regenerating Data
The tutorial uses a larger, slightly more complex set of data than is initially provided in the Samples-BI sample.
To generate data for this tutorial:
In the Terminal, switch to the namespace into which you installed the samples, for example:
set $namespace="SAMPLES"
Execute the following command:
do ##class(BI.Populate).GenerateData()
After generating data, retune the tables. To do so, in the Management Portal, switch to the namespace into which you installed the samples, as described earlier.
Click System Explorer > SQL.
Click the Execute Query tab.
Click Actions and then Tune all tables in schema.
The system then displays a dialog box where you select a schema and confirm the action.
For Schema, select the BI_Study schema.
Click Finish.
Click Done.
The system then runs the Tune Table facility in the background. | https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=D2DT_CH_SETUP | 2021-06-12T21:16:57 | CC-MAIN-2021-25 | 1623487586390.4 | [] | docs.intersystems.com |
EMCMD <system> ISPOTENTIALMIRRORVOL <volume letter>
This command checks to determine if a volume is a candidate for mirroring. The command may only be run on the local system.
The parameters are:
<system>: the system hosting the volume to check (this command may only be run on the local system)
<volume letter>: the drive letter of the volume to check
Output:
TRUE – The volume is available for mirroring.
Otherwise, the output may be some combination of the following:
System Drive
RAW filesystem
FAT filesystem
ACTIVE partition
Contains PageFile
GetDriveType not DRIVE_FIXED
Contains DataKeeper bitmap files
If the drive letter points to a newly created volume (i.e. SIOS DataKeeper driver not attached yet), or a non-disk (network share, CD-ROM), the output will be:
Unable to open – SIOS DataKeeper driver might not be attached (you may need to reboot) or this might not be a valid hard disk volume.
If there is an internal error getting volume information, you may see the message:
Unable to retrieve the volume information for use in determining the potential use as a mirrored volume. The volume may be locked by another process or may not be formatted as NTFS.
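For example, checking drive E on the local system might look like the following; the exact output depends on the state of the volume:
EMCMD . ISPOTENTIALMIRRORVOL E
TRUE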
Chapter 31. Advanced Networking
31.1. Synopsis
This chapter covers a number of advanced networking topics.
After reading this chapter, you will know:
The basics of gateways and routes.
How to set up USB tethering.
How to set up IEEE™ 802.11 and Bluetooth™.
How to set up multiple VLANs on FreeBSD.
Configure bluetooth headset.
Before reading this chapter, you should:
Understand the basics of the /etc/rc scripts.
Be familiar with basic network terminology.
Know how to configure and install a new FreeBSD kernel (Configuring the FreeBSD Kernel).
Know how to install additional third-party software (Installing Applications: Packages and Ports).
31.2. Gateways and Routes
Routing is the mechanism that allows a system to find the network path to another system. A route is a defined pair of addresses which represent the "destination" and a "gateway". The route indicates that when trying to get to the specified destination, the packets should be sent through the specified gateway. Known routes are stored in a routing table.
This section provides an overview of routing basics. It then demonstrates how to configure a FreeBSD system as a router and offers some troubleshooting tips.
31.2.1. Routing Basics
To view the routing table of a FreeBSD system, use netstat(1):
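A representative example of the output is shown below. It is reconstructed to match the entries discussed in the list that follows; actual addresses, interface names and counters will differ on a given system:
% netstat -r
Routing tables

Internet:
Destination        Gateway            Flags     Refs     Use    Netif  Expire
default            outside-gw         UGS         37      418     re0
localhost          localhost          UH           0      181     lo0
test0              0:e0:b5:36:cf:4f   UHLW         5    63288     re0      77
10.20.30.255       link#1             UHLW         1     2421
example.com        link#1             UC           0        0
host1              0:e0:a8:37:8:1e    UHLW         3     4601     lo0
host2              0:e0:a8:37:8:1e    UHLW         0        5     lo0 =>
host2.example.com  link#1             UC           0        0
224                link#1             UC           0        0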
The entries in this example are as follows:
- default
The first route in this table specifies the default route. When the local system needs to make a connection to a remote host, it checks the routing table to determine if a known path exists. If the remote host matches an entry in the table, the system checks to see if it can connect using the interface specified in that entry.
If the destination does not match an entry, or if all known paths fail, the system uses the entry for the default route. For hosts on a local area network, the Gateway field in the default route is set to the system which has a direct connection to the Internet. When reading this entry, verify that the Flags column indicates that the gateway is usable (UG).
The default route for a machine which itself is functioning as the gateway to the outside world will be the gateway machine at the Internet Service Provider (ISP).
- localhost
The second route is the localhost route. The interface specified in the Netif column for localhost is lo0, also known as the loopback device. This indicates that all traffic for this destination should be internal, rather than sending it out over the network.
- MAC address
The addresses beginning with 0:e0: are MAC addresses. FreeBSD will automatically identify any hosts, test0 in the example, on the local Ethernet and add a route for that host over the Ethernet interface, re0. This type of route has a timeout, seen in the Expire column, which is used if the host does not respond in a specific amount of time. When this happens, the route to this host will be automatically deleted. These hosts are identified using the Routing Information Protocol (RIP), which calculates routes to local hosts based upon a shortest path determination.
- subnet
FreeBSD will automatically add subnet routes for the local subnet. In this example, 10.20.30.255 is the broadcast address for the subnet 10.20.30 and example.com is the domain name associated with that subnet. The designation link#1 refers to the first Ethernet card in the machine.
- host
The host1 line refers to the host by its Ethernet address. Since it is the sending host, FreeBSD knows to use the loopback interface (lo0) rather than the Ethernet interface.
The two host2 lines represent aliases which were created using ifconfig(8). The ⇒ symbol after the lo0 interface says that an alias has been set in addition to the loopback address. Such routes only show up on the host that supports the alias and all other hosts on the local network will have a link#1 line for such routes.
- 224
The final line (destination subnet 224) deals with multicasting.
Various attributes of each route can be seen in the Flags column. Common Routing Table Flags summarizes some of these flags and their meanings:
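Common routing table flags, as defined for netstat(1):
U (Up): The route is active.
H (Host): The route destination is a single host.
G (Gateway): Send anything for this destination on to this remote system, which will figure out from there where to send it.
S (Static): This route was configured manually, not automatically generated by the system.
C (Clone): Generates a new route based upon this route for machines to connect to. This type of route is normally used for local networks.
W (WasCloned): Indicates a route that was auto-configured based upon a local area network (Clone) route.
L (Link): Route involves references to Ethernet (link) hardware.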
On a FreeBSD system, the default route can be defined in /etc/rc.conf by specifying the IP address of the default gateway:
defaultrouter="10.20.30.1"
It is also possible to manually add the route using route:
# route add default 10.20.30.1
31.2.2. Configuring a Router with Static Routes
A FreeBSD system can be configured as the default gateway, or router, for a network if it is a dual-homed system. A dual-homed system is a host which resides on at least two different networks. Typically, each network is connected to a separate network interface, though IP aliasing can be used to bind multiple addresses, each on a different subnet, to one physical interface.
In order for the system to forward packets between interfaces, FreeBSD must be configured as a router. Internet standards and good engineering practice prevent the FreeBSD Project from enabling this feature by default, but it can be configured to start at boot by adding this line to /etc/rc.conf:
gateway_enable="YES" # Set to YES if this host will be a gateway
To enable routing now, set the sysctl(8) variable net.inet.ip.forwarding to 1. To stop routing, reset this variable to 0.
The routing table of a router needs additional routes so it knows how to reach other networks. Routes can be either added manually using static routes or routes can be automatically learned using a routing protocol. Static routes are appropriate for small networks and this section describes how to add a static routing entry for a small network.
Consider the following network:
In this scenario, RouterA is a FreeBSD machine that is acting as a router to the rest of the Internet. It has a default route set to 10.0.0.1 which allows it to connect with the outside world. RouterB is already configured to use 192.168.1.1 as its default gateway.
Before adding any static routes, the routing table on RouterA looks like this:
% netstat -nr
Routing tables

Internet:
Destination        Gateway            Flags    Refs      Use  Netif  Expire
default            10.0.0.1           UGS         0    49378    xl0
127.0.0.1          127.0.0.1          UH          0        6    lo0
10.0.0.0/24        link#1             UC          0        0    xl0
192.168.1.0/24     link#2             UC          0        0    xl1
With the current routing table, RouterA does not have a route to the 192.168.2.0/24 network. The following command adds the Internal Net 2 network to RouterA's routing table using 192.168.1.2 as the next hop:
# route add -net 192.168.2.0/24 192.168.1.2
Now, RouterA can reach any host on the 192.168.2.0/24 network. However, the routing information will not persist if the FreeBSD system reboots. If a static route needs to be persistent, add it to /etc/rc.conf:
# Add Internal Net 2 as a persistent static route
static_routes="internalnet2"
route_internalnet2="-net 192.168.2.0/24 192.168.1.2"
The static_routes configuration variable is a list of strings separated by a space, where each string references a route name. The variable route_internalnet2 contains the static route for that route name.
Using more than one string in static_routes creates multiple static routes. The following shows an example of adding static routes for the 192.168.0.0/24 and 192.168.1.0/24 networks:
static_routes="net1 net2" route_net1="-net 192.168.0.0/24 192.168.0.1" route_net2="-net 192.168.1.0/24 192.168.1.1"
31.2.3. Troubleshooting
When an address space is assigned to a network, the service provider configures their routing tables so that all traffic for the network will be sent to the link for the site. But how do external sites know to send their packets to the network’s ISP?
There is a system that keeps track of all assigned address spaces and defines their point of connection to the Internet backbone, the main trunk lines that carry Internet traffic across the country and around the world. Each backbone machine has a copy of a master set of tables, which direct traffic for a particular network to a specific backbone carrier, and from there down the chain of service providers until it reaches a particular network.
It is the task of the service provider to advertise to the backbone sites that they are the point of connection, and thus the path inward, for a site. This is known as route propagation.
Sometimes, there is a problem with route propagation and some sites are unable to connect. Perhaps the most useful command for trying to figure out where routing is breaking down is traceroute. It is useful when ping fails.
When using traceroute, include the address of the remote host to connect to. The output will show the gateway hosts along the path of the attempt, eventually either reaching the target host, or terminating because of a lack of connection. For more information, refer to traceroute(8).
31.2.4. Multicast Considerations
FreeBSD natively supports both multicast applications and multicast routing. Multicast applications do not require any special configuration in order to run on FreeBSD. Support for multicast routing requires that the following option be compiled into a custom kernel:
options MROUTING
The multicast routing daemon, mrouted can be installed using the net/mrouted package or port. This daemon implements the DVMRP multicast routing protocol and is configured by editing /usr/local/etc/mrouted.conf in order to set up the tunnels and DVMRP. The installation of mrouted also installs map-mbone and mrinfo, as well as their associated man pages. Refer to these for configuration examples.
31.3. Wireless Networking
31.3.1. Wireless Networking Basics
Most wireless networks are based on the IEEE™ 802.11 standards.
31.3.2. Quick Start
31.3.3. Basic Setup
31.3.3.1. Kernel Configuration
31.3.3.2. Setting the Correct Regulatory Domain
31.3.4. Infrastructure Mode
Infrastructure (BSS) mode is the mode that is typically used. In this mode, a number of wireless access points are connected to a wired network. Each wireless network has its own name, called the SSID. Wireless clients connect to the wireless access points.
31.3.4.1. FreeBSD Clients
31.3.4.1.1. How to Find Access Points
31.3.4.1.2. Basic Settings
31.3.4.1.2.1. Selecting an Access Point
31.3.4.1.2.2. Authentication
31.3.4.1.2.3. Getting an IP Address with DHCP
31.3.4.1.2.4. Static IP Address
31.3.4.1.3. WPA
31.3.4.1.3.1. WPA-PSK
31.3.4.1.3.2. WPA with EAP-TLS
31.3.4.1.3.3. WPA with EAP-TTLS
31.3.5. Ad-hoc Mode
31.3.6. FreeBSD Host Access Points
FreeBSD can act as an Access Point (AP) which eliminates the need to buy a hardware AP or run an ad-hoc network. This can be particularly useful when a FreeBSD machine is acting as a gateway to another network such as the Internet.
31.3.6.1. Basic Settings
Before configuring a FreeBSD machine as an AP, the kernel must be configured with the appropriate networking support for the wireless card as well as the security protocols being used. For more details, see Basic Settings.
31.3.6.2. Host-based Access Point Without Authentication or Encryption
31.3.6.3. WPA2 Host-based Access Point
31.3.6.4. WEP Host-based Access Point
31.3.7. Using Both Wired and Wireless Connections
An example of using both wired and wireless connections in a failover configuration is provided in Failover Mode Between Ethernet and Wireless Interfaces.
31.3.8. Troubleshooting
31.4. USB Tethering
Many cellphones provide the option to share their data connection over USB (often called "tethering"). This feature uses either the RNDIS, CDC, or a custom Apple™ iPhone™/iPad™ protocol.
31.5. Bluetooth
Bluetooth is a wireless technology for creating personal networks operating in the 2.4 GHz unlicensed band, with a range of 10 meters.
31.5.1. Loading Bluetooth Support
31.5.2. Finding Other Bluetooth Devices
A baseband connection can also be terminated by hand, though this is normally not required. The stack will automatically terminate inactive baseband connections.
# hccontrol -n ubt0hci disconnect 41
Connection handle: 41
Reason: Connection terminated by local host [0x16]
Type hccontrol help for a complete listing of available HCI commands. Most of the HCI commands do not require superuser privileges.
31.5.3. Device Pairing
31.5.4. Network Access with PPP Profiles
31.5.5. Bluetooth Protocols
This section provides an overview of the various Bluetooth protocols, their function, and associated utilities.
31.5.5.1. Logical Link Control and Adaptation Protocol (L2CAP)
The btsockstat(1) utility does a job similar to netstat(1), but for Bluetooth network-related data structures.
31.5.5.2. Radio Frequency Communication (RFCOMM)
The RFCOMM protocol provides emulation of serial ports over the L2CAP protocol. In FreeBSD, RFCOMM is implemented at the Bluetooth sockets layer.
31.5.5.3. Service Discovery Protocol (SDP)
The Service Discovery Protocol (SDP) provides the means for client applications to discover the existence of services provided by server applications, as well as the attributes of those services. The process of looking for any offered services without prior information about them is called browsing.
The Bluetooth SDP server, sdpd(8), and command line client, sdpcontrol(8), are included in the standard FreeBSD installation.
Note that each service has a list of attributes, such as the RFCOMM channel. Depending on the service, the user might need to make note of some of the attributes. Offering services from FreeBSD to Bluetooth clients is done with the sdpd(8) server. The following line can be added to /etc/rc.conf:
sdpd_enable="YES"
# service sdpd start
31.5.5.4. OBEX Object Push (OPUSH)
Object Exchange (OBEX) is a widely used protocol for simple file transfers between mobile devices.
31.5.5.5. Serial Port Profile (SPP)
31.5.6. Troubleshooting
31.6. Bridging
31.6.1. Enabling the Bridge
31.6.2. Enabling Spanning Tree
In this example, the root bridge has a path cost of 400000 from this bridge, and the path to the root bridge is via port 4, which is fxp0.
31.6.3. Bridge Interface Parameters
31.6.4. SNMP Monitoring
To monitor multiple bridge interfaces, the private BEGEMOT-BRIDGE-MIB can be used:
% snmpset -v 2c -c private bridge1.example.com BEGEMOT-BRIDGE-MIB::begemotBridgeDefaultBridgeIf.0 s bridge2
31.7. Link Aggregation and Failover
- fec / loadbalance
Cisco™ Fast EtherChannel™ (FEC) is found on older Cisco™ switches. It provides a static setup and does not negotiate aggregation with the peer or exchange frames to monitor the link. If the switch supports LACP, that should be used instead.
- lacp
The IEEE™ 802.3ad Link Aggregation Control Protocol (LACP) negotiates a set of aggregable links with the peer into one or more Link Aggregated Groups (LAGs).
31.7.1. Configuration Examples
This example connects two interfaces on a FreeBSD machine to a Cisco™ switch as a single load balanced and fault tolerant link. More interfaces can be added to increase throughput and fault tolerance. Replace the names of the Cisco™ ports, devices, channel numbers, and IP addresses shown in the example to match the local configuration.
switch# show lacp neighbor
Flags:  S - Device is requesting Slow LACPDUs
        F - Device is requesting Fast LACPDUs
        A - Device is in Active mode
        P - Device is in Passive mode

Channel group 1 neighbors

Partner's information:

                  LACP port                        Oper    Port     Port
Port      Flags   Priority  Dev ID         Age     Key     Number   State
Fa0/1     ...

Failover Mode Between Ethernet and Wireless Interfaces
For laptop users, it is often desirable to use the wireless interface only as a fallback when the Ethernet connection is not available. One approach is to replace the physical wireless interface's MAC address with that of the Ethernet interface.
In this example, the Ethernet interface, bge0, is the master and the wireless interface, wlan0, is the failover. The wlan0 device was created from the iwn0 wireless interface, which will be configured with the MAC address of the Ethernet interface. First, determine the MAC address of the Ethernet interface:
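For example (output abridged; the MAC address shown is the one used in the rest of this example):
# ifconfig bge0
bge0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
	ether 00:21:70:da:ae:37
	media: Ethernet autoselect (1000baseT <full-duplex>)
	status: active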
Replace bge0 to match the system's Ethernet interface name. The ether line will contain the MAC address of the specified interface. Now, change the MAC address of the underlying wireless interface:
# ifconfig iwn0 ether 00:21:70:da:ae:37
Bring the wireless interface up, but do not set an IP address:
# ifconfig wlan0 create wlandev iwn0 ssid my_router up
Make sure the bge0 interface is up, then create the lagg(4) interface with bge0 as master with failover to wlan0:
# ifconfig bge0 up # ifconfig lagg0 create # ifconfig lagg0 up laggproto failover laggport bge0 laggport wlan0
The virtual interface should look something like this:
# ifconfig lagg0
lagg0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
	options=8<VLAN_MTU>
	ether 00:21:70:da:ae:37
	media: Ethernet autoselect
	status: active
	laggproto failover
	laggport: wlan0 flags=0<>
	laggport: bge0 flags=5<MASTER,ACTIVE>
Then, start the DHCP client to obtain an IP address:
# dhclient lagg0
To retain this configuration across reboots, add the following entries to /etc/rc.conf:
ifconfig_bge0="up" wlans_iwn0="wlan0" ifconfig_wlan0="WPA" create_args_wlan0="wlanaddr 00:21:70:da:ae:37" cloned_interfaces="lagg0" ifconfig_lagg0="up laggproto failover laggport bge0 laggport wlan0 DHCP"
31.8. Diskless Operation with PXE
The Intel™ Preboot eXecution Environment (PXE) allows an operating system to boot over the network. For example, a FreeBSD system can boot over the network and operate without a local disk, using file systems mounted from an NFS server. PXE support is usually available in the BIOS. To use PXE when the machine starts, select the
Boot from network option in the BIOS setup or type a function key during system initialization.
In order to provide the files needed for an operating system to boot over the network, a PXE setup also requires properly configured DHCP, TFTP, and NFS servers, where:
Initial parameters, such as an IP address, executable boot filename and location, server name, and root path are obtained from the DHCP server.
The operating system loader file is booted using TFTP.
The file systems are loaded using NFS.
When a computer PXE boots, it receives information over DHCP about where to obtain the initial boot loader file. After the host computer receives this information, it downloads the boot loader via TFTP and then executes the boot loader. In FreeBSD, the boot loader file is /boot/pxeboot. After /boot/pxeboot executes, the FreeBSD kernel is loaded and the rest of the FreeBSD bootup sequence proceeds, as described in FreeBSD 開機程序.
This section describes how to configure these services on a FreeBSD system so that other systems can PXE boot into FreeBSD. Refer to diskless(8) for more information.
31.8.1. Setting Up the PXE Environment
The steps shown in this section configure the built-in NFS and TFTP servers. The next section demonstrates how to install and configure the DHCP server. In this example, the directory which will contain the files used by PXE users is /b/tftpboot/FreeBSD/install. It is important that this directory exists and that the same directory name is set in both /etc/inetd.conf and /usr/local/etc/dhcpd.conf.
Create the root directory which will contain a FreeBSD installation to be NFS mounted:
# export NFSROOTDIR=/b/tftpboot/FreeBSD/install
# mkdir -p ${NFSROOTDIR}
Enable the NFS server by adding this line to /etc/rc.conf:
nfs_server_enable="YES"
Export the diskless root directory via NFS by adding the following to /etc/exports:
/b -ro -alldirs
Install FreeBSD into ${NFSROOTDIR}, either by extracting the official distribution archives or by building it from source (see Updating FreeBSD from Source).
In ${NFSROOTDIR}/etc/fstab, mount the root file system over NFS using the hostname or IP address of the NFS server. In this example, the root file system is mounted read-only in order to prevent NFS clients from potentially deleting the contents of the root file system.
Set the root password in the PXE environment for client machines which are PXE booting :
# chroot ${NFSROOTDIR}
# passwd
If needed, enable ssh(1) root logins for client machines which are PXE booting by editing ${NFSROOTDIR}/etc/ssh/sshd_config and enabling
PermitRootLogin. This option is documented in sshd_config(5).
Perform any other needed customizations of the PXE environment in ${NFSROOTDIR}. These customizations could include things like installing packages or editing the password file with vipw(8).
When booting from an NFS root volume, /etc/rc detects the NFS boot and runs /etc/rc.initdiskless. In this case, /etc and /var need to be memory backed file systems so that these directories are writable but the NFS root directory is read-only:
# chroot ${NFSROOTDIR}
# mkdir -p conf/base
# tar -c -v -f conf/base/etc.cpio.gz --format cpio --gzip etc
# tar -c -v -f conf/base/var.cpio.gz --format cpio --gzip var
When the system boots, memory file systems for /etc and /var will be created and mounted and the contents of the cpio.gz files will be copied into them.
31.8.2. Configuring the DHCP Server
The DHCP server does not need to be the same machine as the TFTP and NFS server, but it needs to be accessible in the network.
DHCP is not part of the FreeBSD base system but can be installed using the net/isc-dhcp43-server port or package.
Once installed, edit the configuration file, /usr/local/etc/dhcpd.conf. Configure the next-server, filename, and root-path settings as seen in this example:
subnet 192.168.0.0 netmask 255.255.255.0 {
   range 192.168.0.2 192.168.0.3 ;
   option subnet-mask 255.255.255.0 ;
   option routers 192.168.0.1 ;
   option broadcast-address 192.168.0.255 ;
   option domain-name-servers 192.168.35.35, 192.168.35.36 ;
   option domain-name "example.com";

   # IP address of TFTP server
   next-server 192.168.0.1 ;

   # path of boot loader obtained via tftp
   filename "FreeBSD/install/boot/pxeboot" ;

   # pxeboot boot loader will try to NFS mount this directory for root FS
   option root-path "192.168.0.1:/b/tftpboot/FreeBSD/install/" ;
}
The next-server directive is used to specify the IP address of the TFTP server.
The filename directive defines the path to /boot/pxeboot. A relative filename is used, meaning that /b/tftpboot is not included in the path.
The root-path option defines the path to the NFS root file system.
Once the edits are saved, enable DHCP at boot time by adding the following line to /etc/rc.conf:
dhcpd_enable="YES"
Then start the DHCP service:
# service isc-dhcpd start
31.8.3. Debugging PXE Problems
Once all of the services are configured and started, PXE clients should be able to automatically load FreeBSD over the network. If a particular client is unable to connect, when that client machine boots up, enter the BIOS configuration menu and confirm that it is set to boot from the network.
This section describes some troubleshooting tips for isolating the source of the configuration problem should no clients be able to PXE boot.
Use the net/wireshark package or port to debug the network traffic involved during the PXE booting process, which is illustrated in the diagram below.
Figure 1. The PXE booting process with an NFS root mount
On the TFTP server, read /var/log/xferlog to ensure that pxeboot is being retrieved from the correct location. To test this example configuration:
# tftp 192.168.0.1
tftp> get FreeBSD/install/boot/pxeboot
Received 264951 bytes in 0.1 seconds
Make sure that the root file system can be mounted via NFS. To test this example configuration:
# mount -t nfs 192.168.0.1:/b/tftpboot/FreeBSD/install /mnt
31.9. IPv6
IPv6 is the new version of the well known IP protocol, also known as IPv4. IPv6 provides several advantages over IPv4 as well as many new features:
Its 128-bit address space allows for 340,282,366,920,938,463,463,374,607,431,768,211,456 addresses. This addresses the IPv4 address shortage and eventual IPv4 address exhaustion.
Routers only store network aggregation addresses in their routing tables, thus reducing the average space of a routing table to 8192 entries. This addresses the scalability issues associated with IPv4, which required every allocated block of IPv4 addresses to be exchanged between Internet routers, causing their routing tables to become too large to allow efficient routing.
Address autoconfiguration (RFC2462).
Mandatory multicast addresses.
Built-in IPsec (IP security).
Simplified header structure.
Support for mobile IP.
IPv6-to-IPv4 transition mechanisms.
FreeBSD includes the reference implementation and comes with everything needed to use IPv6. This section focuses on getting IPv6 configured and running.
31.9.1. IPv6 位址的背景知識
There are three different types of IPv6 addresses:
- Unicast
A packet sent to a unicast address arrives at the interface belonging to the address.
- Anycast
These addresses are syntactically indistinguishable from unicast addresses but they address a group of interfaces. The packet destined for an anycast address will arrive at the nearest router interface. Anycast addresses are only used by routers.
- Multicast
These addresses identify a group of interfaces. A packet destined for a multicast address will arrive at all interfaces belonging to the multicast group. The IPv4 broadcast address, usually xxx.xxx.xxx.255, is expressed by multicast addresses in IPv6.
When reading an IPv6 address, the canonical form is represented as x:x:x:x:x:x:x:x, where each x represents a 16 bit hex value. An example is FEBC:A574:382B:23C1:AA49:4592:4EFE:9982.
Often, an address will have long substrings of all zeros. A :: (double colon) can be used to replace one substring per address. Also, up to three leading 0s per hex value can be omitted. For example, fe80::1 corresponds to the canonical form fe80:0000:0000:0000:0000:0000:0000:0001.
A third form is to write the last 32 bits using the well known IPv4 notation. For example, 2002::10.0.0.1 corresponds to the hexadecimal canonical representation 2002:0000:0000:0000:0000:0000:0a00:0001, which in turn is equivalent to 2002::a00:1.
To view a FreeBSD system’s IPv6 address, use ifconfig(8):
# ifconfig
In this example, the rl0 interface is using fe80::200:21ff:fe03:8e1%rl0, an auto-configured link-local address which was automatically generated from the MAC address.
Some IPv6 addresses are reserved. A summary of these reserved addresses is seen in Reserved IPv6 Addresses:
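The commonly cited reserved addresses and prefixes include the following (see RFC 4291 for the authoritative list):
::/128: the unspecified address, the IPv6 equivalent of 0.0.0.0
::1/128: the loopback address (localhost)
fe80::/10: link-local addresses, valid only on the local link
fc00::/7: unique-local addresses, comparable to the IPv4 private address ranges
ff00::/8: multicast addresses
2000::/3: global unicast addresses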
31.9.2. Configuring IPv6
To configure a FreeBSD system as an IPv6 client, add these two lines to rc.conf:
ifconfig_rl0_ipv6="inet6 accept_rtadv" rtsold_enable="YES"
The first line enables the specified interface to receive router advertisement messages. The second line enables the router solicitation daemon, rtsol(8).
If the interface needs a statically assigned IPv6 address, add an entry to specify the static address and associated prefix length:
ifconfig_rl0_ipv6="inet6 2001:db8:4672:6565:2026:5043:2d42:5344 prefixlen 64"
To assign a default router, specify its address:
ipv6_defaultrouter="2001:db8:4672:6565::1"
31.9.3. Connecting to a Provider
Next, configure that interface with the IPv4 addresses of the local and remote endpoints. Replace MY_IPv4_ADDR and REMOTE_IPv4_ADDR with the actual IPv4 addresses:
create_args_gif0="tunnel MY_IPv4_ADDR REMOTE_IPv4_ADDR"
To apply the IPv6 address that has been assigned for use as the IPv6 tunnel endpoint, add this line, replacing MY_ASSIGNED_IPv6_TUNNEL_ENDPOINT_ADDR with the assigned address:
ifconfig_gif0_ipv6="inet6 MY_ASSIGNED_IPv6_TUNNEL_ENDPOINT_ADDR"
Then, set the default route for the other side of the IPv6 tunnel. Replace MY_IPv6_REMOTE_TUNNEL_ENDPOINT_ADDR with the default gateway address assigned by the provider:
ipv6_defaultrouter="MY_IPv6_REMOTE_TUNNEL_ENDPOINT_ADDR"
If the FreeBSD system will route IPv6 packets between the rest of the network and the world, enable the gateway using this line:
ipv6_gateway_enable="YES"
31.9.4. Router Advertisement and Host Auto Configuration
rtadvd_enable="YES"
It is important to specify the interface on which to do IPv6 router advertisement. For example, to tell rtadvd(8) to use rl0:
rtadvd_interfaces="rl0"
Next, create the configuration file, /etc/rtadvd.conf as seen in this example:
rl0:\ :addrs#1:addr="2001:db8:1f11:246::":prefixlen#64:tc=ether:
Replace rl0 with the interface to be used and 2001:db8:1f11:246:: with the prefix of the allocation.
For a dedicated /64 subnet, nothing else needs to be changed. Otherwise, change the prefixlen# to the correct value.
31.9.5. IPv6 and IPv4 Address Mapping
When IPv6 is enabled on a server, there may be a need to enable IPv4 mapped IPv6 address communication. This compatibility option allows for IPv4 addresses to be represented as IPv6 addresses. Permitting IPv6 applications to communicate with IPv4 and vice versa may be a security issue.
This option may not be required in most cases and is available only for compatibility. This option will allow IPv6-only applications to work with IPv4 in a dual stack environment. This is most useful for third party applications which may not support an IPv6-only environment. To enable this feature, add the following to /etc/rc.conf:
ipv6_ipv4mapping="YES"
Reviewing the information in RFC 3493, section 3.6 and 3.7 as well as RFC 4038 section 4.2 may be useful to some administrators.
31.10. Common Address Redundancy Protocol (CARP)
31.10.1. Using CARP on FreeBSD 10 and Later
31.10.2. Using CARP on FreeBSD 9 and Earlier
31.11. VLANs
VLANs are a way of virtually dividing up a network into many different subnetworks, also referred to as segmenting. Each segment will have its own broadcast domain and be isolated from other VLANs.
'static routes'], dtype=object) ] | docs.freebsd.org |
Constructs new ActorRenderer object without parts
Constructs new ActorRenderer object, based on one of default Minecraft render templates
default template name
Adds a child model part of an existing one
child model name
parent model name
Adds a root model part
The class upon which armor and attachment rendering is based. It is a model that consists of parts, the same as in the deprecated Render class, but more abstract: it allows creating root parts instead of inheriting from the old humanoid model.
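A hypothetical usage sketch is shown below. The constructor arguments mirror the descriptions above, but the method name addPart and its parameters are assumptions for illustration, not confirmed API signatures:
// Create a renderer based on a default Minecraft render template (template name is an example)
var renderer = new ActorRenderer("humanoid");
// Add a root model part, then a child part attached to it (part names are illustrative)
renderer.addPart("body");
renderer.addPart("leftPauldron", "body");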
Dataset Collection System
The Dataset Collection System enables developers to capture images, model predictions, and model-based annotations directly from their mobile app.
Core Concepts
User-generated Images
User-generated images are captured during real-world use of your application. Because they are collected from the end user, they are most representative of how your model is used in production, making them extremely valuable for monitoring model accuracy and retraining models over time.
Model Annotations
Model annotations are annotations (keypoints, bounding boxes, etc.) generated from predictions made with model-based images. By pairing images with model predictions, developers and product managers gain full visibility into what users experience when using an app in production.
User Annotations
User annotations are annotations (keypoints, bounding boxes, etc.) submitted by a user. They represent the model output a user expected. User annotations complement model annotations by allowing app makers to measure the gap between actual and expected behavior. User annotations also function as ground-truth data for model retraining.
Model-based Image Collection
A model-based image collection stores all images, model annotations, and user annotations associated with a single model.
note
Privacy is important and one of the main benefits of on-device machine learning. When collecting data from users, always make sure you have their explicit permission.
Getting Started
Currently the Dataset Collection System only supports iOS and pose estimation models. To request accuess or inquire about support for a specific use case, contact us.
1. Register your model with the Fritz SDK
Once you have created a Fritz AI account and been granted access, you'll need to upload a model and register it with the SDK.
2. Create a Model-based Image Collection
From the Datasets tab in the webapp, select
ADD IMAGE COLLECTION. Select the "Model-based" radio button and click Next. model-based image collection, any missing objects are added.
3. Use the record method on the predictor to collect data
Each FritzVision predictor has a record method that allows you to send the model's input image, output predictions, and any user-modified annotations back to the Fritz webapp.
With images and model predictions, developers can assess model performance and identify common errors. User-modified annotations provide an opportunity to crowdsource annotations from real-world use.
Implementation details for specific predictors and platforms can be found in documentation for each individual model type:
4. Inspect collected images and annotations in the browser
Images and annotations can be viewed in the model-based image collection created in the browser. Select a given image to see additional details and switch between model and user annotations. Click the CREATE DATASET button to create a COCO-formatted export of your collection that can be used for measuring model accuracy or retraining.
'Dataset Collection Header'], dtype=object)
array(['/img/dataset/dataset_collection_view.jpg',
'Dataset Collection Header'], dtype=object)] | docs.fritz.ai |
publishing-api: RabbitMQ
For information about how we use RabbitMQ, see here.
Post-publishing notifications
After an edition is changed, a message is published to RabbitMQ. It will be
published to the
published_documents topic exchange with the routing_key
"#{edition.schema_name}.#{event_type}". Interested parties can subscribe to
this exchange to perform post-publishing actions. For example, a search
indexing service would be able to add/update the search index based on these
messages. Or an email notification service would be able to send email updates
(see).
event_type
major: Used when an edition is published with an
update_typeof major.
minor: Used when an edition is published with an
update_typeof minor.
republish: Used when an edition is published with an
update_typeof republish.
links: Used whenever links related to an edition have changed.
unpublish: Used when an edition is unpublished.
Previewing a message for a document_type
After content is updated, a message is generated and published to the RabbitMQ exchange. Each message has a shared format, however the contents of the message is affected by the publishing app and what data it sends over.
As messages for different formats can vary, we have created a rake task to allow us to easily generate example messages. The example message is generated from the most recently published message (based off of last public_updated_at) for the entered document type:
$ bundle exec rake queue:preview_recent_message[<document_type>] | https://docs.publishing.service.gov.uk/apps/publishing-api/rabbitmq.html | 2021-06-12T20:47:13 | CC-MAIN-2021-25 | 1623487586390.4 | [] | docs.publishing.service.gov.uk |
public abstract class DecoratingNavigationHandler extends NavigationHandler
handleNavigation(javax.faces.context.FacesContext, String, String, NavigationHandler),
DelegatingNavigationHandlerProxy
Methods declared in this class: handleNavigation
Methods inherited from class java.lang.Object: clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
protected DecoratingNavigationHandler()
protected DecoratingNavigationHandler(NavigationHandler originalNavigationHandler)
originalNavigationHandler- the original NavigationHandler to decorate
public final NavigationHandler getDecoratedNavigationHandler()
public final void handleNavigation(FacesContext facesContext, String fromAction, String outcome)
This implementation of the standard JSF handleNavigation method delegates to the overloaded variant, passing in the constructor-injected NavigationHandler as argument.
Specified by: handleNavigation in class NavigationHandler
handleNavigation(javax.faces.context.FacesContext, String, String, javax.faces.application.NavigationHandler)
public abstract void handleNavigation(FacesContext facesContext, String fromAction, String outcome, NavigationHandler originalNavigationHandler)
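As an illustration, a concrete subclass might look like the following sketch. The logging shown is only a placeholder for real pre- or post-processing, and callNextHandlerInChain is the protected helper this class provides for proceeding with the chain:
import javax.faces.application.NavigationHandler;
import javax.faces.context.FacesContext;
import org.springframework.web.jsf.DecoratingNavigationHandler;

public class LoggingNavigationHandler extends DecoratingNavigationHandler {

    @Override
    public void handleNavigation(FacesContext facesContext, String fromAction, String outcome,
            NavigationHandler originalNavigationHandler) {
        // Placeholder pre-processing before the actual navigation
        System.out.println("Navigating from action " + fromAction + " to outcome " + outcome);
        // Delegate to the next handler in the chain (or the original handler)
        callNextHandlerInChain(facesContext, fromAction, outcome, originalNavigationHandler);
    }
}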
Creating Releases¶
This document serves as a guideline on how to create releases of your project. Additionally, it may provide pointers about best practices and development workflows.
Assuming that you start on the development branch with a x.x.x-SNAPSHOT version, you should ensure that your changelog is complete.
The next step is then to bump your version to a release version:
$ mlf-core bump-version x.x.x .
Then create a release/x.x.x branch and submit a pull request from it against the master branch.
Any people that you have worked with, including yourself, should now review this pull request and fix any remaining issues.
After pull request approval you merge the pull request into the master branch. Afterwards, create a release on GitHub with the tag x.x.x, insert your changelog into the description, and add any additional details that you deem important. A new Docker container should now be building with the latest version.
Switch back to the development branch and merge the latest master branch into it. Next, according to semantic versioning and your planned features, bump the version to a higher -SNAPSHOT version. The changelog will automatically add sections.
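A typical command sequence for this flow might look like the sketch below; branch names and version numbers are examples only:
# on the development branch, with the changelog complete
mlf-core bump-version 1.2.0 .
git checkout -b release/1.2.0
git push origin release/1.2.0      # open a pull request against master from this branch

# after the pull request is merged and the GitHub release is created
git checkout development
git merge master
mlf-core bump-version 1.3.0-SNAPSHOT .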
Tutorial¶
Disclaimer¶
Warning
This document serves as a single page tutorial for mlf-core, the issue of deterministic machine learning and everything related. It is not supposed to be used as a reference documentation for specific pieces of information. Please use the remaining mlf-core or the respective tools’ documentation for this purpose. Although, mlf-core is designed with users in mind and as easy as possible it is inherently complex due to the nature of the issue it solves. Hence, please be patient while working through this tutorial.
Introduction¶
The fields of machine learning and artificial intelligence grew immensely in recent years. Nevertheless, many papers cannot be reproduced and it is difficult for scientists even after rigorous peer review to know which results to trust. This serious problem is known as the reproducibility crisis in machine learning. The reasons for this issue are manifold, but include the fact that major machine learning libraries default to the usage of non-deterministic algorithms based on atomic operations. Solely fixing all random seeds is not sufficient for deterministic machine learning. Fortunately, major machine learning libraries such as Pytorch, Tensorflow and XGBoost are aware of these issues and they are slowly providing more and more deterministic variants of these atomic operations based algorithms. We evaluated the current state of deterministic machine learning and formulated a set of requirements for fully reproducible machine learning even with several GPUs. Based on this evaluation we developed the mlf-core ecosystem, an intuitive software solution solving the issue of irreproducible machine learning.
mlf-core Overview¶
The mlf-core ecosystem consists of the primary Python packages mlf-core and system-intelligence, a set of GPU-enabled Docker containers and various fully reproducible machine learning projects found in the mlf-core Github organization.
This tutorial will primarily focus on the mlf-core Python package since it is the part that users will knowingly use the most. Additionally, mlf-core makes heavy use of Conda, Docker, Github and Github Actions. To follow the tutorial you should also have Conda, Docker and nvidia-docker installed and tested. Please follow the respective installation instructions found on the tools’ websites. We strongly suggest that you look for tutorials on Youtube or your favorite search engine to get comfortable with these technologies before proceeding further. Whenever we use more advanced features of these tools we will explain them. Therefore you don’t need to be an expert, but a good overview is helpful.
Installation¶
The mlf-core Python package is available on PyPI and the latest version can be installed with
$ pip install mlf-core
It is advised to use a virtual environment for mlf-core since it relies on explicitly pinning many requirements. To verify that your installation was successful run:
$ mlf-core --help
Configuration¶
mlf-core tightly (optionally, but strongly recommended) integrates with Github and wants to prevent overhead when creating several projects. Therefore mlf-core requires a little bit of configuration before the first usage. To configure mlf-core run:
$ mlf-core config all
Enter your full name, your email and your Github username (hit enter if not available). Next you will be asked whether you want to update your Github personal access token. mlf-core requires your Github access token to automatically create a Github repository to upload your code and to enable mlf-core’s sync functionality (explained later). Hence, answer with y. Now you will be prompted for the token. To create a token go to Github and log in. Next, click on your profile avatar and navigate to ‘Settings’.
Now navigate to the ‘Developer settings’.
Click on ‘Developer settings’ in the bottom left. Then access ‘Personal access token’ and click ‘Generate new token’ in the top right. You should now be prompted for your password. Enter a name for the note that clearly specifies what it is for, e.g. ‘mlf-core token’. Tick all options in the following image:
Select all of the options ticked in the screenshot. No additional options are required, especially not repository deletion.¶
Click ‘Generate token’ at the very bottom and copy your token into the prompt of mlf-core. Hit enter and accept the update. mlf-core is now configured and ready to be used!
For more details including security precautions please visit Configure mlf-core and Github Support.
Creating a mlf-core project¶
mlf-core offers templates for several machine learning libraries. To get an overview of all available machine learning templates run:
$ mlf-core list
If you want a more detailed overview you can also run:
$ mlf-core info <template-handle/type/library>
A more detailed overview on all available templates is provided here.
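For example, to have a closer look at the template that is used later in this tutorial:

$ mlf-core info mlflow-pytorch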
In the following sections we will create and focus on a Pytorch based template identified under the template handle
mlflow-pytorch.
The outlined processes work the same for all other templates.
To create a mlf-core project run:
$ mlf-core create
Select mlflow and pytorch afterwards. Enter a project name, a project description, hit enter for the version prompt and select a license of your choosing. MIT and the Apache 2.0 license are common choices. Next, hit the y button when asked whether you want to create a Github repository and push your code to it. If you select n as in no and create a Github repository manually, mlf-core will not be able to set up required secrets for features such as Docker container building and mlf-core sync. Answer the remaining prompts with y or n as you prefer. The project creation process will now end with mlf-core lint verifying the successful creation of your project and the link to your Github repository being printed.
mlf-core project overview¶
Using tree we identify the following file structure:
├── .bandit.yml            <- Configuration file for Bandit (identifies security issues in the code)
├── CHANGELOG.rst          <- Changelog of the project (controlled by mlf-core bump-version)
├── CODE_OF_CONDUCT.rst
├── Dockerfile             <- Dockerfile specifying how the Docker container is built; Uses the environment.yml file to create a Conda environment inside the container
├── docs
│   ├── authors.rst
│   ├── changelog.rst
│   ├── code_of_conduct.rst
│   ├── conf.py            <- Sphinx configuration file
│   ├── index.rst          <- Root of the documentation; defines the toctree
│   ├── make.bat           <- Windows version of the Makefile
│   ├── Makefile           <- Makefile for the documentation (run make html to build the documentation)
│   ├── model.rst          <- Model documentation
│   ├── readme.rst
│   ├── requirements.txt   <- Defines Python dependencies for the documentation
│   ├── _static
│   │   └── custom_cookietemple.css  <- Custom dark documentation style
│   └── usage.rst          <- How to use the mlf-core model
├── .editorconfig          <- Configuration for IDEs and editors
├── environment.yml        <- Defines all dependencies for your project; Used to create a Conda environment inside the Docker container
├── project_name
│   ├── data_loading
│   │   ├── data_loader.py <- Loading and preprocessing of training/testing data
│   ├── mlf_core
│   │   └── mlf_core.py    <- mlf-core internal code to run system-intelligence and advanced logging; Should usually not be modified
│   ├── model
│   │   ├── model.py       <- Model architecture
│   ├── project_name.py    <- Entry point for MLflow; Connects all pieces
├── .flake8                <- flake8 configuration file (lints code style)
├── .gitattributes         <- git configuration file
├── .github
│   ├── ISSUE_TEMPLATE
│   │   ├── bug_report.md
│   │   ├── feature_request.md
│   │   └── general_question.md
│   ├── pull_request_template.md
│   └── workflows
│       ├── lint.yml       <- Runs mlf-core lint and flake8 on push events
│       ├── master_branch_protection.yml  <- Protects the master branch from non-release merges
│       ├── publish_docker.yml  <- Publishes the Docker container on Github Packages (or alternatives)
│       ├── publish_docs.yml    <- Publishes the documentation on Github Pages or Read the Docs
│       ├── sync.yml       <- Checks for new mlf-core template versions and triggers a PR with changes if found; Runs daily
│       └── train_cpu.yml  <- Trains the model with a reduced dataset on the CPU
├── .gitignore
├── LICENSE
├── mlf_core.cfg           <- mlf-core configuration file (sync, bump-version, linting, ...)
├── .mlf_core.yml          <- Meta information of the mlf_core.yml file; Do not edit!
├── MLproject              <- MLflow Project file; Defines entry point and parameters
├── README.rst
└── .readthedocs.yml       <- Read the Docs configuration file
Now would be a good time to explore the specific files to understand how everything is connected. Do not worry if there appears to be an overwhelming number of files. With just a little bit of experience you will easily understand which files you should edit and which ones can be safely ignored. We will now examine a couple of files more closely. Note that for visual reasons a couple of lines are removed in this tutorial.
CI & CD with Github Actions¶
All mlf-core based projects use Github Actions for continous integration (CI) and continous development (CD). As soon as your project is on Github all Github Actions are enabled automatically. The purpose of these workflows will be explained throughout this tutorial.
MLProject¶
The MLproject file is the primary configuration file for MLflow. It defines with which runtime environment the project is run, configures them and configures MLflow entry points.
name: project_name

# conda_env: environment.yml
docker_env:
  image: ghcr.io/github_user/project_name:0.1.0-SNAPSHOT
  volumes: ["${PWD}/data:/data"]
  environment: [["MLF_CORE_DOCKER_RUN", "TRUE"], ["CUBLAS_WORKSPACE_CONFIG", ":4096:8"]]

entry_points:
  main:
    parameters:
      max_epochs: {type: int, default: 5}
      gpus: {type: int, default: 0}
      accelerator: {type: str, default: "None"}
      lr: {type: float, default: 0.01}
      general-seed: {type: int, default: 0}
      pytorch-seed: {type: int, default: 0}
    command: |
      python project_name/project_name.py \
        --max_epochs {max_epochs} \
        --gpus {gpus} \
        --accelerator {accelerator} \
        --lr {lr} \
        --general-seed {general-seed} \
        --pytorch-seed {pytorch-seed}
mlf-core projects by default run with Docker. If you prefer to run your project with Conda you need to comment in
conda_env and comment out
docker_env and its associated configuration. We are currently working on easing this switching, but for now it is a MLflow limitation.
The image by default points to the Docker image built on Github Packages; this build happens automatically on project creation.
Moreover, all runs mount the data directory in the root folder of the project to
/data inside the container.
Therefore, you need to ensure that your data either resides in the data folder of your project or adapt the mounted volumes to include your training data.
mlf-core also presets environment variables required for deterministic machine learning. Do not modify them without an exceptional reason.
Finally, the
project_name.py file is set as an entry point and all parameters are defined and passed with MLflow.
Dockerfile¶
The Dockerfile usually does not need to be adapted. It is based on a custom mlf-core base container which provides CUDA, Conda and other utilities.
FROM mlfcore/base:1.2.0

# Install the conda environment
COPY environment.yml .
RUN conda env create -f environment.yml && conda clean -a

# Activate the environment
RUN echo "source activate exploding_springfield" >> ~/.bashrc
ENV PATH /home/user/miniconda/envs/exploding_springfield/bin:$PATH

# Dump the details of the installed packages to a file for posterity
RUN conda env export --name exploding_springfield > exploding_springfield_environment.yml
The Docker container simply uses the environment.yml file to create a Conda environment and activates it. You can find the base container definitions in the mlf-core containers repository.
environment.yml¶
The environment.yml file is used both for running the mlf-core project with Conda and for creating the Conda environment inside the Docker container.
Therefore you only need to specify your dependencies once in this file.
Try to always define all dependencies from Conda channels if possible and only add PyPI dependencies if a Conda version is not available.
However, note that only the version combinations of the template were tested to be deterministic and to create valid environments.
We encourage you to regularly upgrade your dependencies, but do so at your own risk!
name: project_name
channels:
  - defaults
  - conda-forge
  - pytorch
dependencies:
  - defaults::cudatoolkit=11.0.221
  - defaults::python=3.8.2
  - conda-forge::tensorboardx=2.1
  - conda-forge::mlflow=1.13.1
  - conda-forge::rich=9.10.0
  - pytorch::pytorch=1.7.1
  - pytorch::torchvision=0.8.2
  - pytorch-lightning==1.1.8
  - pip
  - pip:
    - pycuda==2019.1.2  # not on Conda
    - cloudpickle==1.6.0
    - boto3==1.17.7
    - system-intelligence==2.0.2
If you have dependencies that are not available on Conda nor PyPI you can adapt the Docker container.
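As an illustrative sketch only: a system-level package could be installed by adding a line like the following to the Dockerfile. This assumes the mlfcore base image is Debian/Ubuntu-based, graphviz is just a stand-in package name, and depending on the base image you may also need to switch to the root user first.

# Install additional system-level dependencies (illustrative package name)
RUN apt-get update \
    && apt-get install -y --no-install-recommends graphviz \
    && rm -rf /var/lib/apt/lists/*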
Post project creation TODOs¶
mlf-core tries to automate as much as possible, but some minor actions need to be done manually.
Public Docker container on Github Packages¶
mlf-core by default pushes the Docker container using the
publish_docker.yml Github Actions workflow to Github Packages.
If you want to push your Docker container to a different registry you need to adapt the workflow and potentially update the username and add a Github secret for your password.
By default, containers pushed to Github are private. As a result you would need to log in to pull the container.
Hence, you have to make your Docker container public by navigating to the used Github account, selecting
Packages and then your package.
As of writing this, there is a bug in the GitHub UI that hides private images unless a visibility filter is selected. Click visibility, then private, and select your Docker image.
On the right you will find a button
package settings.
Scroll down on the package settings page and at the bottom you will find a button
Change visibility.
Select Public, type in your project name, click it, authenticate and your Github container is now public!
Be aware of the fact that building the Docker container usually takes 15-20 minutes and therefore your Docker container will not immediately show up in the Packages tab.
Publish documentation on Github Pages or Read the Docs¶
mlf-core projects offers a Sphinx based documentation setup which can easily be hosted on either Github Pages or Read the Docs. The choice is yours. Note that you may need to update the badge in the README of your project.
Github Pages¶
The
publish_docs.yml Github action pushes your built documentation automatically to a branch called
gh-pages.
Hence, you only need to enable Github Pages on this branch.
Please follow the final steps (6-8 at time of writing) of the official Github - creating your site documentation.
Read the Docs¶
Please follow the official Read the Docs - Building your documentation documentation.
Training models with mlf-core¶
mlf-core models are designed to easily run on any hardware with the same runtime environment.
First, select the runtime environment by commenting either Conda or Docker in or out as described above.
Depending on the used template the commands for training a model on the CPU, a GPU or multiple GPUs may slightly differ.
In all cases they are described in the usage.rst file.
Remember that MLflow parameters are passed as
-P key=val and Docker parameters as
-A key=val or
-A key.
For our just created
mlflow-pytorch project, assuming that we are in the root directory of the project, we run our project as follows:
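The exact commands for your project are listed in its usage.rst file; the following is only a sketch of typical invocations, using the parameter names defined in the MLproject file shown above:

# Train on the CPU with default parameters
$ mlflow run .

# Train on a single GPU (the -A flag forwards the gpus argument to docker run)
$ mlflow run . -A gpus=all -P gpus=1

# Train on two GPUs using distributed data parallel
$ mlflow run . -A gpus=all -P gpus=2 -P accelerator=ddp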
Interactive visualization¶
Congratulations, you have just trained your first GPU deterministic model! All metrics and models are saved in the
mlruns directory.
A couple of metrics were already printed onto the terminal. However, due to the tight MLflow integration there are more ways to visualize our results.
mlflow UI¶
To open the mlflow UI simply run
mlflow ui in the root directory of your project.
Note that if you trained on a different machine than the one on which you now want to open the MLflow web interface, you should run
mlf-core fix-artifact-paths on the local machine.
This will ensure that all artifacts are visible. Open the URL shown in the terminal in your browser.
You should be greeted with something like this:
All runs are grouped into experiments together with a run status. Simply click on a specific run to see more details:
When clicking on one of the metrics you can also view for example a line plot of the performance over time or per epoch.
The MLflow web interface can also be hosted somewhere and be made accessible to other collaborators. Consult the MLflow documentation for this purpose.
Serving a mlf-core model¶
A benefit of MLflow is that it allows you to easily serve your model to make it available to other users:
$ mlflow models serve -m <path to the model>
will spin up a server to which you can send requests and from which you will receive predictions as answers! Please follow the MLflow deployment documentation.
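Once the server is running (by default on port 5000), a request could look roughly like the following. The column name is purely illustrative and the exact JSON payload format depends on your model signature and MLflow version:

$ curl http://127.0.0.1:5000/invocations -H 'Content-Type: application/json' -d '{"columns": ["x"], "data": [[0.5]]}'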
Developing mlf-core projects¶
mlf-core offers additional functionality that eases development. A subset of these features and general development tips are the focus of this section.
git branches and development flow¶
As soon as your project is pushed to Github you will see that four branches are used:
A master/main branch. This branch should at any point only contain the latest release.
A development branch. Use this branch to collect all development milestones.
A TEMPLATE branch. This branch is used for syncing (see below). Do not touch it.
A gh-pages branch. The built documentation is pushed to this branch. You should not have to edit it manually.
While developing always merge first into the
development branch.
If you think that your code is ready to become a new release create a release branch such as:
release-1.0.0.
Now open a pull request from the release branch into the
master branch and have any collaborators review it.
When ready merge it into the master branch and create a new Github release. This will trigger a release build of your Docker container.
Rebuilding the Docker container¶
Whenever you add new libraries to the
environment.yml file simply push to the development branch.
Your Docker container will rebuild and overwrite the latest development container.
Increasing the project version with mlf-core bump-version¶
Increasing the version of a project across several files is cumbersome.
Hence, mlf-core offers a
mlf-core bump-version command.
Considering that a usual project starts as a
0.1.0-SNAPSHOT version (SNAPSHOT equals unstable development version) you should,
following the development flow introduced above, increase the version on the release branch:
$ mlf-core bump-version 0.1.0 .
This will update the version of all files and add a new section in the changelog which you should continuously keep up to date. For more details please visit Bumping the version of an existing project.
Ensuring determinism with mlf-core lint¶
Determinism is the heart and soul of mlf-core projects. Ideally you, as a user of mlf-core, do not need to know how mlf-core ensures determinism behind the scenes. The only thing that you have to do is to periodically run:
$ mlf-core lint
on your project. You will be made aware of any violations of known non-determinism and how to fix them. This ensures that you can fix the issues by yourself and learn in the process without requiring expert knowledge beforehand.
Example of a mlf-core lint run. The usage of the function
bincount was found, which is known to operate non-deterministically. It has to be replaced.¶
mlf-core lint is also run on any push event to any branch on your Github repository.
For more details please read Linting your project.
Utilizing the MLFCore singleton class¶
When you start to build your model you will notice several
MLFCore function calls already built in.
These calls set all required random seeds and log the hardware together with the runtime environment.
Moreover, the
MLFCore singleton allows for data tracking with MD5 sums.
These functions can be found in
mlf_core/mlf_core.py if you want to peek under the hood.
Usually they should neither be modified nor removed without any strong reason.
It’s also maintained by the linter in case anything gets changed by accident.
To log your input data use:
from mlf_core.mlf_core import MLFCore

MLFCore.log_input_data('data/')
Keeping mlf-core based projects up to date with mlf-core sync¶
mlf-core continously tries to update all project templates to adhere to the latest best practices and requirements for deterministic machine learning. Whenever mlf-core releases a new version and updated templates you will automatically receive a pull request with the latest changes. You should then try to integrate them as fast as possible and to create a minor release.
For more details and configuration options please visit Syncing your project.
Contributing to mlf-core¶
There are various ways of contributing to mlf-core. First you can make your best practice model available by forking your project to the mlf-core organization or by developing it there directly. Be aware that we would like to discuss this first with you to ensure that only well developed or finished projects are in the mlf-core organization. This increases the visibility of your project and is a seal of quality. Moreover, you can join the Community Discord via this link. We are looking forward to meeting you and are always available to help if required! | https://mlf-core.readthedocs.io/en/latest/tutorial.html | 2021-06-12T19:40:45 | CC-MAIN-2021-25 | 1623487586390.4 | [array(['_images/navigate_developer_settings.png',
'Github settings navigation'], dtype=object)
array(['_images/token_settings.png', 'Github settings navigation'],
dtype=object)
array(['_images/step_1.png', 'Creating a public docker image'],
dtype=object)
array(['_images/step_2.png', 'Private image bug'], dtype=object)
array(['_images/step_3.png', 'Click your image'], dtype=object)
array(['_images/step_4.png', 'Click Package Settings'], dtype=object)
array(['_images/step_5.png', 'Click Change visibility'], dtype=object)
array(['_images/step_6.png', 'Click Public'], dtype=object)
array(['_images/linting_example.png', 'mlf-core lint example'],
dtype=object) ] | mlf-core.readthedocs.io |
You are viewing documentation for Kubernetes version: v1.20
Kubernetes v1.20 documentation is no longer actively maintained. The version you are currently viewing is a static snapshot. For up-to-date documentation, see the latest version.
Synopsis
Checks expiration for the certificates in the local PKI managed by kubeadm.
kubeadm certs check-expiration [flags]
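For example, to check the certificates stored in a specific certificates directory (the path shown is the usual default and purely illustrative):

kubeadm certs check-expiration --cert-dir /etc/kubernetes/pki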
Options
Options inherited from parent commands
Last modified November 12, 2020 at 9:28 PM PST: kubeadm: promote the "kubeadm certs" command to GA (#24410) (d0c6d303c) | https://v1-20.docs.kubernetes.io/docs/reference/setup-tools/kubeadm/generated/kubeadm_certs_check-expiration/ | 2021-06-12T21:15:08 | CC-MAIN-2021-25 | 1623487586390.4 | [] | v1-20.docs.kubernetes.io |
Introduction to Creating REST Services
This chapter introduces REST and REST services in InterSystems IRIS®.
Introduction to REST
REST, which is named from “Representational State Transfer,” has the following attributes:
REST is an architectural style rather than a format. It more commonly uses JSON, which is a lightweight data wrapper. JSON identifies data with tags, but the tags are not specified in a formal schema definition and do not have explicit data types.
Introduction to InterSystems REST Services
There are two ways to define REST interfaces in InterSystems IRIS 2019.2 and later:
Specification-first definition — you first create an OpenAPI 2.0 specification and then use the API Management tools to generate the code for the REST interface.
Manually coding the REST interface.
Using a specification-first definition, an InterSystems REST service formally consists of the following components:
A specification class (a subclass of %REST.Spec). This class contains the OpenAPI 2.0 specification for the REST service. InterSystems supports several extension attributes that you can use within the specification.
A dispatch class (a subclass of %CSP.REST). This class is responsible for receiving HTTP requests and calling suitable methods in the implementation class.
An implementation class (a subclass of %REST.Impl). This class defines the methods that implement the REST calls.
The API management tools generate a stub version of the implementation class, which you then expand to include the necessary application logic. (Your logic can of course invoke code outside of this class.)
The %REST.Impl class provides methods that you can call in order to set HTTP headers, report errors, and so on.
An InterSystems web application, which provides access to the REST service via the InterSystems Web Gateway. The web application is configured to enable REST access and to use the specific dispatch class. The web application also controls access to the REST service.
InterSystems follows a strict naming convention for these components. Given an application name (appname), the names of the specification, dispatch, and implementation class are appname.spec, appname.disp, and appname.impl, respectively. The web application is named /csp/appname by default but you can use a different name for that.
InterSystems supports the specification-first paradigm. You can generate initial code from the specification, and when the specification changes (for example, by acquiring new end points), you can regenerate that code. Later sections provide more details, but for now, note that you should never edit the dispatch class, but can modify the other classes. Also, when you recompile the specification class, the dispatch class is regenerated automatically and the implementation class is updated (preserving your edits).
Manually Coding REST Services
In releases before 2019.2, InterSystems IRIS did not support the specification-first paradigm. A REST service formally consisted only of a dispatch class and a web application. This book refers to this way to define REST services as manually-coded REST services. The distinction is that a REST service defined in the newer way includes a specification class, and a manually-coded REST service does not. The “Creating a REST Service Manually” appendix of this book describes how to create REST services using the manual coding paradigm. Similarly, some of the API management utilities enable you to work with manually-coded REST services.
Introduction to InterSystems API Management Tools
To help you create REST services more easily, InterSystems provides the following API management tools:
A REST service named /api/mgmnt, which you can use to discover REST services on the server, generate OpenAPI 2.0 specifications for these REST services, and create, update, or delete REST services on the server.
The ^%REST routine, which provides a simple command-line interface that you can use to list, create, and delete REST services.
The %REST.API class, which you can use to discover REST services on the server, generate OpenAPI 2.0 specifications for these REST services, and create, update, or delete REST services on the server.
You can set up logging for these tools, as described later in this chapter.
Helpful third-party tools include REST testing tools such as PostMan and the Swagger editor.
Overview of Creating REST Services
The recommended way to create REST services in InterSystems products is roughly as follows:
Obtain (or write) the OpenAPI 2.0 specification for the service.
Use the API management tools to generate the REST service classes and the associated web application. See these chapters:
“Using the /api/mgmnt Service to Create REST Services”
“Using the ^%REST Routine to Create REST Services”
“Using the %REST.API Class to Create REST Services”
Modify the implementation class so that the methods contain the suitable business logic. See the chapter “Modifying the Implementation Class.”
Optionally modify the specification class. See the chapter “Modifying the Specification Class.” For example, do this if you need to support CORS or use web sessions.
If security is required, see the chapter “Securing REST Services.”
Using the OpenAPI 2.0 specification for the service, generate documentation as described in the chapter “Discovering and Documenting REST APIs.”
For step 2, another option is to manually create the specification class (pasting the specification into it) and then compile that class; this process generates the dispatch and stub implementation class. That is, it is not strictly necessary to use either the /api/mgmnt service or the ^%REST routine. This book does not discuss this technique further.
A Closer Look at the REST Service Classes
This section provides a closer look at the specification, dispatch, and implementation classes.
Specification Class
The specification class is meant to define the contract to be followed by the REST service. This class extends %REST.Spec and contains an XData block that contains the OpenAPI 2.0 specification for the REST service. The following shows a partial example:
Class petstore.spec Extends %REST.Spec [ ProcedureBlock ]
{

XData OpenAPI [ MimeType = application/json ]
{
{
  "swagger":"2.0",
  "info":{
    "version":"1.0.0",
    "title":"Swagger Petstore",
    "description":"A sample API that uses a petstore as an example to demonstrate features in the swagger-2.0 specification",
    "termsOfService":"",
    "contact":{
      "name":"Swagger API Team"
    },
    "license":{
      "name":"MIT"
    }
  },
...
You can modify this class by replacing or editing the specification within the XData block. You can also add class parameters, properties, and methods as needed. Whenever you compile the specification class, the compiler regenerates the dispatch class and updates the implementation class (see “How InterSystems Updates the Implementation Class,” later in this book).
Dispatch Class
The dispatch class is directly called when the REST service is invoked. The following shows a partial example:
/// Dispatch class defined by RESTSpec in petstore.spec
Class petstore.disp Extends %CSP.REST [ GeneratedBy = petstore.spec.cls, ProcedureBlock ]
{

/// The class containing the RESTSpec which generated this class
Parameter SpecificationClass = "petstore.spec";

XData UrlMap [ XMLNamespace = "http://www.intersystems.com/urlmap" ]
{
<Routes>
<Route Url="/pets" Method="post" Call="addPet" />
<Route Url="/pets/:id" Method="get" Call="findPetById" />
<Route Url="/pets/:id" Method="delete" Call="deletePet" />
</Routes>
}

/// Override %CSP.REST AccessCheck method
ClassMethod AccessCheck(Output pAuthorized As %Boolean) As %Status
{
    ...
}
...
Notice that the SpecificationClass parameter indicates the name of the associated specification class. The URLMap XData block (the URL map) defines the calls within this REST service. It is not necessary for you to have a detailed understanding of this part of the class.
After these items, the class contains the definitions of the methods that are listed in the URL map. Here is one example:
ClassMethod deletePet(pid As %String) As %Status
{
    Try {
        If '##class(%REST.Impl).%CheckAccepts("application/json") Do ##class(%REST.Impl).%ReportRESTError(..#HTTP406NOTACCEPTABLE,$$$ERROR($$$RESTBadAccepts)) Quit
        If ($number(pid,"I")="") Do ##class(%REST.Impl).%ReportRESTError(..#HTTP400BADREQUEST,$$$ERROR($$$RESTInvalid,"id",id)) Quit
        Set response=##class(petstore.impl).deletePet(pid)
        Do ##class(petstore.impl).%WriteResponse(response)
    } Catch (ex) {
        Do ##class(%REST.Impl).%ReportRESTError(..#HTTP500INTERNALSERVERERROR,ex.AsStatus())
    }
    Quit $$$OK
}
Notice the following points:
This method invokes a method by the same name in the implementation class (petstore.impl in this example). It gets the response from that method and calls %WriteResponse() to write the response back to the caller. The %WriteResponse() method is an inherited method that is present in all implementation classes, which are all subclasses of %REST.Impl.
This method does other checking and in case of errors, invokes other methods of %REST.Impl.
Because the dispatch class is a generated class, you should never edit it. InterSystems provides mechanisms for overriding parts of the dispatch class without editing it.
Implementation Class
The implementation class is meant to hold the actual internal implementation of the REST service. You can (and should) edit this class. It initially looks like the following example:
/// A sample API that uses a petstore as an example to demonstrate features in the swagger-2.0 specification<br/>
/// Business logic class defined by RESTSpec in petstore.spec<br/>
Class petstore.impl Extends %REST.Impl [ ProcedureBlock ]
{

/// If ExposeServerExceptions is true, then details of internal errors will be exposed.
Parameter ExposeServerExceptions = 0;

/// Returns all pets from the system that the user has access to<br/>
/// The method arguments hold values for:<br/>
///     tags, tags to filter by<br/>
///     limit, maximum number of results to return<br/>
ClassMethod findPets(tags As %ListOfDataTypes(ELEMENTTYPE="%String"), limit As %Integer) As %Stream.Object
{
    //(Place business logic here)
    //Do ..%SetStatusCode(<HTTP_status_code>)
    //Do ..%SetHeader(<name>,<value>)
    //Quit (Place response here) ; response may be a string, stream or dynamic object
}
...
The rest of the implementation class contains additional stub methods that look similar to this one. In each case, these stub methods have signatures that obey the contract defined by the specification of the REST service. Note that for the options method, InterSystems does not generate a stub method for you to implement. Instead, the class %CSP.REST automatically performs all options processing.
Enabling Logging for API Management Features
To enable logging for the API management features, enter the following in the Terminal:
set $namespace="%SYS"
kill ^ISCLOG
set ^%ISCLOG=5
set ^%ISCLOG("Category","apimgmnt")=5
Then the system adds entries to the ^ISCLOG global for any calls to the API management endpoints. To disable this logging again, set ^%ISCLOG("Category","apimgmnt")=0.
JeraSoft Development is excited to announce the major release of VCS 3.15.0. This version includes a variety of features and system capabilities for VCS users. The full list of changes is provided below.
We'd like to remind you about a key change that took place in 3.14.0 - support of XML-RPC protocol in Management API was deprecated and it will be completely removed in 3.16.0. At the same time, JSON-RPC is and will be further supported.
A series of changes and improvements have been introduced to the section, including:
Screenshot: Error notification
By clicking on Download file, a .csv file with the following columns is downloaded:
Screenshot: .csv file with detailed error data
Attention
Starting from VCS 3.15.0, rate tables will no longer contain the Rate Formulas tab, where the user was able to specify the number of seconds the system would consider as a minute.
Service column has been added
Effective From column has been added
Time Profile column has been renamed to Profile
AZ Mode/ Close in (days) column has been moved to Step 2: Import Settings
In 3.15.0, Code Rules structure is as follows:
Screenshot: Code Rule information block
A couple of crucial improvements of the Rate Notification service have been added to this version.
From now on, if any rate table (child) in the system has the assigned parent rate table, clients will be notified through Rate Notification service about changes in both tables.
If child and parent rate tables both have the rule for the same code, priority is given to a child one. However, if the rule in a child rate table has expired due to End date field value, and a parent rule is still active, notifications will regard the latter one.
The following major features have been added to DID Management section:
Screenshot: Export DIDs button
Screenshot: Package column in exported file
Throughout the whole section, namely in the section table, on a DID creation page and during DIDs import/export, a Notes field has been added. It allows the user to leave a detailed clarification or any additional information regarding a certain DID.
When selecting rows and columns during DIDs import, now you can specify not only DID columns, but Operators, Status, After Hold, Tag columns as well. Check out our User Guide for more information.
Screenshot: Rows and Columns select
In order to make access to Client Portal easier and less complicated, the following steps have been undertaken:
Enable client’s panel checkbox in Settings section has been removed.
Access select field from Client profile settings has been deleted.
New Password field is now renamed to Password.
Now, to get into Client Portal, all you need to do is simply set Login and Password fields in Client’s Panel information block of respective profile settings and access the portal. More info on accessing Client Portal can be found in this User Guide article.
Screenshot: Package field on Client Portal
Screenshot: Limits History information block
Attention
In the previous version, a user could come across an issue where Factors Watcher would block all of a termination client's accounts instead of the termination client himself, according to the watcher rule. In 3.15.0, this issue has been solved.
VCS 3.15.0 introduces a new integration with ECSS-10 Softswitch via RADIUS. A detailed information on this topic can be found in our Integration Manual.
The following JeraSoft VCS directories have been renamed:
/opt/jerasoft/vcs-data/external/cdrs folder is now /opt/jerasoft/vcs-data/external/xdrs
/opt/jerasoft/vcs-data/external/cdrs_parsed folder is now /opt/jerasoft/vcs-data/external/xdrs_parsed
/opt/jerasoft/vcs-data/external/cdrs_corrupted folder is now /opt/jerasoft/vcs-data/external/xdrs_corrupted
Screenshot: Total row in Orig-Term Report
While specifying columns that will be included into xDR file, attached to PDF invoice in Invoice Template, you’ll be able to select Taxes column starting from this version.
Screenshot: Taxes column in Invoices Templates
Since it’s considered a good practice to regulate the process of adding new payments and charges, the Author column has been added to the system. Whenever a new transaction (irrespective of its type) is added manually by a user through Transactions section in JeraSoft VCS or Refill Balance page on JeraSoft Client Portal, his/her name is displayed in the respective column. In case of automatically generated transaction, however, this column is left empty.
Screenshot: Author column in Transactions section
Two additional fields have been added to the Advanced Search drop-down menu in the Routing Plan section:
Additional DR plan - only those routing plans that have the selected plan assigned, as an additional one, will be displayed
TERM Client - if the selected termination client is assigned to a plan rule, such DR plan will be displayed
Screenshot: Advanced Search drop-down menu in Routing Plan section
To make the process of report creation easier for users, it was decided to relocate Search fields from drop-down menu to a section header right on top of main area. Moreover, Query buttons in main area have been renamed to Query xDR.
Screenshot: Mismatches Report
The following changes have been made to the Advanced Search drop-down menu in the Clients and Accounts sections:
Clients section:
Account IP field has been renamed to Account Name / ANI / IP
Screenshot: Advanced Search drop-down menu in Clients section
Accounts section:
Name, ANI, and IP fields have been united into a single Name / ANI / IP field
Screenshot: Advanced Search drop-down menu in Accounts section
Now, reaching our support team has become even easier with a new Get Support option. By clicking on the Get Support button (see screenshot below) in the bottom-left corner of any page you will be forwarded to the page where you can send a trouble ticket by filling in the corresponding form.
EmR1905 Rule Text
Please see the EmR1905 info page for dates, other information, and more related documents.
Department of Public Instruction (PI)
Administrative Code Chapter Affected:
Ch. PI 40 (Revised)
Related to: The Early College Credit Program and Changes to PI 40 as a result of 2017 Wisconsin Act 59
Comment on this emergency rule
Related documents:
EmR1905 Fiscal Estimate | https://docs.legis.wisconsin.gov/code/register/2019/758a2/register/emr/emr1905_rule_text/emr1905_rule_text | 2021-06-12T21:06:58 | CC-MAIN-2021-25 | 1623487586390.4 | [] | docs.legis.wisconsin.gov |
ATtiny404

Please use ATtiny404 ID for the board option in “platformio.ini” (Project Configuration File):
[env:ATtiny404]
platform = atmelmegaavr
board = ATtiny404
You can override default ATtiny404 settings per build environment using
board_*** option, where
*** is a JSON object path from
board manifest ATtiny404.json. For example,
board_build.mcu,
board_build.f_cpu, etc.
[env:ATtiny404]
platform = atmelmegaavr
board = ATtiny404

; change microcontroller
board_build.mcu = attiny404

; change MCU frequency
board_build.f_cpu = 16000000L
streamstats
Description
Adds summary statistics to all search results in a streaming manner.
The
streamstats command is similar to the
eventstats command except that it uses events before a given event to compute the aggregate statistics applied to each event. If you want to include the given event in the stats calculations, use
current=true, which is the default.
The
streamstats command is also similar to the
stats command in that
streamstats calculates summary statistics on search results. Unlike
stats, which works on the results as a whole,
streamstats calculates statistics for each event at the time the event is seen.
Syntax
streamstats [current=<bool>] [window=<int>]
- current
- Syntax: current=<bool>
- Description: If true, tells the search to include the given, or current, event in the summary calculations. If false, tells the search to use the field value from the previous event.
- Default: true
- window
- Syntax: window=<int>
- Description: The window option specifies the number of events to use when computing the statistics.
- Default: 0, which means that all previous (plus current) events are used.
- global
- Syntax: global=<bool>
- Description: Defines whether the window is global or for each field in the by clause. If global=false and window is set to a non-zero value, a separate window is used for each group of values of the group by fields.
- Default: true
- Functions supported by the streamstats command. Each time you invoke the streamstats command, you can use one or more functions. However, you can only use one BY clause. See Usage.
- The following table lists the supported functions by type of function. For descriptions and examples, see Statistical and charting functions.
Usage.
Examples
Example 1.
Example 2
This example uses
streamstats to produce hourly cumulative totals for category values.
... | timechart span=1h sum(value) as total by category | streamstats global=f sum(total) as accu_total
The
timechart command buckets the events into spans of 1 hour and counts the total values for each category. The
timechart command also fills NULL values, so that there are no missing values. Then, the
streamstats command is used to calculate the accumulated total.
Example 3.
More examples
Example 1:
Compute the average value of foo for each value of bar including only 5 events, specified by the window size, with that value of bar.
... | streamstats avg(foo) by bar window=5 global=f
Example 2:
For each event, compute the average of field foo over the last 5 events, including the current event. Similar to doing trendline sma5(foo)
... | streamstats avg(foo) window=5
Example 3:
See also
accum, autoregress, delta, fillnull, eventstats, trendline
Answers
Have questions? Visit Splunk Answers and see what questions and answers the Splunk community has about the streamstats command.
dtype=object) ] | docs.splunk.com |
datalad.api.install¶
datalad.api.install(path=None, source=None, dataset=None, get_data=False, description=None, recursive=False, recursion_limit=None, save=True, reckless=False, jobs='auto')¶
Install a dataset from a (remote) source.
This command creates a local sibling of an existing dataset from a (remote) location identified via a URL or path. Optional recursion into potential subdatasets, and download of all referenced data is supported. The new dataset can be optionally registered in an existing superdataset by identifying it via the dataset argument (the new dataset’s path needs to be located within the superdataset for that).
When only partial dataset content shall be obtained, it is recommended to use this command without the get-data flag, followed by a get() operation to obtain the desired data.
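A minimal sketch of that pattern; the dataset URL and the file path are purely illustrative:

from datalad.api import install

# Install the dataset structure without downloading file content
ds = install(path='my-dataset', source='https://example.com/some-dataset.git')

# Later, fetch only the files that are actually needed
ds.get('data/file-of-interest.csv')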
Note
Power-user info: This command uses git clone, and git annex init to prepare the dataset. Registering to a superdataset is performed via a git submodule add operation in the discovered superdataset. | https://datalad.readthedocs.io/en/stable/generated/datalad.api.install.html | 2019-02-16T02:50:15 | CC-MAIN-2019-09 | 1550247479838.37 | [] | datalad.readthedocs.io |
This topic provides information on BMC Event Adapters requirements on various platforms.
BMC Event Adapter requirements for Microsoft Windows
BMC Event Adapter requirements for Solaris
BMC Event Adapter requirements for Linux
BMC Event Adapter requirements for AIX and HP-UX
Perl requirements for BMC Event Adapters:
1 Comment
Anuparn Padalia
Please share all Compatible versions of Active Perl we can use. It should not be restricted to version 5.14. Kindly refer below link for details around versions available for Perl to download - | https://docs.bmc.com/docs/display/public/TSOMD107/BMC+Event+Adapters+requirements | 2019-02-16T04:19:29 | CC-MAIN-2019-09 | 1550247479838.37 | [] | docs.bmc.com |
11 GIF File Writing
The file/gif library provides functions for writing GIF files to a stream, including GIF files with multiple images and controls (such as animated GIFs).
A GIF stream is created by gif-start, and then individual images are written with gif-add-image. Optionally, gif-add-control inserts instructions for rendering the images. The gif-end function ends the GIF stream.
A GIF stream can be in any one of the following states:
'init : no images or controls have been added to the stream
'image-or-control : another image or control can be written
'image : another image can be written (but not a control, since a control was written)
'done : nothing more can be added
The width and height determine a virtual space for the overall GIF image. Individual images added to the GIF stream must fit within this virtual space. The space is initialized by the given background color.
Finally, the default meaning of color numbers (such as the background color) is determined by the given colormap, but individual images within the GIF file can have their own colormaps.
A global colormap need not be supplied, in which case a colormap must be supplied for each image. Beware that bg-color is ill-defined if a global colormap is not provided.
If interlaced? is true, then bstr should provide bytes in interlaced order instead of top-to-bottom order. Interlaced order is:
every 8th row, starting with 0
every 8th row, starting with 4
every 4th row, starting with 2
every 2nd row, starting with 1
If a global color is provided with gif-start, a #f value can be provided for cmap.
The bstr argument specifies the pixel content of the image. Each byte specifies a color (i.e., an index in the colormap). Each row is provided left-to-right, and the rows provided either top-to-bottom or in interlaced order (see above). If the image is prefixed with a control that specifies an transparent index (see gif-add-control), then the corresponding “color” doesn’t draw into the overall GIF image.
An exception is raised if any byte value in bstr is larger than the colormap’s length, if the bstr length is not width times height, or if the top, left, width, and height dimensions specify a region beyond the overall GIF image’s virtual space.
The GIF image model involves processing images one by one, placing each image into the specified position within the overall image’s virtual space. An image-control command can specify a delay before an image is added (to create animated GIFs), and it also specifies how the image should be kept or removed from the overall image before proceeding to the next one (also for GIF animation).
The disposal argument specifies how to proceed:
'any : doesn’t matter (perhaps because the next image completely overwrites the current one)
'keep : leave the image in place
'restore-bg : replace the image with the background color
'restore-prev : restore the overall image content to the content before the image is added
If wait-for-input? is true, then the display program may wait for some cue from the user (perhaps a mouse click) before adding the image.
The delay argument specifies a delay in 1/100s of a second.
If the transparent argument is a color, then it determines an index that is used to represent transparent pixels in the follow image (as opposed to the color specified by the colormap for the index).
An exception is raised if a control is already added to stream without a corresponding image.
An exception is raised if some control or image has been added to the stream already.
An exception is raised if an image-control command was just written to the stream (so that an image is required next).
An exception is raised if an image-control command was just written to the stream (so that an image is required next).
Given a set of pixels expressed in ARGB format (i.e., each four bytes is a set of values for one pixel: alpha, red, green, and blue), quantize produces
bytes for the image (i.e., an array of colors, expressed as a byte string)
a colormap
either #f or a color index for the transparent “color”
The conversion treats alpha values less than 128 as transparent pixels, and other alpha values as solid.
The quantization process uses Octrees [Gervautz1990] to construct an adaptive palette for all (non-transparent) colors in the image. This implementation is based on an article by Dean Clark [Clark1996].
To convert a collection of images all with the same quantization, simply append them for the input of a single call of quantize, and then break apart the result bytes. | https://docs.racket-lang.org/file/gif.html | 2019-02-16T03:00:12 | CC-MAIN-2019-09 | 1550247479838.37 | [] | docs.racket-lang.org |
Portaudio: Bindings for the Portaudio portable sound library
1 Using Windows, Choosing Host APIs
Using Portaudio on Windows raises a few extra challenges. In particular, Windows machines generally support a number of different "Host API"s that Portaudio can use to interact with the machine. In addition, these Host APIs may also target multiple different devices.
The default Host API for windows is MME. My observations suggest that this API is limited; it can open only a small number of simultaneous streams, and the latency for playing sounds is extremely high.
The WASAPI API (if that’s not redundant) has its own issues; in particular, it seems to be necessary to manually set the playback device to the right sample rate (often 44100Hz) before starting DrRacket. Failing to do so simply results in an "invalid device" error from Portaudio.
To address these issues, Portaudio includes a number of functions used to control the selection of the host API:
3 Playing Streams
Note that the buffer length may be longer than the specified length, if the provided length is too short for the chosen device.
The function returns a list containing three functions: one that queries the stream for a time in seconds, one that returns statistics about the stream, and a third that stops the stream. A long buffer of 0.2 seconds (= 200 milliseconds) means that most GC pauses won't interrupt it.
However, a latency of 200ms is pretty terrible for an interactive system. I usually use 50ms.
4 Recording Sounds
This library also provides a high-level interface for recording sounds of a fixed length.
5 A Note on Memory, Synchronization, and Concurrency
Note: the following is not organized to the high standards of a technical paper. The Management would like to apologize in advance, and humbly requests your forgiveness.
Interacting with sound libraries is tricky. The basic framework for this library is what’s called a "pull" architecture; the OS makes a call to a callback every 5-50ms[*], asking for new data to be shoveled into a given buffer. This callback runs on a separate OS thread, which means that Racket must somehow synchronize with this thread to provide data when needed.
One difficulty here is that Racket is garbage-collected, with GC pauses that typically run from 50ms to 100ms. This means that when a program is generating garbage, there are simply bound to be hiccoughs in a stream-based program. In general, these don’t seem to be too awful, and it’s often possible to write programs that generate very little garbage.
After trying several architectures, the model that seems to work the best is a shared-memory design, where the callback is written entirely in C, and takes its data from a buffer shared with Racket. If Racket has written the data into the buffer, then this routine copies it into the OS’s buffer. If not, then it just zeros out the buffer to play silence.
5.1 Copying Vs. Streaming
This package supports two different play interfaces: a "copying" interface and a "streaming" interface.
The copying interface is simple: Racket stuffs an entire sound into a buffer, then opens a new stream, providing a callback that pulls samples out of the buffer until it’s done. This means that the sound is not affected by GC pauses or Racket’s speed. On the other hand, it means duplicating the entire sound (expensive, for large sounds), and it requires a platform that can support multiple streams simultaneously. (OS X, yes. Windows, usually no.) Also, it tends to have higher startup latency (especially on windows), because there’s time required to start a new stream. Finally, it requires pre-rendering of the entire sound, meaning that interactivity is out.
The streaming interface solves these problems, but exposes more of the grotty stuff to the programmer. Rather than providing sound data, the user provides a racket callback that can generate sound data on demand. If the given callback can’t keep up with the demand, the stream starts to hiccough.
More specifically, this package uses a ring buffer, whose length can be specified independently of the underlying machine latency. The Portaudio engine calls the user’s racket callback quite frequently–on the order of every 1-5ms–to top up this ring buffer. When GC pauses occur, the C callback will drink up everything left in the ring buffer, and then just play silence.
Choosing the length of this ring buffer is therefore difficult: too short, and you’ll hear frequent hiccoughs as the C callback runs out of data. Too long, and you get high-latency, sluggish response. Times on the order of 50ms seem to be an acceptable compromise.
5.2 Memory
Shared memory management is a big pain. Racket is garbage-collected, but it’s interacting with an audio library that is not. It’s nearly impossible to avoid all possible race conditions related to the free-ing of memory.
The first and largest issue is the block of memory shared between the Racket engine and the C callback. The current setup is that the memory is freed by a close-stream callback associated with the stream on the Portaudio side. The sequence is therefore this: Racket calls CloseStream. Portaudio then stops calling the callback, and closes the stream. Then, it calls the provided "all-done" callback, which frees the memory. One note here is that Racket should probably wrap the pointer in a mutable object so that it can be severed on the Racket side when the stream is closed. Actually, that’s true of the stream, as well.
[*] Different platforms are different; currently, this package insists on a latency of at most 50ms, or it just refuses to run. It appears that all modern platform can provide this, though it’s sometimes a bit tricky to decide which output device to use. | https://docs.racket-lang.org/portaudio/ | 2019-02-16T03:42:06 | CC-MAIN-2019-09 | 1550247479838.37 | [] | docs.racket-lang.org |
Naming Conventions¶
The following naming conventions are used by all generator documentation. They help you to comprehend the documentation more easily. This is important in order not to mix the different levels of abstraction.
Generators¶
Wherever the name of a code generator is mentioned, that name is shown with an italic font and a grey background color, like this:
JSF Web Fragment
JPA
Model Elements¶
The Virtual Developer Modeler employs textual modeling languages. In the documentation, a model element is typically mentioned as a keyword along with a name [name]. Here is an example of a model element: type [name]. In continuous text, keywords and names are highlighted as in the text above.
File and Directory Names¶
File and directory names are shown with a bold font. Those names typically contain names of model elements. The portion of a file name that comes from a model element name is shown in square brackets and with a yellow background like this:
Model: type [name]
File Name Rule: [Name]EventHandler.java
Normally, the name of a model element is not taken as is but the first character of a model element name is either made to be upper case or lower case. The case of the first character of the [Name] part tells you, which rule is being applied.
Example: type myType with the file name rule [Name]EventHandler.java results in the following file name:
MyTypeEventHandler.java
If generation logic for more than one model element with different keywords has to be explained, the keyword is prepended to the ‘Name’ word like this:
Model:
kw1 [kw1-name]
kw2 [kw2-name]
File Name: [kw1-Name][kw2-Name]EventHandler.java
Code and Configuration¶
Text that is found in code or configuration files is highlighted like this:
MyType
<init-param>paramName</init-param>
Extracts from code or configuration files that span more than one line are shown like this:
public class MyType { private String name; ... }
In contrast to the display rules for file and directory names, a model element name is not highlighted with a yellow background color in order not to decrease the readability of the code/configuration. | https://docs.virtual-developer.com/technical-documentation/gapp-generators/naming-conventions/index.html | 2019-02-16T03:15:38 | CC-MAIN-2019-09 | 1550247479838.37 | [] | docs.virtual-developer.com |
Bulk Scheduling of Multiple Items¶
Using the Scheduled Events widget, you can schedule multiple drafts or revisions for simultaneous publication with a bulk schedule. This feature allows you to implement broad changes to the content of your site for a specific event, like a holiday, major news story, or product launch.
A bulk schedule includes a trigger publication date. When a bulk schedule is launched, the Dashboard operates in event-scheduling mode. Any revision that you publish while the Dashboard is in this mode becomes a scheduled event associated with the bulk schedule.
Creating a Bulk Schedule¶
In the Dashboard’s Scheduled Events widget, click New.
Set the New Schedule options.
The Trigger Date is the date when all events associated with this schedule are published.
Saving the new schedule immediately puts the Dashboard into event-scheduling mode, as indicated by the message bar at the top of the Dashboard.
Scheduling Publish Events¶
If the Dashboard is not running in event-scheduling mode, click View All in the Scheduled Events widget.
You are prompted to select a schedule.
Click the bulk schedule that you want to start.
This action puts the Dashboard into event-scheduling mode.
Create a new item or edit an existing item.
Add or modify content and click the Publish button in the Editorial Toolbar.
A scheduled version is added to the Revisions Widget.
Return to the Dashboard.
The new event appears in the Scheduled Events widget.
Continue publishing content as described in the previous steps.
The Scheduled Events widget reflects all of the events scheduled for publication for the bulk schedule.
To stop publishing with the bulk schedule, click Stop Scheduling in the message bar at the top of the Dashboard.
Editing or Deleting Bulk Schedules¶
If the Dashboard is not already running in event-scheduling mode, click View All in the Scheduled Events widget, and start a bulk schedule. This action puts the Dashboard into event-scheduling mode.
In the schedule message at the top of the Dashboard, click the schedule you want to delete. The Edit Schedule widget appears.
Do one of the following:
- To edit the schedule, modify the fields and click Save.
- To delete the schedule, click Delete Permanently. | http://docs.brightspot.com/cms/editorial-guide/publishing-process/schedule/bulk-schedule.html | 2019-02-16T02:51:45 | CC-MAIN-2019-09 | 1550247479838.37 | [] | docs.brightspot.com |
Contents Service Management Previous Topic Next Topic Roles installed with SM Planned Maintenance Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Roles installed with SM Planned Maintenance SM Planned Maintenance adds the following roles. Role title [name] Description plan_maint_admin Administrator for planned maintenance. On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/helsinki-service-management-for-the-enterprise/page/product/service-management-planned-maintenance/reference/r_RolesInstallWServMgmtPlanMaint.html | 2019-02-16T04:03:25 | CC-MAIN-2019-09 | 1550247479838.37 | [] | docs.servicenow.com |
Contents Now Platform Capabilities Previous Topic Next Topic Manage Certification template versions Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Manage Certification template versions You can view and manage all versions of a template from the Template form. Open any version of a template. The Other Versions related list displays all other versions of this template, both active and inactive. Click any version in the related list to display the record for that version. Update the template to create a new version. The system creates a new version of the template when you edit any field except Description and Active. You can manage the template versions without returning to the list view. Figure 1. Other Templates To make an inactive template the current version, open that version, edit it if desired, and then click Revert. Figure 2. Revert Template This action does the following: Deactivates the previously active version of the template. Copies the inactive template. Makes the new copy the current, active version. Select the Audits related list to view all audits configured to use this template. Figure 3. Templates Audits Related List Click New to create a new audit record with the template selection and table pre-populated. Related TasksCreate or edit a certification templateClone a Certification templateDelete a Certification templateClone a Certification templateCreate or edit a certification templateDelete a Certification templateRelated ConceptsCertification templatesRelated ReferenceCertification template audit typesCertification Template Record ListCertification template audit typesCertification Template Record List On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/helsinki-servicenow-platform/page/product/compliance/task/t_CertificationTemplateManagement.html | 2019-02-16T04:10:00 | CC-MAIN-2019-09 | 1550247479838.37 | [] | docs.servicenow.com |
You can create and use vRealize Automation blueprints to provision machines as registered Docker Container hosts.
For a provisioned machine to be registered as a container host, it must meet the following requirements:
The machine is provisioned by a blueprint that contains Containers-specific custom properties.
The required container-specific custom properties are supplied in two property groups. See Using Container Properties and Property Groups in a Blueprint.
For information about using custom properties and property groups in vRealize Automation, see Custom Properties and the Property Dictionary.
The machine is accessible over the network.
For example, the machine must have a valid IP address and be powered on.
You can define a vRealize Automation blueprint to contain specific custom properties that designate a machine as a container host when provisioned using the blueprint.
When a machine with the required blueprint properties is successfully provisioned, it is registered in the Containers and receives events and actions from vRealize Automation. | https://docs.vmware.com/en/vRealize-Automation/7.4/com.vmware.vra.prepare.use.doc/GUID-D0F7440D-2E65-40B9-8095-655BBE8D9B3A.html | 2019-02-16T03:13:35 | CC-MAIN-2019-09 | 1550247479838.37 | [] | docs.vmware.com |
Project Structure
└── /<root> ├── All.less ├── All.js └── styleguide ├── chefs | ├── Chefs.hbs | ├── Chefs.json | └── Chefs.less └── recipes ├── Recipes.hbs ├── Recipes.json ├── Recipes.less └── breakfast ├── EggsBenedict.hbs ├── EggsBenedict.json └── EggsBenedict.less
Styleguide’s UI lists your templates based on the project structure; for details, see Template Listing.
See also: | http://docs.brightspot.com/styleguide/intro/project-structure.html | 2019-02-16T04:01:09 | CC-MAIN-2019-09 | 1550247479838.37 | [] | docs.brightspot.com |
Contents Now Platform Capabilities Previous Topic Next Topic Join a team Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Join. On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/helsinki-servicenow-platform/page/use/live-feed/task/t_JoinATeam.html | 2019-02-16T04:07:04 | CC-MAIN-2019-09 | 1550247479838.37 | [] | docs.servicenow.com |
You are here: Start » Technical Issues » Troubleshooting
Troubleshooting
This article describes the most common problems that might appear when building and executing programs that use Adaptive Vision Library.
Problems with Building
error LNK2019: unresolved external symbol _LoadImageA referenced in function
error C2039: 'LoadImageA' : is not a member of 'avl'
The problem is related to including the "windows.h" file. It defines a macro called LoadImage, which has the same name as one of the functions of Adaptive Vision Library. Solution:
- Don't include both "windows.h" and "AVL.h" in a single compilation unit (cpp file).
- Use
#undef LoadImageafter including "windows.h".
error LNK1123: failure during conversion to COFF: file invalid or corrupt
If you encounter this problem, just disable the incremental linking (properties of the project | Configuration Properties | Linker | General | Enable Incremental Linking, set to No (/INCREMENTAL:NO)). This is a known issue of VS2010 and more information can be found on the Internet. Installing VS2010 Service Pack 1 is an alternative solution.
Exceptions Thrown in Run Time
Exception from the avl namespace is thrown
Adaptive Vision Library uses exceptions to report errors in the run-time. All the exceptions are defined in avl namespace and derive from avl::Error. To solve the problem, add a try/catch statement and catch all avl::Error exceptions (or only selected derived type). Every avl::Error object has the Message() method which should provide you more detailed information about the problem. Remember that a good programming practice is catching C++ exceptions by a const reference.
try { // your code here } catch (const atl::Error& er) { cout << er.Message(); }
High CPU Usage When Running AVL Based Image Processing
When working with some AVL image processing functions it is possible that the reported CPU usage can reach 50~100% across all CPU cores even in situations when the actual workload does not justify that hight CPU utilization. This behavior is a side effect of a parallel processing back-end worker threads actively waiting for the next task. Although the CPU utilization is reported to be high those worker threads will not prevent other task to be executed when needed, so this behavior should not be a problem in most situations.
For situations when it is not desired this behavior can be changed (e.g. when profiling the application, performance testing or in any situation, when high CPU usage interfere with other system). To block the worker threads from idling for extended period of time the environment variable OMP_WAIT_POLICY must be set to the value PASSIVE, before the application is started:
set OMP_WAIT_POLICY=PASSIVE
This variable is checked when the DLLs are loaded, so setting it from the application code might not be effective. | https://docs.adaptive-vision.com/current/avl/technical_issues/Troubleshooting.html | 2019-02-16T03:31:02 | CC-MAIN-2019-09 | 1550247479838.37 | [] | docs.adaptive-vision.com |
Contents Now Platform User Interface Previous Topic Next Topic Keyboard shortcuts Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Keyboard shortcuts You can use keyboard shortcuts to quickly perform common actions in the user interface. Access keys depend on the browser and operating system you are using. Available keyboard shortcuts are based on the UI version. Table 1. UI16 keyboard shortcuts Action Windows keyboard shortcut Mac keyboard shortcut Activate global search field Access key + Ctrl + G Access key + G Toggle application navigator Access key + Ctrl + C Access key + C Activate navigation filter field Access key + Ctrl + F Access key + F Impersonate user Access key + Ctrl + I Access key + I Table 2. UI15 and UI11 keyboard shortcuts Action Keyboard shortcut Notes Activate global search field Access key + S Toggle banner frame Access key + B Ensure that the page focus is on the content frame Toggle application navigator Access key + N Activate navigation filter field Access key + F Toggle list and form view Access key + V Toggle horizontal split Access key + H Maximize the current pane Access key + M Send email Access key + S Submit form Enter On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/helsinki-platform-user-interface/page/use/navigation/reference/r_KeyboardShortcuts.html | 2019-02-16T04:05:10 | CC-MAIN-2019-09 | 1550247479838.37 | [] | docs.servicenow.com |
Contents Now Platform Capabilities Previous Topic Next Topic Edge Encryption logging Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Edge Encryption logging Edge Encryption logs information on the instance and on each proxy server. You can view a log file listing all cases of when an attempt was made to save unencrypted data to an encrypted field. Navigate to Edge Encryption Configuration > Diagnostics And Troubleshooting > Invalid Insert Attempts. An administrator can trace back to why data was not saved. There are encryption log files on each proxy server. On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/istanbul-servicenow-platform/page/administer/edge-encryption/concept/c_logging.html | 2019-02-16T04:14:23 | CC-MAIN-2019-09 | 1550247479838.37 | [] | docs.servicenow.com |
#include <wx/clntdata.h>
All classes deriving from wxEvtHandler (such as all controls and wxApp) can hold arbitrary data which is here referred to as "client data".
This is useful e.g. for scripting languages which need to handle shadow objects for most of wxWidgets' classes and which store a handle to such a shadow class as client data in that class. This data can either be of type void - in which case the data container does not take care of freeing the data again or it is of type wxClientData or its derivatives. In that case the container (e.g. a control) will free the memory itself later. Note that you must not assign both void data and data derived from the wxClientData class to a container.
Some controls can hold various items and these controls can additionally hold client data for each item. This is the case for wxChoice, wxComboBox and wxListBox. wxTreeCtrl has a specialized class wxTreeItemData for each item in the tree.
If you want to add client data to your own classes, you may use the mix-in class wxClientDataContainer.
Constructor.
Virtual destructor. | https://docs.wxwidgets.org/3.1.0/classwx_client_data.html | 2019-02-16T03:03:22 | CC-MAIN-2019-09 | 1550247479838.37 | [] | docs.wxwidgets.org |
Date
The date field allows you to select a date via a friendly UI. This field uses the jQuery UI datepicker library to select a date.
Screenshots
Settings
Besides the common settings, this field has the following specific settings, the keys are for use with code:
Timezone and timestampt
This field gets the current time of your local computer and converts it to the timestamp value. So the Unix timestamp saved is the "timezone (UTC) + offset".
This is a sample field settings array when creating this field with code:
[
'name' => 'Date picker',
'id' => 'field_id',
'type' => 'date',
'js_options' => [
'dateFormat' => 'yy-mm-dd',
'showButtonPanel' => false,
],
'inline' => false,
'timestamp' => false,
],
Data
If the
timestamp is set to
true, the field value is converted to Unix timestamp and saved to the database. Otherwise, the user input value is saved.
Saving dates in another format
Meta Box already supports customizing the date format displaying to users via
js_options. For example, you can set it to
dd-mm-yy.
However, you might want to save the date in another format, like
Y-m-d, which allows you to sort or query posts by date. To do that, set the value of "Save format" to "Y-m-d".
If you use code, then the field settings will look like this:
[
'js_options' => [
'dateFormat' => 'dd-mm-yy',
],
'save_format' => 'Y-m-d',
],
So when displaying to users, the date will have the format of
30-01-2019, and when saving to the database, it will have the format of
2019-01-30.
Template usage
Displaying the value:
<p>Entered: <?php rwmb_the_value( 'my_field_id' ) ?></p>
Getting the value:
<?php $value = rwmb_meta( 'my_field_id' ) ?>
<p>Entered: <?= $value ?></p>
Converting timestamp to another format:
If you save the field value as a timestamp, then you can convert the value to different format, like this:
<?php $value = rwmb_meta( 'my_field_id' ) ?>
<p>Event date: <?= date( 'F j, Y', $value ) ?></p>
Or simpler:
<p>Event date: <?php rwmb_the_value( 'my_field_id', ['format' => 'F j, Y'] ) ?></p>
info
The 2nd parameter of rwmb_the_value() accepts and extra parameter "format" which specify the datetime format to output in the frontend.
Querying posts by date:
Saving values in timestamp allows you to query posts with a specific order by this field:
$query = new WP_Query( [
'post_type' => 'event',
'orderby' => 'meta_value_num',
'meta_key' => 'my_field_id',
'order' => 'ASC',
] );
However, you still can sort posts by meta value if you set date format to something similar to
yy-mm-dd. Anyway, querying posts by custom fields is not recommended. | https://docs.metabox.io/fields/date/ | 2022-05-16T11:15:33 | CC-MAIN-2022-21 | 1652662510117.12 | [] | docs.metabox.io |
Data tables plot mode and a tabular view of the underlying numerical table data.
The save button lets you export the diagram to a PDF or PNG graphics file, or, if the tabular view is currently active, the numerical values to a text file.
The plot area supports mouse navigation:
Left click: show plot coordinates at cursor position
Left button + drag: zoom in to rectangular region
Left button + Shift + drag: panning
Mouse wheel: zoom in/out
Right click: reset view | https://docs.ovito.org/reference/data_inspector/data_tables.html | 2022-05-16T12:53:46 | CC-MAIN-2022-21 | 1652662510117.12 | [] | docs.ovito.org |
[−][src]Crate tiny_multihash
Multihash implementation.
Feature Flags
Multihash has lots of feature flags, by default, all features (except for
test) are
enabled.
Some of them are about specific hash functions, these are:
blake2b: Enable Blake2b hashers
blake2s: Enable Blake2s hashers
sha1: Enable SHA-1 hashers
sha2: Enable SHA-2 hashers
sha3: Enable SHA-3 hashers
strobe: Enable Strobe hashers
In order to enable all hashers, you can set the
all feature flag.
The library has support for
no_std, if you disable the
std feature flag.
The
multihash-impl feature flag enables a default Multihash implementation that contains all
bundled hashers (which may be disabled via the feature flags mentioned above). If only want a
specific subset of hash algorithms or add one which isn't supporte by default, you will likely
disable that feature and enable
derive in order to be able to use the
Multihash derive.
The
test feature flag enables property based testing features. | https://docs.rs/tiny-multihash/latest/tiny_multihash/ | 2022-05-16T13:16:32 | CC-MAIN-2022-21 | 1652662510117.12 | [] | docs.rs |
Introduction
UP42 currently provides numerous images captured by aircraft sensors and spaceborne platforms. Depending on the sensor type, UP42 distributes geospatial datasets with different spatial, spectral and temporal resolutions. Before ordering, you can check the list of available geospatial datasets and get familiar with the technical specifications. After discovering the datasets, you can select a specific dataset that suits your needs and perform a search over your area of interest.
Geospatial Datasets
Data Discovery
This section explains how to discover geospatial datasets provided by the UP42 platform. Users can leverage the power of the API that unifies search and access to geospatial datasets from multiple data providers.
To proceed, please go to Data Discovery (API).
Data Search
This section explains how to search for geospatial datasets available for your area of interest.
Perform a data search based on the AOI coordinates (latitude, longitude), acquisition interval and number of images to be retrieved. To proceed, please go to Data Search (API).
Download Quicklooks
This section explains how to download the quicklooks of the images from the data search results.
Downloading quicklooks is a free service that allows you to visually examine the low-resolution previews before purchasing the geospatial datasets. To proceed, please go to Download Quicklooks (API). | https://docs.up42.com/data/discover-data | 2022-05-16T12:27:27 | CC-MAIN-2022-21 | 1652662510117.12 | [] | docs.up42.com |
ramp~
Description
The ramp~ object generates a single signal ramp between a start and end value when it detects a change in an audio signal connected to its left inlet. The duration of the ramp can be a fixed value or based on the signal in ramp~ 's second inlet.
Arguments.
duration [float]
Sets the duration of the ramp in milliseconds. The ramp object uses this value only when a signal is not connected to the second inlet.
end [float]
Set the final value of the ramp. )
mode [int]
Sets whether the duration of the ramp is recomputed if the value of the signal connected to the second (duration) inlet changes.
Possible values:
0 = 'Set Duration at Start' ( Duration is set at the start of the ramp and remains constant )
1 = 'Update Duration Continuously' ( Duration can change if the signal value connected to the right inlet changes. )
The total duration of the ramp is recomputed to take into account the elapsed time. For example, if the original ramp was 1000 ms, and the duration signal changes to 1500 ms after the ramp has been going for 400 ms, the ramp will continue for another 1100 ms. In this case the slope of the ramp will be reduced so it takes the extra time to reach the end value.
reset [int]
Sets whether the value of the ramp resets to zero after it completes (reset equal to 1) or remains at the end value (reset equal to 0).
retrigger [int]
When retrigger is set to 0 (off), it is not possible to retrigger a ramp while a current ramp is in progress. When set to 1 (on), you may retrigger a ramp during the current ramp. The default value is 0 (off).
start [float]
Set the initial value of the ramp.
In third inlet: Sets the ramp start value
In right inlet: Sets the ramp end value
signal
A signal connected to the second inlet sets the ramp duration
A signal connected the third inlet sets the ramp start value
A signal connected the right inlet sets the ramp end value | https://docs.cycling74.com/max8/refpages/ramp~ | 2022-05-16T12:26:46 | CC-MAIN-2022-21 | 1652662510117.12 | [] | docs.cycling74.com |
Call TracingCall Tracing
Magma supports basic traffic capture for troubleshooting purposes. Monitoring of control messaging flow and other traffic between the Magma access gateway and eNodeB devices is possible with this feature.
As of the time of writing, we are in the process of adding filtering for specific protocols and allowing custom options through tshark.
Currently there is a 5MiB size limit for call traces.
RequirementsRequirements
Ensure you have the following:
- a functional orc8r
- a configured LTE network
- a configured LTE gateway with eNodeB
- subscribers that can attach to your LTE gateway
- network running in un-federated mode
Initiating a Call TraceInitiating a Call Trace
The call tracing page can be accessed directly via the NMS sidebar.
Click
Start New Trace
trace_id must be a unique value among all the call traces on your network.
You must specify a
gateway_id to capture your call trace on. Call traces
capturing across multiple gateways simultaneously are not supported at this
time. If you wish to do so, start multiple call traces, one for each gateway.
Once the call trace is started, you can stop and/or download the call trace
on the same page. Under
Actions, click the vertical ellipsis button for your
call trace to see these options.
Viewing a Call TraceViewing a Call Trace
It is suggested that Wireshark is used to analyze call trace captures.
Additional ConfigurationAdditional Configuration
To do additional configuration for call traces, modify the
ctraced.yml file
on the access gateway.
This allows you to configure the following options:
- network interfaces to capture traffic on
- where trace files are stored on the access gateway
- tools used for traffic capture:
tsharkor
tcpdump
To specify the network interfaces to capture traffic on, add one line per
network interface for the property
trace_interfaces in the
ctraced.yml file
on the access gateway. Note that this modification must be done on each gateway
for which you wish to make this configuration.
To specify where trace files are stored on the access gateway, again modify
the
ctraced.yml file, with the
trace_directory property. It is recommended
that call trace files are not viewed directly from the access gateway, but
that they are downloaded through the NMS or API, and viewed through wireshark.
The last configuration option of note in
ctraced.yml is the
trace_tool
option. Two options are currently provided, for
tshark or
tcpdump. Most
call trace options are only available if
tshark is used.
API GuideAPI Guide
To view more detailed information on the API, see
magma/orc8r/cloud/go/services/ctraced/obsidian/models/swagger.v1.yml
It is recommended that call tracing is started, stopped, and downloaded through the API, or use of the NMS which uses the API. We provide the following endpoints:
Get all call tracesGet all call traces
GET /networks/{network_id}/tracing
Get info on a specific call traceGet info on a specific call trace
GET /networks/{network_id}/tracing/{trace_id}
Example response payload:
{ "config": { "gateway_id": "lte_gateway_1", "timeout": 300, "trace_id": "example_call_trace", "trace_type": "GATEWAY" }, "state": { "call_trace_available": false, "call_trace_ending": false } }
Start a new call traceStart a new call trace
POST /networks/{network_id}/tracing/{trace_id}
To start a new call trace, the payload must include the configuration of the call trace.
Example payload:
{ "gateway_id": "lte_gateway_1", "timeout": 300, "trace_id": "example_call_trace", "trace_type": "GATEWAY" }
There are four fields for a call trace configuration:
trace_id: The unique ID of the call trace.
trace_type: Currently only supported type is
GATEWAY, but we will provide
support for interface and subscriber filtered call traces.
gateway_id: The gateway ID of the access gateway on which to capture the call
trace.
timeout: The time in seconds after which the call trace will automatically
stop.
Stop a call traceStop a call trace
PUT /networks/{network_id}/tracing/{trace_id}
This API endpoint is used for updating call traces, but currently the only configurable option allows user control of ending a call trace.
An example request payload to stop a call trace:
{ "requested_end": true }
No other fields can currently be specified for updating a call trace, so this endpoint is only used for stopping call traces.
Delete a call traceDelete a call trace
DELETE /networks/{network_id}/tracing/{trace_id}
Download a call traceDownload a call trace
GET /networks/{network_id}/tracing/{trace_id}/download
Basic TroubleshootingBasic Troubleshooting
If you cannot get call tracing to work with the NMS, the API can be used directly.
Logs are available for the
ctraced service on both the orc8r and access
gateway, to view any error logs.
Additional design details are available in
p005_call_tracing.md. | https://docs.magmacore.org/docs/howtos/call_tracing | 2022-05-16T11:12:31 | CC-MAIN-2022-21 | 1652662510117.12 | [] | docs.magmacore.org |
How to get your own Google Translate API key
To enable Auto translate for Inbox, you need to generate and connect your Google Translate API key to your account.
#Overview
- Create a New project in your Google Cloud account.
- Activate Google Translation API service
- Generate your API Key
- Restrict the API usage (optional)
#Step-by-step guide:
Go to Google Cloud Platform console and Login in with your Google account.
Create a
New Projectfrom the top menu bar. Give it a name and select
Create. You’ll reach your project dashboard.
Now click on the left hamburger button and go to
APIs & Services.
Enable APIS and SERVICESin the Dashboard.
Now search for
translateand click on the `Google Cloud Translation API1 result.
Click
Enable. This activates your Google Translation API service. Here after you’ve enabled API, you might be asked to enter the billing details if you haven’t done it already. You need a paid account to use Google’s Translation services.
Now to generate your API key, select the
create credentialsbutton from the screen. If you can’t find the button, Go to the
credentialsoption from your side menu bar.
Now click on the
Create credentialsdrop-down button and select the
API key.
Your API key would be displayed in a pop-up window. You can copy and paste this into your Inbox account.
You can also restrict your API key to prevent unauthorized or overuse of your account. There are two types of restriction - you can either restrict the API on where it is being used or you can restrict its usage quota. | https://docs.yellow.ai/docs/platform_concepts/inbox/google-translate-api-inbox/ | 2022-05-16T12:49:19 | CC-MAIN-2022-21 | 1652662510117.12 | [] | docs.yellow.ai |
Blogging with Sandvox
Introduction
Blogging with Sandvox is very simple. create, a collection of pages, mainly Text.
To create a new blog:
- Select the Home Page in the Site Outline.
Click the "Collections" item in the toolbar and choose "Weblog."
- Open the Page Inspector and give the blog a suitable name by setting the "Page Title."
Congratulations, you now have a new blog! You should end up with something in the Site Outline like this:
You will probably also want the blog to appear in the Site Menu. If it's not already there:
To add the Blog to the Site Menu:
- Select the blog in the Site Outline.
- Open the Page Inspector.
- Check the "Include in site menu."
Adding Content to the Blog
Now your blog is all set up, it's time to add some content to it!
To add a new entry to the blog:
- Select the blog in the Site Outline.
Click the "Pages" item in the toolbar and select "Text Page."
You can now begin writing in this page to create your very first blog entry. Don't forget that you can easily add Embedded Images or Callouts to the page's content.
Publishing the Blog
If you have already set up your site for publishing then all you need do is click the "Publish" item in the toolbar.
If not, you will have to set up your Host as described in "Setting Up Your Host." Then click the "Publish" toolbar item.
RSS Feed
Of course, half the point of a blog is that people can subscribe to it. This is done through an RSS feed. Sandvox automatically creates and maintains a blog's RSS feed.
You want people to easily discover the RSS feed for your blog. blog.
To add an RSS Badge to a blog:
- Select the blog in the Site Outline.
- Click the "Pagelets" item in the toolbar and select "RSS Badge."
- Open the Selection Inspector and customize the new RSS Badge as you like. or Movie page instead?
Don't forget that you can also easily create such pages by dragging a photo or movie (from the Media Browser or another application such as the Finder) into the Site Outline. Once you've created the page, click the green Editing Marker
to add some text to it.
If you're feeling really adventurous, (and you have Sandvox Pro), you could even experiment with a Raw HTML blog post!
Blog Centered Sites
Of course, you may not want the blog to simply be a part of your site, but actually the center of it. Sandvox makes it easy to do this, since the Home Page itself is actually a Collection.
To turn your Home Page into a blog:
- Select the Home Page in the Site Outline.
- Open the Page Inspector.
From the "Collection: popup, select "Weblog."
Now, when you want to add a post to your blog, simply add it to the Home Page itself rather than a separate dedicated Collection.
Why not enable comments on your blog? Comments allow your readers to leave behind their thoughts on items you post.
When you first create the blog, Sandvox automatically prepares it for comments. All you have to do is enter your Haloscan ID into the Site Properties Outline and open the Page Inspector.
- If necessary, click the triangle at the bottom of the Inspector to disclose the full collection settings.
Set the "Items" box to a more appropriate number.
An alternative is to truncate the summary text of each article so that it takes up less space. Visitors can then click on an item in the blog to be taken to the full article:
To set the truncation of blog summaries:
- Select the blog in the Site Outline and open the Page Inspector.
- If necessary, click the triangle at the bottom of the Inspector to disclose the full collection settings.
Enter the number of characters you wish to limit each summary to. also change the format the date is displayed in.
For information on how to set this, please see the "Dates & Times" section of the "Site Inspector" article.
Blog Maintenance
Sandvox maintains its blog entirely from the site document, so there is no way to remotely update your blog remotely unless you have your Mac and your Sandvox document with you. Moreover, Sandvox does not integrate with other blogging tools such as WordPress or Blogger; they maintain their own database on their servers, and are not Mac-based. | http://docs.karelia.com/z/Blogging_with_Sandvox.html | 2008-05-16T04:58:41 | crawl-001 | crawl-001-006 | [array(['img/b/b6/Blog_in_Site_Outline.png', 'Blog in Site Outline'],
dtype=object)
array(['img/7/7a/Add_Photo_Page.png', 'Add Photo Page'], dtype=object)] | docs.karelia.com |
There service is a special kind of Targeted Chain in which the pivot Handler is known as a "provider".
Need to explain how "FaultableHandlers" and "WSDD Fault Flows" fit in.
The EngineConfiguration interface belongs to the Message Flow subsystem which means that the Message Flow subsystem does not depend on the Administration subsystem..
The structure of the WSDD grammar is mirrored by a class hierarchy of factories
for runtime artefacts.
The following diagram shows the classes and the types of runtime
artefacts they produce (a dotted arrow means "instantiates").:
There are three layers inside the tool:. | http://docs.pushtotest.com/LibJavadoc/axis-1.4/docs/architecture-guide.html | 2008-05-16T04:59:58 | crawl-001 | crawl-001-006 | [] | docs.pushtotest.com |
Recommended Reading
Here are things you can read to understand and use Axis better. Remember, you also have access to all the source if you really want to find out how things work (or why they don't).
Axis installation, use and internals
Web Services with JAX-RPC and Apache Axis.
by Pankaj Kumar. Starting with a 10000 ft. view of Web Services, prior technologies, current and emerging standards, it quickly gets into the nitty-gritties of using JAX-RPC and Apache Axis for writing and executing programs. Has a nice coverage of different invocation styles - generated stubs, dynamic proxy and dynamic invocation interface. A good place to start if you are new to Web Services and Axis.
The author also maintains a
Web Services Resource Page
.
Apache Axis SOAP for Java
Dennis Sosnoski covers Axis. This is another good introductory guide.
Enabling SOAPMonitor in Axis 1.0
.
Dennis Sosnoski on how to turn the SOAP monitor on and off, and use it to log your application.
Axis in JRrun
Macromedia authored coverage of using Axis from inside JRun.
Ask the magic eight ball
Example of using an Axis service with various caller platforms/languages.
Configure Axis Web Services
Kevin Jones talks a bit about configuring axis, showing how to return handwritten WSDL from the ?wsdl query.
Different WSDL Styles in Axis
Kevin Jones looks at the document and wrapped styles of WSDL2Java bindings.
Web Services Security with Axis
This sample chapter from
J2EE Security for Servlets, EJBs and Web Services
book explains use of username/password based authentication, SSL and Servlet security mechanisms for securing the transport and WS-Security for securing the messages with Apache Axis. To illustrate use of handlers for enforcing security, it describes the implementation of a bare-bones WS-Security mechanism using
VeriSign's TSIK
and JAX-RPC handlers. You can also
view
or
download
the complete sources discussed in the chapter.
Specifications
SOAP Version 1.1
Remember that SOAP1.1 is not an official W3C standard.
SOAP Version 1.2 Part 0: Primer
This and the follow-on sections cover what the W3C think SOAP is and how it should be used.
Web Services Description Language (WSDL) 1.1
RFC 2616: Hypertext Transfer Protocol -- HTTP/1.1
This is HTTP. You really do need to understand the basics of how this works, to work out why your web service doesn't :)
SOAP with Attachments API for Java (SAAJ)
SAAJ enables developers to produce and consume messages conforming to the SOAP 1.1 specification and SOAP with Attachments note.
Java API for XML-Based RPC (JAX-RPC)
The public API for Web Services in Java. JAX-RPC enables Java technology developers to develop SOAP based interoperable and portable web services. JAX-RPC provides the core API for developing and deploying web services on the Java platform.
XML Schema Part 0: Primer
The W3C XML Schema, (WXS) is one of the two sets of datatype SOAP supports, the other being the SOAP Section 5 datatypes that predate WXS. Complicated as it is, it is useful to have a vague understanding of this specification.
Java API for XML Messaging (JAXM)
JAXM.
Explanations, articles and presentations
A Gentle Introduction to SOAP
Sam Ruby tries not to scare people.
A Busy Developer's Guide to WSDL 1.1
Quick intro to WSDL by the eponymous Sam Ruby.
Axis - an open source web service toolkit for Java
by Mark Volkmann, Partner, Object Computing, Inc. A very good introduction to SOAP and Axis. Highly Recommended.
When Web Services Go Bad
Steve Loughran tries to scare people. A painful demonstration how deployment and system management are trouble spots in a production service, followed by an espousal of a deployment-centric development process. Remember, it doesn't have to be that bad.
JavaOne 2002, Web Services Today and Tomorrow
(Java Developer connection login required)
The Java Web Services Tutorial: Java API for XML-based RPC
This is part of Sun's guide to their Java Web Services Developer Pack. The examples are all based on their JWSDP, but as Axis also implements JAX-RPC, they may all port to Axis.
Using Web Services Effectively.
Blissfully ignoring issues such as versioning, robustness and security and all the other details a production Web Service needs, instead pushing EJB as the only way to process requests, this is Sun's guide to using web services in Java. It also assumes Java is at both ends, so manages to skirt round the interop problem.
Making Web Services that Work
A practical but suspiciously code free paper on how to get web services into production. As well as coverage of topics such as interop, versioning, security, this (57 page) paper looks at the deployment problem, advocating a fully automated deployment process in which configuration problems are treated as defects for which automated test cases and regresssion testing are appropriate. Happyaxis.jsp is the canonical example of this. The author, Steve Loughran also looks a bit at what the component model of a federated web service world would really be like.
Interoperability
To infinity and beyond - the quest for SOAP interoperability
Sam Ruby explains why Interop matters so much.
The Wondrous Curse of Interoperability
Steve Loughran on interop challenges (especially between .NET and Axis), and how to test for them.
Advanced topics
Requirements for and Evaluation of RMI Protocols for Scientific Computing
Architectural Styles and the Design of Network-based Software Architectures
The theoretical basis of the REST architecture
Investigating the Limits of SOAP Performance for Scientific Computing
Architectural Principles of the World Wide Web
The W3C architects say how things should be done.
Books
Beginning Java Web Services).
Building Web Services with Java: Making Sense of XML, SOAP, WSDL and UDDI.
J2EE Security for Servlets, EJBs and Web Services.
Authors, publishers: we welcome additions to this section of any books which have some explicit coverage of Axis. Free paper/pdf copies and other forms of bribery accepted.
External Sites covering Web Services.
MSDN Web Services
These are the microsoft pages; little Axis coverage but plenty on Web Service specifications.
WebServices.xml.com
The O'Reilly site is always up to date, interesting and useful. It doesn't advocate a single technology (REST, SOAP, RDF...), or a single product. As such, it retains its independence and value. | http://docs.pushtotest.com/LibJavadoc/axis-1.4/docs/reading.html | 2008-05-16T05:00:46 | crawl-001 | crawl-001-006 | [] | docs.pushtotest.com |
Tests that the asset files for all core libraries exist.
This test also changes the active theme to each core theme to verify the libraries after theme-level libraries-override and libraries-extend are applied.
Asset
Gets all libraries for core and all installed modules.
References Drupal\moduleHandler(), and Drupal\root().
{}
Ensures that all core module and theme library files exist.
Checks that all the library files exist.
References Drupal\root(). | http://drupal8docs.diasporan.net/d2/db8/class_drupal_1_1system_1_1_tests_1_1_asset_1_1_resolved_library_definitions_files_match_test.html | 2017-06-22T22:15:11 | CC-MAIN-2017-26 | 1498128319912.4 | [] | drupal8docs.diasporan.net |
<<
- We indicate that we want to recover the system with the command “rear recover” and the following variables SERVER=”DRLM Server Ip” REST_OPTS=-k ID=”Rear Client Host Name”, in our case “rear recover SERVER=192.168.2.120 REST_OPTS=-k ID=fosdemcli4”
- The system is recovering.
- System recovered! So we only have to restart the client.
| http://docs.drlm.org/en/2.1.2/Restore.html | 2017-06-22T22:13:52 | CC-MAIN-2017-26 | 1498128319912.4 | [array(['_images/RecoverImage1_v2.png', '_images/RecoverImage1_v2.png'],
dtype=object)
array(['_images/RecoverImage2.jpg', '_images/RecoverImage2.jpg'],
dtype=object)
array(['_images/RecoverImage3.jpg', '_images/RecoverImage3.jpg'],
dtype=object)
array(['_images/RecoverImage4.jpg', '_images/RecoverImage4.jpg'],
dtype=object)
array(['_images/RecoverImage5.jpg', '_images/RecoverImage5.jpg'],
dtype=object) ] | docs.drlm.org |
veraPDF Validation
The veraPDF validation engine implements the PDF/A specification using formalizations of each “shall” statement (i.e., each requirement) in PDF/A-1, PDF/A-2 and PDF/A-3. These rules are implemented as XML documents known as Validation Profiles and are applied by veraPDF software at runtime.
Read a little more about the veraPDF PDF/A validation model, rules and profiles.
Validation rules
These pages distinctly identify each rule used by the software and provides details on the error(s) triggering a failure of the rule.
For each error the Object type, test condition, applicable specification and conformance level, as well as additional references, are provided.
Understandings based on the discussions of the PDF Validation Technical Working Group are included as appropriate.
Rules for PDF/A-2 and PDF/A-3. | http://docs.verapdf.org/validation/ | 2017-06-22T22:03:46 | CC-MAIN-2017-26 | 1498128319912.4 | [] | docs.verapdf.org |
Constructs a new updater.
References Drupal\root().
Returns an Updater of the appropriate type depending on the source.
If a directory is provided which contains a module, will return a ModuleUpdater.
Referenced by UpdateReady\submitForm(), and UpdateManagerInstall\submitForm().
Determines.
References drupal_basename(), file_scan_directory(), and Unicode\substr().
Returns the full path to a directory where backups should be written.
Referenced by Updater\getInstallArgs().
Get Extension information from directory.
References Drupal\service(), and t().
Stores the default parameters for the Updater.
References Updater\getBackupDir().
Referenced by Updater\install(), and Updater\update().
Gets the name of the project directory (basename).
References drupal_basename().
Returns the project name from a Drupal info file.
References Drupal\service(), and t().
Referenced by UpdateManagerInstall\submitForm(), and UpdaterTest\testGetProjectTitleWithChild().
Determines which Updater class can operate on the given directory.
Referenced by UpdateUploadTest\testUpdateDirectory().
Installs a Drupal project, returns a list of next actions.
References Updater\getInstallArgs(), Updater\makeWorldReadable(), Updater\postInstall(), Updater\postInstallTasks(), Updater\prepareInstallDirectory(), and t().
Performs a backup.
Referenced by Updater\update().
Ensures that a given directory is world readable.
Referenced by Updater\install(), Updater\prepareInstallDirectory(), and Updater\update().
Performs actions after installation.
Referenced by Updater\install().
Returns an array of links to pages that should be visited post operation.
Referenced by Updater\install().
Performs actions after new code is updated.
Referenced by Updater\update().
Returns an array of links to pages that should be visited post operation.
Referenced by Updater\update().
Makes sure the installation parent directory exists and is writable.
References Updater\makeWorldReadable(), and t().
Referenced by Updater\install(), and Updater\update().
Updates a Drupal project and returns a list of next actions.
References Updater\getInstallArgs(), Updater\makeBackup(), Updater\makeWorldReadable(), Updater\postUpdate(), Updater\postUpdateTasks(), Updater\prepareInstallDirectory(), and t(). | http://drupal8docs.diasporan.net/da/d27/class_drupal_1_1_core_1_1_updater_1_1_updater.html | 2017-06-22T22:13:10 | CC-MAIN-2017-26 | 1498128319912.4 | [] | drupal8docs.diasporan.net |
Pipeline¶
Naturally, the pipeline.Pipeline class implements the processing pipeline.
Validator registration¶
Register by constructor¶
The pipeline.Pipeline constructor takes a validators keyword argument, which is a list of validators to run in the pipeline.
Each value in the validators list is expected to be a string describing the path to a validator class, for import via importlib.
Optionally, for builtin validators, the validator.name property can be used as a shorthand convenience.
Example
- ::
- validators = [‘structure’, ‘schema’] # short hand names for builtin validators validators = [‘my_module.CustomValidatorOne’, ‘my_module.CustomValidatorTwo’] # import from string validators = [‘structure’, ‘my_module.CustomValidatorTwo’] # both combined
Register by instance method¶
Once you have a pipeline.Pipeline instance, you can also register validators via the register_validator method.
Registering new validators this way will by default append the new validators to any existing pipeline.
You can define the position in the pipeline explicitly using the position argument.
Example
- ::
- pipeline = Pipeline(args, kwargs) pipeline.register_validator(‘structure’, structure_options) pipeline.register_validator(‘spec’, spec_options, 0)
Validator options¶
Pipeline takes an options keyword argument to pass options into each validator in the pipeline.
options should be a dict, with each top-level key being the name of the validator.
Example
- ::
- pipeline_options = {
- ‘structure’: {
- # keyword args for the StructureValidator
}, ‘schema’: {# keyword args for the SchemaValidator
}
}
Running the pipeline¶
Run the pipeline with the run method.
run in turn calls the supported validator methods of each validator.
Once the data table has been run through all validators, run returns a tuple of valid, report, where:
- valid is a boolean, indicating if the data table is valid according to the pipeline validation
- report is tellme.Report instance, which can be used to generate a report in various formats
Validator arguments¶
Most validators will have custom keyword arguments for their configuration.
Additionally, all validators are expected to take the following keyword arguments, and exhibit certain behaviour based on their values.
The base.Validator signature implements these arguments.
fail_fast¶
fail_fast is a boolean that defaults to False.
If fail_fast is True, the validator is expected to stop processing as soon as an error occurs.
transform¶
transform is a boolean that defaults to True.
If transform is True, then the validator is “allowed” to return transformed data.
The caller (e.g., the pipeline class) is responsible for persisting transformed data.
report_limit¶
report_limit is an int that defaults to 1000, and refers to the maximum amount of entries that this validator can write to a report.
If this number is reached, the validator should stop processing.
row_limit¶
row_limit is an int that defaults to 20000, and refers to the maximum amount of rows that this validator will process.
Validator attributes¶
Validators are also expected to have the following attributes.
A tellme.Report instance. See TellMe
Validators are expected to write report entries to the report instance.
pipeline.Pipeline will call validator.report.generate for each validator to build the pipeline report. | http://goodtables.readthedocs.io/en/latest/pipeline.html | 2017-06-22T22:06:22 | CC-MAIN-2017-26 | 1498128319912.4 | [] | goodtables.readthedocs.io |
ceph-fuse ceph-fuse process.
Any options not recognized by ceph-fuse will be passed on to libfuse.
Detach from console and daemonize after startup.
Use ceph.conf configuration file instead of the default /etc/ceph/ceph.conf to determine monitor addresses during startup.
Connect to specified monitor (instead of looking through ceph.conf).
Use root_directory as the mounted root, rather than the full Ceph tree.
ceph-fuse is part of Ceph, a massively scalable, open-source, distributed storage system. Please refer to the Ceph documentation at for more information. | http://docs.ceph.com/docs/master/man/8/ceph-fuse/ | 2017-06-22T22:16:53 | CC-MAIN-2017-26 | 1498128319912.4 | [] | docs.ceph.com |
Thumbnail Files and Generators¶
Following is some basic documentation of the classes and methods related to thumbnail files and lower level generation.
- class easy_thumbnails.files.ThumbnailFile(name, file=None, storage=None, thumbnail_options=None, *args, **kwargs)¶
A thumbnailed file.
This can be used just like a Django model instance’s property for a file field (i.e. an ImageFieldFile object).
- image¶
Get a PIL Image instance of this file.
The image is cached to avoid the file needing to be read again if the function is called again.
- set_image_dimensions(thumbnail)¶
Set image dimensions from the cached dimensions of a Thumbnail model instance.
- class easy_thumbnails.files.Thumbnailer(file=None, name=None, source_storage=None, thumbnail_storage=None, remote_source=False, generate=True, *args, **kwargs)¶
A file-like object which provides some methods to generate thumbnail images.
You can subclass this object and override the following properties to change the defaults (pulled from the default settings):
- source_generators
- thumbnail_processors
- generate_thumbnail(thumbnail_options, high_resolution=False, silent_template_exception=False)¶
Return an unsaved ThumbnailFile containing a thumbnail image.
The thumbnail image is generated using the thumbnail_options dictionary.
- get_existing_thumbnail(thumbnail_options, high_resolution=False)¶
Return a ThumbnailFile containing an existing thumbnail for a set of thumbnail options, or None if not found.
- get_options(thumbnail_options, **kwargs)¶
Get the thumbnail options that includes the default options for this thumbnailer (and the project-wide default options).
- get_thumbnail(thumbnail_options, save=True, generate=None, silent_template_exception=False)¶
Return a ThumbnailFile containing a thumbnail.
If a matching thumbnail already exists, it will simply be returned.
By default (unless the Thumbnailer was instanciated with generate=False), thumbnails that don’t exist are generated. Otherwise None is returned.
Force the generation behaviour by setting the generate param to either True or False as required.
The new thumbnail image is generated using the thumbnail_options dictionary. If the save argument is True (default), the generated thumbnail will be saved too.
- get_thumbnail_name(thumbnail_options, transparent=False, high_resolution=False)¶
Return a thumbnail filename for the given thumbnail_options dictionary and source_name (which defaults to the File’s name if not provided).
- save_thumbnail(thumbnail)¶
Save a thumbnail to the thumbnail_storage.
Also triggers the thumbnail_created signal and caches the thumbnail values and dimensions for future lookups.
- source_generators = None¶
A list of source generators to use. If None, will use the default generators defined in settings.
- thumbnail_exists(thumbnail_name)¶
Calculate whether the thumbnail already exists and that the source is not newer than the thumbnail.
If the source and thumbnail file storages are local, their file modification times are used. Otherwise the database cached modification times are used.
- class easy_thumbnails.files.ThumbnailerFieldFile(*args, **kwargs)¶
A field file which provides some methods for generating (and returning) thumbnail images.
- class easy_thumbnails.files.ThumbnailerImageFieldFile(*args, **kwargs)¶
A field file which provides some methods for generating (and returning) thumbnail images.
- easy_thumbnails.files.database_get_image_dimensions(file, close=False, dimensions=None)¶
Returns the (width, height) of an image, given ThumbnailFile. Set ‘close’ to True to close the file at the end if it is initially in an open state.
Will attempt to get the dimensions from the file itself if they aren’t in the db.
- easy_thumbnails.files.generate_all_aliases(fieldfile, include_global)¶
Generate all of a file’s aliases.
- easy_thumbnails.files.get_thumbnailer(obj, relative_name=None)¶
Get a Thumbnailer for a source file.
The obj argument is usually either one of the following:
- FieldFile instance (i.e. a model instance file/image field property).
- A string, which will be used as the relative name (the source will be set to the default storage).
- Storage instance - the relative_name argument must also be provided.
Or it could be:
A file-like instance - the relative_name argument must also be provided.
In this case, the thumbnailer won’t use or create a cached reference to the thumbnail (i.e. a new thumbnail will be created for every Thumbnailer.get_thumbnail() call).
If obj is a Thumbnailer instance, it will just be returned. If it’s an object with an easy_thumbnails_thumbnailer then the attribute is simply returned under the assumption it is a Thumbnailer instance) | http://easy-thumbnails.readthedocs.io/en/2.1/ref/files/ | 2017-06-22T22:05:28 | CC-MAIN-2017-26 | 1498128319912.4 | [] | easy-thumbnails.readthedocs.io |
A frequently asked question in the technology industry is whether one should favor appliance based solutions (hardware or virtual) or software-based solutions (which need to be installed and configured); a valid question as products in the same category often take these two different approaches. While the cost, performance, maintenance and support for these approaches are similar, the differences in security are often a source of concern.
Appliance advantages
Anything running on a host can be considered a potential security risk. If a component is not actually required then it is safer not to install it. Vulnerabilities in the many tools and utilities installed and running in a default installation of an OS are known and exploited. The appliance approach provides a tightly controlled system in which only the essential tools and utilities are installed. These tools and utilities, including the Red Hat Enterprise Linux (RHEL) 6 OS, are hardened to allow only authorized access and ensure the integrity of the system. See Appliance hardening for more information.
Security updates
A considerable advantage of the appliance approach over a software solution is a known and understood system in which the interaction between components is designed and knowingly limited to that design. When patches to the RHEL OS are released, BMC Software checks whether they are appropriate for the appliance. Many are inappropriate due to the subset of packages used in the appliance. Where a patch is appropriate, it is tested and rolled into the next available OS upgrade or product release; urgent updates are released as a Hot Fix.
BMC provides regular upgrades to the BMC Discovery OS each month; each upgraded package is checked to see whether it is appropriate for the appliance. See OS upgrades for more information.
Do not download and apply Red Hat OS patches
It is most important that OS patches released by Red Hat are not downloaded and applied to the appliance; this might result in reduced rather than enhanced security. For example, a patch might reinitialize a service, modify security configurations, or change kernel parameters, all of which can cause unexpected behavior.
Patch versioning
Red Hat does not increment the base version of any of the packages until the whole release is incremented. Instead they continuously apply security patches. This means simpler security scanners can report false positives as they only look at the base version of the packages.
Software-based solutions
In contrast, software-based solutions are generally installed on servers that are supplied by customers. This approach has advantages as it provides the customer full control over how to implement, configure and support the solution. However it includes several aspects to consider which can impact the security of the system. Vendors often specify a minimum set of OS packages that are required to support the software-based solutions, placing customers in the difficult position of choosing what is needed versus what is not. Not only does this allow potential security vulnerabilities, it also makes the task of hardening the system far more complicated.
Finally, the more packages there are on servers, the more security patches a company must monitor to secure these servers. Since the customer generally provides the server, the burden of monitoring security patches falls on the customers. | https://docs.bmc.com/docs/display/DISCO110/An+appliance-based+solution | 2017-06-22T22:14:33 | CC-MAIN-2017-26 | 1498128319912.4 | [] | docs.bmc.com |
Hello All,
We would like to inform you that the Mofluid iOS app for Magento 1 has been updated to release version 1.17.17 (App Store version 20.58).
The highlighted features in Mofluid-1 iOS App are as follows –
1. Product custom options on the PDP (UI, functionality, and validation).
2. Added grouped products.
3. Implemented RTL on the grouped-product PDP.
4. Added the "CCAvenue" payment gateway.
5. Resolved a coupon price issue.
6. Made the cart page consistent with the Android app.
7. Resolved a TabBar swipe issue.
8. Fixed a banner issue (the banner was not shown after switching stores).
xx-disconnect-clients-on-ixn-server-disconnect
Section: statserver
Default Value: no
Valid Values: yes, no
Changes Take Effect: After restart
Controls whether Stat Server disconnects all clients—including voice clients—upon receiving notification of disconnection from Interaction Server.
In large environments, setting this option to yes enables Stat Server to handle Interaction Server disconnections more efficiently by ceasing to open new statistics from Stat Server clients—a time-consuming operation in very large environments. It is assumed that you will perform the reconnection after the Interaction Server disconnect has been resolved.
InteractionAbandoned (Tenants)
This retrospective action is unconditionally generated on a Tenant upon receiving the EventAbandoned.
For more information about Stat Server actions, see InteractionAbandoned (Tenants).
qos-recovery-enable-lms-messages
Section: overload
Default Value: false
Valid Values: true, false
Changes Take Effect: After restart
Introduced: 8.5.108
Enables Standard recovery-related log messages, which are introduced for debugging purposes:
10072 “GCTI_SS_OVERLOAD_RECOVERY_STARTED - Overload recovery started on %s (%d current CPU usage)”
10073 “GCTI_SS_OVERLOAD_RECOVERY_FAILED - Overload recovery failed on %s (%d current CPU usage)”.
qos-default-overload-policy
Section: overload
Default Value: 0
Valid Values: 0, 1, 2
Changes Take Effect: After restart
Introduced: 8.5.108
Defines the global overload policy.
If this option is set to:
- 0 (zero) - sends and updates for requested statistics can be cut
- 1 - only sends of statistics to Stat Server clients can be cut
- 2 - nothing can be cut. Stat Server updates and sends all requested statistics.
protection
Section: overload
Default Value: false
Valid Values: true, false
Changes Take Effect: Immediately
Introduced: 8.5.108
Controls whether the overload protection is applied during the Stat Server overload.
cut-debug-log
Section: overload
Default Value: true
Valid Values: true, false
Changes Take Effect: Immediately
Introduced: 8.5.108
Controls debug logging in the overload. If set to true, the debug log is cut during the Stat Server overload.
cpu-threshold-low
Section: overload
Default Value: 60
Valid Values: 0-100
Changes Take Effect: After restart
Introduced: 8.5.108
Defines the lower level of the main thread CPU utilization threshold, which signifies the start of the Stat Server recovery.
cpu-threshold-high
Section: overload
Default Value: 80
Valid Values: 0-100
Changes Take Effect: After restart
Introduced: 8.5.108
Defines the higher level of the main thread CPU utilization threshold, which signifies the start of the Stat Server overload.
cpu-poll-timeout
Section: overload
Default Value: 10
Valid Values: 1-60
Changes Take Effect: After restart
Introduced: 8.5.108
Defines, in seconds, how often the main thread CPU is polled.
cpu-cooldown-cycles
Section: overload
Default Value: 30
Valid Values: 1-100
Changes Take Effect: After restart
Introduced: 8.5.108
Defines the number of cpu-poll-timeout cycles in a cooldown period.
For example, if cpu-poll-timeout = 10 sec and cpu-cooldown-cycles = 30, then the cooldown period is 10 x 30 = 300 sec. This means that the main thread CPU usage should stay below the value of the cpu-threshold-low option for 300 sec; after this period, overload recovery is considered to be over.
allow-new-requests-during-overload
Section: overload
Default Value: true
Valid Values: true, false
Changes Take Effect: Immediately
Introduced: 8.5.108
Controls whether new requests can be made during the Stat Server overload.
allow-new-connections-during-overload
Section: overload
Default Value: true
Valid Values: true, false
Changes Take Effect: Immediately
Introduced: 8.5.108
Controls whether new clients can connect during the Stat Server overload.
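Taken together, the [overload] options described above would appear in the Stat Server application's options roughly as follows; the values shown here are simply the documented defaults, rendered in ini style:

[overload]
protection = false
allow-new-connections-during-overload = true
allow-new-requests-during-overload = true
cut-debug-log = true
cpu-poll-timeout = 10
cpu-cooldown-cycles = 30
cpu-threshold-low = 60
cpu-threshold-high = 80
qos-default-overload-policy = 0
qos-recovery-enable-lms-messages = false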
8.5.108.15
Stat Server Release Notes
What's New
This release contains the following new features and enhancements:
- Stat Server supports the overload protection method of reducing CPU consumption.
- The following new configuration options are added in the [overload] section:
- Stat Server supports the new DynamicOverloadPolicy option in the [<stat type>] section.
- Stat Server supports the retrospective InteractionAbandoned action for Tenants.
- Stat Server supports Windows Server 2016 and Windows Server 2016 Hyper-V. See the Supported Operating Environment: Framework page for more detailed information and a list of all supported operating systems.
Resolved Issues
This release contains the following resolved issues:
Stat Server now correctly calculates the following statistic:
Category=Formula
Expression=@MAX(CallWait; duration; <user-data dependent expression> )
Objects=Queue
Subject=DNAction
(SS-7896)
Stat Server now correctly calculates filtered Custom Formula statistics, which contain current sub-aggregates (such as @SUM), when Subject=DNStatus. (SS-7881, SS-7880)
Stat Server no longer incorrectly converts non call related ACW to call related ACW upon receiving the EventAddressInfo event, when there are a call and a non call-related ACW in progress. (SS-7874)
Stat Server no longer logs the wrong "Stat Server has removed a stuck call" diagnostic message upon receiving the EventAddressInfo event on a regular DN, when there is a UserEventReceived action in progress on that DN with non-empty ConnID. (SS-7871)
Stat Server now correctly applies the xx-disconnect-clients-on-ixn-server-disconnect option, disconnecting its clients before handling the EventAgentLogout, upon Interaction Server disconnect. (SS-7870)
Stat Server now correctly calculates time-sensitive Custom Formula statistics with Subject=DNStatus|AgentStatus|PlaceStatus. (SS-7834)
Stat Server no longer intermittently terminates unexpectedly on Linux 7 when trying to load multiple Java Extensions at startup. (SS-7799)
Stat Server now reads time-zone objects from the Environment tenant only. Previously, Stat Server read time-zones from all assigned tenants while it only used time-zones that are configured on the Environment tenant. (SS-7710)
Stat Server now correctly evaluates current non-incrementally computable Custom Formula statistics, aggregating on action kv-lists. (SS-7701)
Upgrade Notes
No special procedure is required to upgrade to release 8.5.108.15.
Order of Migration
Except where noted, the following tasks apply to all migration procedures:
1. Check prerequisites
Ensure all of the prerequisites required for the WFM version you are installing are met within your environment. See Migration Prerequisites.
2. Plan for down time
The database upgrade might run a long time, so a period of down-time can occur. Take this into account when scheduling your database migration.
Minimizing down time for 6.5 migration
If you are migrating a large 6.5 data set, you can minimize your data collection downtime by using the Procedure: Two-Step Migration. Consult with Genesys Professional Services or Genesys Customer Care if you need recommendations for how best to plan and ensure your existing data is migrated into your environment.
3. Disable access to WFM Web
Disable access to WFM by either redirecting traffic to an "under construction" page or stopping the WFM service.
4. Stop WFM Server components
See Starting and Stopping WFM in the Workforce Management Administrator's Guide.
5. Back up your database
Genesys recommends using a Database Management System (DBMS) to back up your database before beginning your migration or update.
Starting in 8.5.1, the WFM Backup-Restore Utility (BRU) is included in the WFM Database Utility (DBU) Installation Package (IP). Unlike the previously used WFM DBU backup file (.MDB format), which has a maximum 2 GB file size limit, the BRU uses a backup file format (.DB) that has no file size limit. For more information about the BRU, see Using the Backup-Restore Utility in the Workforce Management Administrator's Guide.
6. Migrate WFM
Complete one of the following procedures, depending on which release you are migrating to WFM 8.5.x:
- Procedure: Migrating WFM 8.x or higher (Tomcat)
- Procedure: Migrating WFM 8.x or higher (WebSphere)
- Procedure: Migrating the Database Using the BRU
- Procedure: Migrating WFM 7.x or higher (Tomcat)
- Procedure: Migrating WFM 7.x or higher (WebSphere)
- Procedure: Migrating WFM 6.5
If you are currently running a version of WFM earlier than 6.5.201.00, the WFM Database Utility automatically updates your existing database to 6.5.201.00 while migrating your data to the 7.x database. Doing so changes the original database structure in such a way that you can no longer use your original database in your existing environment.
If you have Workforce Management 6.5, the WFM Database Utility automatically performs the additional upgrades required before your data can be transferred to the new Workforce Management 7.x Database.
7. Uninstall WFM and deploy the new version
See Installing and Uninstalling WFM Components in the Workforce Management Administrator's Guide.
8. Verify your connections and settings
If you experience any connectivity issues immediately after any migration or update, do the following:
- Verify that you have the correct connections specified on the Connections tab of the Application object for each component.
- In WFM Web’s Organization module, update the following:
- Data Aggregator Name, Tenant, Password, and Time Profile for each Business Unit object.
- Data Aggregator Name, Tenant, Password, and Time Profile for each Site object.
- WFM Server for each Site object.
- Assign or unassign Agent (depending on login detected) for each Site object.
- In WFM Web’s Users module, update the Time Zone, WFM Builder object, and Role for each user.
Headless Repositories
This document describes the architecture and state of Mozilla’s headless repositories.
History Lesson
For the longest time, Mozilla operated a special Mercurial repository called Try. When people wanted to schedule a build or test job against Firefox's release automation, they would create a special Mercurial commit that contained metadata on what jobs to schedule. They would push this commit (and its ancestors) to a new head on the Try repository. Firefox release automation would continuously poll the repository via the pushlog data and would schedule jobs for new pushes / heads according to the metadata in the special commit.
This approach was simple and worked for a long time without significant issues. Unfortunately, Mercurial (and Git) have known scaling problems as the number of repository heads approaches infinity. For Mozilla, we start to encounter problems after a few thousand heads. Things started to get really bad after 10,000 heads or so.
While fixing Mercurial to scale to thousands of heads would be admirable, after talking with Mercurial core developers, it was apparent that this would be a lot of work and the success rate was not considered high, especially as we started talking about scaling to 1 million or more heads. The recommended solution was to avoid the mega-headed scaling problem alltogether and to limit ourselves to a bound number of heads.
Headless Repositories
A headless repository is conceptually a single repository with thousands of heads, but with the data for those heads stored outside the repository itself.
Clients still push to the repository as before. However, special code on the server intercepts the incoming data and siphons it off to an external store. A pointer to this external data is stored, allowing the repository to serve up this data to clients that request it.
Mozilla plans to use headless repositories for Try, which shares this model: many clients writing to a central server, with a limited, well-defined set of clients reading that data.
Technical Details
A Mercurial extension on the push server will intercept incoming changegroup data and write a Mercurial bundle of that data to S3. This is tracked in.
A relational database will record information on each bundle - the URL, what changesets it contains, etc. This database will be written to as part of push by the aforementioned Mercurial extension. This is tracked in.
A Mercurial extension on the hgweb servers will serve requests for S3-backend changesets. Clients accessing the server will be able to request data in S3 as if it is hosted in the repository itself. This is tracked in.
The hgweb servers will also expose an HTTP+JSON API that matches the existing pushlog API in order to allow clients to poll for new changes without having to change their client-side code.
Initially, a one-off server to run the headless repositories will be created. It will have one-off Mercurial versions, software stack, etc. We may revisit server topology once things are rolled out and proved. This is tracked in.
Clients that pull Try data will need to either upgrade to Mercurial 3.2 or install a custom extension that facilitates following links to S3 bundles. This is because we plan to use Mercurial’s bundle2 exchange format and a feature we want to use is only available in Mercurial 3.2.
Low-Level Details
- Client performs hg push ssh://hg.mozilla.org/try
- Mercurial queries remote and determines what missing changesets needs to be pushed.
- Client streams changeset data to server.
- Server applies public changesets to the repository and siphons draft changesets into a new bundle.
- Public changesets are committed to the repository. Draft changesets are uploaded to S3 in a bundle file.
- Server records metadata of S3-hosted files and push info into database. | https://mozilla-version-control-tools.readthedocs.io/en/latest/headless-repositories.html | 2018-11-13T03:41:45 | CC-MAIN-2018-47 | 1542039741192.34 | [] | mozilla-version-control-tools.readthedocs.io |
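To make the flow above a little more tangible, here is a deliberately simplified sketch of what the server-side intercept could look like as a Mercurial pretxnchangegroup hook. The helper functions (upload_bundle_to_s3, record_push) are hypothetical placeholders, not part of any real extension:

# hgrc on the push server (illustrative):
#   [hooks]
#   pretxnchangegroup.siphon = python:headless.siphon_hook

def siphon_hook(ui, repo, hooktype, node=None, **kwargs):
    # Everything added by this push, starting from the first new node.
    incoming = repo.revs('%s::', node)
    # Phase 0 is public; anything greater (draft/secret) gets siphoned off.
    drafts = [repo[r] for r in incoming if repo[r].phase() > 0]
    if drafts:
        url = upload_bundle_to_s3(repo, drafts)   # hypothetical helper
        record_push(repo, drafts, url)            # hypothetical helper (DB metadata)
    return False  # a false return value means the transaction is accepted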
Glossary
- config uri
  In most cases this is simply an absolute or relative path to a config file on the system. However, it can also be a RFC 1738-style string pointing at a remote service or a specific parser without relying on the file extension. For example, my-ini://foo.ini may point to a loader named my-ini that can parse the relative foo.ini file.
- loader
  An object conforming to the plaster.ILoader interface. A loader is responsible for locating and parsing the underlying configuration format for the given config uri.
- loader protocol
  A loader may implement zero or more custom named protocols. An example would be the wsgi protocol, which requires that a loader implement certain methods like wsgi_app = get_wsgi_app(name=None, defaults=None).
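For illustration, a short sketch of how these pieces are typically used from application code; the file name and section name are placeholders:

import plaster

# Resolve a loader for a config uri, asking for one that supports the "wsgi" protocol.
loader = plaster.get_loader('development.ini', protocols=['wsgi'])

# Generic settings access is available on any loader.
settings = loader.get_settings('app:main')

# Protocol-specific methods exist only on loaders that implement that protocol.
app = loader.get_wsgi_app(name='main')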
eb restore
Description
Rebuilds a terminated environment, creating a new environment with the same name, ID, and configuration. The environment name, domain name, and application version must be available for use in order for the rebuild to succeed.
Syntax
eb restore
eb restore environment_id
Options
Output
The EB CLI displays a list of terminated environments that are available to restore.
Example
$ eb restore
Select a terminated environment to restore
# Name    ID            Application Version       Date Terminated        Ago
3 gamma   e-s7mimej8e9  app-77e3-161213_211138    2016/12/14 20:32 PST   13 mins
2 beta    e-sj28uu2wia  app-77e3-161213_211125    2016/12/14 20:32 PST   13 mins
1 alpha   e-gia8mphu6q  app-77e3-161213_211109    2016/12/14 16:21 PST   4 hours
(Commands: Quit, Restore, ▼ ▲)

Selected environment alpha
Application:   scorekeep
Description:   Environment created from the EB CLI using "eb create"
CNAME:         alpha.h23tbtbm92.us-east-2.elasticbeanstalk.com
Version:       app-77e3-161213_211109
Platform:      64bit Amazon Linux 2016.03 v2.1.6 running Java 8
Terminated:    2016/12/14 16:21 PST
Restore this environment? [y/n]: y
INFO: restoreEnvironment is starting.
INFO: Created security group named: sg-e2443f72
...
The Definitive Mailman Suite Development Setup Guide
Mailman 3 consists of a collection of separate-but-linked projects, each of which has its own development setup guide. This makes sense when you want to focus on a single piece of Mailman, but can make setting up the entire Mailman Suite in one go somewhat confusing. This guide attempts to move all the information currently in the wiki and various package documentation into a single “definitive” guide.
This document currently collates information from the following sources: 1. The Mailman Wiki:
- Main development guide:
- Hyperkitty development guide:
- Postorius “web ui in 5” guide:
- Main package documentation on Readthedocs.io:
- Mailman core start guide:
- Mailman core “web ui in 5” guide:
- Mailman core “archive in 5”
- Postorius dev guide:
- Hyperkitty dev guide:
Getting prerequisites
For the most part, setup for each project will download any needed packages. However, you will need a few system packages to be sure you’ve got the necessary version of Python and its tools, git (to get the source code), postfix (a mail server), and a few other tools that are used during setup.
On Fedora, you probably want to run a dnf install along the lines shown below.
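The package list here is an assumption reconstructed from the prerequisites named above (Python with development headers for both virtualenvs used later, git, Postfix, and sassc); adjust the names to your Fedora release:

$ sudo dnf install python-devel python3-devel git postfix sassc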
If you prefer, you can substitute Exim4 for Postfix. Postfix is the MTA used by most Mailman developers, but we do support Exim 4. (Sendmail support is very much desired, but the Mailman core developers need contributors with Sendmail expertise to help.)
You will also want tox to run tests. You can get this using “pip install tox”.
HyperKitty also needs sassc. FIXME: add instructions on how to get sassc on a few platforms.
Set up a directory
Setting up the whole Mailman suite means you have to pull code from a bunch of different related repositories. You can put all the code anywhere you want, but you might want to set up a directory to keep all the pieces of mailman together. For example:
$ mkdir ~/mailman
$ cd ~/mailman
For the rest of this development guide, we are going to assume you’re using
~/mailman as your directory, but you can use whatever you want.
Set up virtual environments
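Exactly how you lay the environments out is up to you; the rest of this guide assumes a Python 3.5 virtualenv for Mailman core (venv-3.5), a Python 2.7 virtualenv for Postorius and HyperKitty (venv-2.7) with mailmanclient and django-mailman3 installed into it, and a clone of Mailman core in ~/mailman/mailman. A sketch of that layout, with the exact commands treated as assumptions:

$ cd ~/mailman
$ python3.5 -m venv venv-3.5
$ virtualenv -p python2.7 venv-2.7
$ source venv-2.7/bin/activate
$ pip install mailmanclient django-mailman3
$ deactivate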
Then, go into the mailman directory, run setup, and then run mailman info to be sure everything is set up correctly and that the right settings are in place:

$ cd mailman
$ python setup.py develop
$ mailman info
You can edit your mailman.cfg file to make any necessary changes. Then start things up:

$ mailman start
$ cd ..
You may want to deactivate your virtualenv now, since you’ll be using a different one for other components:
$ deactivate
Later on, if you need to restart Mailman (i.e. if you get the error "Mailman REST API not available. Please start Mailman core.") then you can also do that by calling the mailman executable from the venv as follows:
$ ~/mailman/venv
Set up and run Postorius
The Postorius documentation, including a more extensive setup guide, can be found here:
Make sure to install mailmanclient and django-mailman3 before setting up Postorius. (If you’re following this guide in order, you’ve just done that.)
Get the code and run setup. Make sure you’re in the 2.7 venv for Postorius:
$ cd ~/mailman
$ git clone
$ source venv-2.7/bin/activate
$ cd postorius
$ python setup.py develop
$ cd ..
$ deactivate
Postorius and HyperKitty both come with example_project directories with basic configuration so you can try them out. For this tutorial, however, we'll be using a project that combines both instead.
Set up a mail server
To be able to actually receive emails, you need to setup a mail server. Mailman core receives emails over LMTP Protocol, which most of the modern MTAs support. However, setup instructions are provided only for Postfix, Exim4 and qmail. Please refer to the MTA documentation at Mailman Core for the details.
You will also have to add some settings to your django configuration. The setup instructions are provided in django’s email documentation.
Set up and run HyperKitty
Complete guide here:
Make sure to install mailmanclient and django-mailman3 before setting up Hyperkitty. (If you’re following this guide in order, you’ve just done that.)
Get the code and run setup:
$ cd ~/mailman
$ git clone
$ source venv-2.7/bin/activate
$ cd hyperkitty
$ python setup.py develop
$ cd ..
$ deactivate
Postorius and HyperKitty both come with example_project directories with basic configuration so you can try them out. By default, they both use port 8000, so if you do want to run both example projects at the same time, do remember that you'll need to specify a different port on the command line for one of them.
However, we’re going to run them both in a single Django instance at the end of this guide, so don’t worry about ports right now.
Set up mailman-hyperkitty
mailman-hyperkitty is the package that actually sends the incoming emails to HyperKitty for archiving. Note that this is one of the components that uses Python 3.
Get it and set it up:
$ cd ~/mailman
$ git clone
$ source venv-3.5/bin/activate
$ cd mailman-hyperkitty
$ python setup.py develop
$ cd ..
$ deactivate
You'll need to fix the default mailman-hyperkitty.cfg file to use the correct URL for HyperKitty: if HyperKitty is not running at the address already given there, change base_url to match the address where it actually runs.
Link Mailman to HyperKitty
Now you have to enable HyperKitty in Mailman. To do that, edit the mailman.cfg file (in ~/mailman/mailman/var/etc, or wherever the output of mailman info says it is) and add an [archiver.hyperkitty] section.
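In current mailman-hyperkitty releases that section looks essentially like the following; treat the configuration path as a placeholder for wherever your mailman-hyperkitty.cfg actually lives:

[archiver.hyperkitty]
class: mailman_hyperkitty.Archiver
enable: yes
configuration: /home/user/mailman/mailman-hyperkitty/mailman-hyperkitty.cfg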
Run the Mailman Suite (combined hyperkitty+postorius)
You can run HyperKitty and Postorius as separate applications, but many developers are going to want to run them on a single server. The configuration files for this are in a repository called mailman-suite.
The first time you run the suite, you will want to set up a superuser account. This is the account you will use in the web interface to set up your first domains. Please enter an email address, otherwise the database won't be set up correctly and you will run into errors later:

$ cd ~/mailman
$ git clone
$ source venv-2.7/bin/activate
$ cd mailman-suite/mailman-suite_project
$ python manage.py migrate
$ python manage.py createsuperuser
You'll want to run the following command in a window where you can leave it running, since it dumps all the Django logs to the console:
$ python manage.py runserver
At this point, you should be able to see Mailman Suite running! In the default setup, you can go to the runserver address (localhost, port 8000 by default) and start poking around. You should be able to use the superuser account you created to log in and create a domain and then some lists.
The default config file uses a dummy email backend created by this line in settings.py:
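The setting in question is assumed here to be Django's built-in console email backend, which produces exactly the behavior described next:

EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'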
Using this backend, all emails are printed to the console (rather than sent as email), so you can get the URL needed to verify your email address from the console output.
Don’t leave the console email backend configured and running once you get to the point where you want to send real emails, though! | http://docs.mailman3.org/en/latest/devsetup.html | 2017-05-23T01:10:20 | CC-MAIN-2017-22 | 1495463607245.69 | [] | docs.mailman3.org |
This step corresponds to test_agg_lib_4.
A-functions are methods that the AggreGate server can execute on the device. In our access control project, we will add an "open door" A-function which will be used for remote unlocking of the door. This A-function will have an argument — for how long the door will have to remain unlocked.
The A-function's argument will be of an overriding nature. When set to 0, it will have no effect and the door will be unlocked for the period of time specified by the UT A-variable. When not zero, the argument will override the UT A-variable once.
The steps
Notice that...
1. Aggregate.xtxt now contains the UL A-function.
2. Callback_agg_device_function() implements the necessary code for executing the UL A-function.
3. Notice how Agg_record_decode() is used to extract the argument of the A-function.
4. Agg_record_encode() is used to return A-function value to the AggreGate server. The value is always "success" in our case.
The result
Single-click on the device in the AggreGate tree, and locate Unlock the Door in the Related Actions pane below. Click Unlock the Door, and you will get a dialog requesting Door Unlock Duration. Input a value, press OK. The function will be executed and return Success.
Of course, there is no code (yet) that actually unlocks the door. We will take care of this later.
Can't see Unlock the Door in the list? Navigate away from your test device, i.e. click on something else in the tree. Come back — and you will see the newly added A-function. | http://docs.tibbo.com/taiko/lib_agg_step_by_step_functions.htm | 2017-05-23T01:12:43 | CC-MAIN-2017-22 | 1495463607245.69 | [] | docs.tibbo.com |
gfxd list-missing-disk-stores -mcast-port=10334
Connecting to distributed system: mcast=/239.192.81.1:10334
1f811502-f126-4ce4-9839-9549335b734d [curwen.local:/Users/yozie/pivotal/gfxd/Pivotal_GemFireXD_13_bNNNNN_platform/server2/./datadictionary]
Applies To: Windows Server 2016, Windows Server 2012 R2, Windows Server 2012
If your organization is currently running Windows 2000 Active Directory, you can use the process in this topic to deploy Windows Server 2008 Active Directory Domain Services (AD DS).
Note
If you want to perform an in-place upgrade of an existing Windows 2000 AD DS domain controller to Windows Server 2008, you must first upgrade the server to Windows Server 2003, and then upgrade it to Windows Server 2008.
The following illustration shows the steps for deploying the Windows Server 2008 AD DS in a network environment that is currently running Windows 2000 Active Directory.
Note
If you want to set the domain or forest functional level to Windows Server 2008, all domain controllers in your environment must run the Windows Server 2008 operating system.
Consolidating resource and account domains that are upgraded in place from a Windows 2000 environment as part of your Windows Server 2008 AD DS deployment may require interforest or intraforest domain restructuring. Restructuring AD DS domains between forests helps you reduce the complexity of your organization and the associated administrative costs. Restructuring AD DS domains within a forest helps you reduce the number of domains in your forest and, with it, the administrative overhead.
For more information about deploying AD DS in an organization that is currently running Windows 2000 Active Directory, see Checklist: Deploying AD DS in a Windows 2000 Organization.
Warning
This is currently in Beta. DO NOT use in production.
A trigger object is something that can be called after a “change” on an object. It’s a bit like Zabbix trigger, and should be used only if you need it. In most cases, direct check is easier to setup :)
Here is an example that will raise a critical check if the CPU is too loaded:
Note
If your trigger is made to edit the output, add the trigger_broker_raise_enabled parameter to the service definition. If not, Shinken will generate 2 broks (1 before and 1 after the trigger), which can lead to bad data in broker modules (e.g. Graphite).
define service{
        use                             local-service         ; Name of service template to use
        host_name                       localhost
        service_description             Current Load trigger
        check_command                   check_local_load!5.0,4.0,3.0!10.0,6.0,4.0
        trigger_name                    simple_cpu
        trigger_broker_raise_enabled    1
}
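The service above points at trigger_name simple_cpu, which lives in a separate trigger file. The sketch below is illustrative only: it assumes Shinken's standard trigger helper functions (perf, ok, warning, critical) and made-up thresholds, so adapt it to your own perfdata names and limits:

# simple_cpu trigger file (illustrative)
load = perf(self, 'load1')          # read the 'load1' perfdata metric from the check
if load >= 10:
    critical(self, 'CRITICAL - load too high | load1=%d' % load)
elif load >= 5:
    warning(self, 'WARNING - load getting high | load1=%d' % load)
else:
    ok(self, 'OK - load is fine | load1=%d' % load)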
Plugin Manager Help Screens
This is a category page for topics related to the Plugin Manager Help Screens.
To appear on this page each topic page should have the following code inserted at the end:
[[Category:Plugin Manager Help Screens]]
Pages in category "Plugin Manager Help Screens"
The following 6 pages are in this category, out of 6 total.
Changelog
The release versions that are sent to the Python package index are also tagged in Github. You can see the tags through the Github web application and download the tarball of the version you’d like. Additionally PyPI will host the various releases of confire.
The versioning uses a three part version system, “a.b.c” - “a” represents a major release that may not be backwards compatible. “b” is incremented on minor releases that may contain extra features, but are backwards compatible. “c” releases are bugfixes or other micro changes that developers should feel free to immediately update to.
Contributors
I’d like to personally thank the following people for contributing to confire and making it a success!
Versions
The following lists the various versions of confire and important details about them.
v0.2.0
- tag: v0.2.0
- deployment: July 31, 2014
- commit: (latest)
This release added some new features, including support for environment variables as settings defaults, ConfigurationMissing warnings, and ImproperlyConfigured errors that you can raise in your own code to warn developers about the state of configuration.
This release also greatly increased the amount of available documentation for Confire.
v0.1.1
- tag: v0.1.1
- deployment: July 24, 2014
- commit: bdc0488
Added Python 3.3 support thanks to @tyrannosaurus who contributed to the changes that would ensure this support for the future. I also added Python 3.3 travis testing and some other minor changes. | http://confire.readthedocs.io/en/latest/changelog.html | 2017-05-23T01:08:22 | CC-MAIN-2017-22 | 1495463607245.69 | [] | confire.readthedocs.io |
Mark C. Chien, M.D.
Dr. Chien is a board-certified doctor specializing in internal medicine. He completed his medical education and residency at the University of Illinois.
Welcome to the Internal Medicine and Gastroenterology practices of Suite 118, 680 N. Lake Shore Drive. We appreciate your interest in our office. All of us are board certified internists on the faculty at the Northwestern University Feinberg School of Medicine, and are proud of our reputations for providing high-quality medical care. Each of us practices separately, but we provide coverage for one another at certain times. You may wish to bookmark this website for future reference. No medical advice is provided on this site.
Dr. Chien is a board-certified doctor specializing in internal medicine. He completed his medical education and residency at the University of Illinois.
Dr. DeBacker is experienced in the private practice of General Internal Medicine in a university setting, where he helps guide patients through the sometimes perplexing maze of the 21st century medical system.
Dr. Dillon practices general internal medicine with an emphasis on preventive medicine, treatment of cardiovascular disease, hypertension, high cholesterol, diabetes, asthma, allergies, cancer screening and osteoporosis.
Dr. Repasy is a board-certified doctor specializing in internal medicine. He is also the Clinical Assistant Professor of Medicine, at the Feinberg School of Medicine at Northwestern University.
Dr. Ruchim’s practice involves all aspects of gastroenterology. This includes disorders of the esophagus, stomach, liver, pancreas, gallbladder and colon. His office hours on Mondays and Wednesdays are available by telephoning our office reception staff.
Dr. Sipkins is a board-certified doctor specializing in Internal Medicine. He is a Clinical Assistant Professor of Medicine at Northwestern’s Feinberg School of Medicine. | http://680docs.com/ | 2014-08-20T06:48:20 | CC-MAIN-2014-35 | 1408500800767.23 | [] | 680docs.com |
Database Info:
The 'Database Structure' can be found by going to the main page, category 'System Functions', and clicking 'dbcheck'. This can be useful to check whether the tables or attributes are all OK, but of course you can't modify them.
*Under Construction*
Related pages
- System functions
- Setup: Setup phplist
- eventlog: View the eventlog
CategoryDocumentation | http://docs.phplist.com/DatabaseCheckInfo.html | 2014-08-20T06:49:18 | CC-MAIN-2014-35 | 1408500800767.23 | [] | docs.phplist.com |
The Arbiter object is a way to define Arbiter daemons that manage the configuration and the different architecture components of Shinken (such as distributed monitoring and high availability). The Arbiter reads the configuration, cuts it into parts (N schedulers = N parts), and then sends them to all the other elements. It manages the high-availability part: if an element dies, it re-routes the configuration managed by the failed element to a spare one. Its other role is to receive input from users (such as external commands from shinken.cmd) and forward it to the other elements. There can be only one active arbiter in the architecture.
The Arbiter definition is optional. If no arbiter is defined, Shinken will “create” one for the user. There will be no high availability for the Arbiter (no spare), and it will use the default port on the server where the daemon is launched.
Variables in red are required, while those in black are optional. However, you need to supply at least one optional variable in each definition for it to be of much use.
define arbiter{
        arbiter_name    Main-arbiter
        address         node1.mydomain
        host_name       node1
        port            7770
        spare           0
        modules         module1,module2
}