content
stringlengths
0
557k
url
stringlengths
16
1.78k
timestamp
timestamp[ms]
dump
stringlengths
9
15
segment
stringlengths
13
17
image_urls
stringlengths
2
55.5k
netloc
stringlengths
7
77
20210628 - Upgrade of php fixed the page rendering issue. Book Creator Add this page to your book Add this page to your book Book Creator Remove this page from your book Remove this page from your book This is an old revision of the document! Looking good. I've never understood why some people are wary of, even scared of, the CLI. — Brian Lawrence 2012/09/23 13:42 Thanks. Neither have I! Perhaps, it's got something to do with the fear of the unknown:) — Marcin Herda 2012/09/23 13:59 —- Please suggest any other topics that you feel should be included here (without going into too much details - this is an introduction after all) — Marcin Herda 2012/09/23 14:29
https://docs.slackware.com/talk:howtos:cli_manual:introduction?rev=1348435805&mddo=print
2021-10-16T01:38:23
CC-MAIN-2021-43
1634323583408.93
[]
docs.slackware.com
Table of Contents Product Index Perfect your night out with Juna High Heels and Clutch for Genesis 8 Female(s). These fun High Heels and Clutch come in a stylish crocodile look with a variety of beautiful pastel colors and a special luxury version. Juna Heels will fit most Genesis 8 Female Characters. As a bonus, you get 3 different poses to hold the clutch and the option to switch the bow of the heels on and off. Don't leave the house without your favorite purse and shoes - get Juna High Heels and Clutch!.
http://docs.daz3d.com/doku.php/public/read_me/index/70663/start
2021-10-16T02:36:49
CC-MAIN-2021-43
1634323583408.93
[]
docs.daz3d.com
Creating ToDos To create a ToDo Load a model in Trimble Connect for Browser's 3D Viewer. When you have found an issue, click the Add ToDo button in the toolbar. The New ToDo panel opens. Enter the required information: Title and Description. Add the optional information, such as Priority, Type, Status and Due Date. Assign the ToDo to a user or to a user group. Click Save.
https://docs.3d.connect.trimble.com/todos/creating-todos
2021-10-16T02:37:26
CC-MAIN-2021-43
1634323583408.93
[]
docs.3d.connect.trimble.com
What is a YAML file? YAML file consists of a language YAML (YAML Ain’t Markup Language) which is a Unicode based data-serialization language; used for configuration files, internet messaging, object persistence, etc. YAML uses the .yaml extension for its files. Its syntax is independent of a specific programming language. Basically, the YAML is designed for human interaction and to work well with modern programming languages. Support for serializing arbitrary native data structures increased the readability of the YAML files, but it has made the parsing and file generation process complicated a little. Brief History YAML was first proposed in 2001 and was developed by Clark Evans, Ingy döt Net, and Oren Ben-Kiki. YAML was first said to mean “Yet Another Markup Language” to indicate its purpose as a markup language. It was later repurposed as “YAML Aint Markup Language” to indicate its purpose as data-oriented. YAML File Format YAML file consists of the following data types - Scalars: Scalars are values like Strings, Integers, Booleans, etc. - Sequences: Sequences are lists with each item starting with a hyphen (-). Lists can also be nested. - Mappings: Mapping gives the ability to list keys with values. Syntax Whitespace: Whitespace indentation is used to indicate nesting and overall structure. name: John Smith contact: home: 1012355532 office: 5002586256 address: street: | 123 Tornado Alley Suite 16 city: East Centerville state: KS # This is a YAML Comment Lists: Hyphen (-) is used to indicate list members with each member on a separate line. List members can also be enclosed in square brackets ([…]) with members separated by commas (,). - A - B - C [A, B, C] Associative Array: An associative array is surrounded by curly brackets ({…}). The keys and values are separated by colon(:) and each pair is separated by comma (,). {name: John Smith, age: 20} Strings: String can be written with or without double-quotes (") or single-quotes ('). Sample String "Sample String" 'Sample String' Scalar Block content: Scalar content can be written in block notation by using the following: - |: All live breaks are significant. - >: Each line break is folded to space. It removes the leading whitespace for each line. data: | YAML (YAML Ain't Markup Language) is a data-serialization language data: ? YAML (YAML Ain't Markup Language) is a data-serialization language Multiple Documents: Multiple documents are separated by three hyphens (—) in a single stream. Hyphens indicate the start of the document. Hyphens are also used to separate directives from document content. The end of the document is indicated by three dots (…). --- Document 1 --- Document 2 ... Type: To specify the type of value, double exclamation marks (!!) are used. a: !!float 123 b: !!str 123 Tag: To assign a tag to a note, an ampersand (&) is used and to reference that node, an asterisk (*) is used. name: John Smith bill-to: &id01 street: | 123 Tornado Alley Suite 16 city: East Centerville state: KS ship-to: *id01 Directives: YAML documents can be preceded by directives in a stream. Directives begin with a percent sign (%) followed by the name and then the parameters separated by spaces. %YAML 1.2 --- Document content YAML file example Here you can see a docker yaml file example below: topology: database_node_name: docker_controller docker_controller_node_name: docker_controller self_service_portal_node_name: docker_controller kvm_compute_node_names: kvm_compute1 docker_compute_node_names: docker_compute1 YAML vs JSON Basically, both JSON and YAML are developed to provide a human-readable data interchange format. The YAML is realized as a superset of JSON format. It means that we can parse JSON using a YAML parser. Although the practical implementation of this theory is little tricky. Therefore, some basic differences between YAML and JSON are given below:
https://docs.fileformat.com/programming/yaml/
2021-10-16T02:31:22
CC-MAIN-2021-43
1634323583408.93
[]
docs.fileformat.com
Date: Fri, 14 Apr 95 11:58:52 PST From: "Wayne Hernandez" <[email protected]> To: [email protected] Subject: 2.1 GB SCSI DIsk won't newfs Message-ID: <[email protected]> Next in thread | Raw E-Mail | Index | Archive | Help I'm trying to install 2.0-950322-SNAP (will probably change to the April before the weekend is thru). I can assign the partitions to the drive, but when I tell it to proceed, the system just locks up. I have my system configured as follows: 2.1 GB Seagate "Barracuda" external SCSI ID=2 SONY SDT-4000 DAT Tape Drive SCSI ID=3 w/terminator Maxtor 540MB IDE Drive (300 MB for Dos, rest is FreeBSD) 386DX-40 MB 8 MB Ram SoundBlaster SCSI/Soundcard (recognizes the tapedrive/Harddrive) 3COM 3C503 TP/AUI Port 300, IRQ 3, IOMEM 0xdc000 Intel Ethernet Express PRO TP/BNC (can't get configured yet) Port 320, IRQ 10, IOMEM 0x0, IOSIZE 32768 Is there something I left out? I had no problems installing to the IDE disk, but don't have enough room to load source for compiling a kernel to support the Intel card. Wayne Want to link to this message? Use this URL: <>
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=90983+0+/usr/local/www/mailindex/archive/1995/freebsd-questions/19950416.freebsd-questions
2021-10-16T03:04:15
CC-MAIN-2021-43
1634323583408.93
[]
docs.freebsd.org
Routing. Warning This package has been deprecated.
https://docs.netgate.com/pfsense/en/latest/packages/routed.html
2021-10-16T02:48:53
CC-MAIN-2021-43
1634323583408.93
[]
docs.netgate.com
Encryption in Transit This page discusses how to secure communications between clients and Stardog. Page Contents - Steps to setup SSL - 1. Create or acquire a certificate - 2. Configure the Stardog server - 3. Configure the Stardog client - 4. Enable SSL on server startup - 5. Test Stardog client and server connection All network traffic between clients and Stardog can be performed over either HTTP or HTTPS protocols. To ensure the confidentiality of user authentication credentials when using remote connections, it is highly recommended to configure the Stardog server to only accept connections that are encrypted with SSL. Steps to setup SSL 1. Create or acquire a certificate - If you already have a .crtfile and private key, continue to step 2. - If you do not have a cerificate, you may execute the following OpenSSL command to generate a self-signed certificate ( myCert.crt) and a 4096-bit private key ( key.pem). openssl req -x509 -newkey rsa:4096 -keyout key.pem -out myCert.crt -days 365 -nodes Answer the certificate signing request (CSR) information prompt to generate the private key and self-signed certification. The -x509 option tells the req utility to create a self-signed cerificate. The -days 365 option specifies that the certificate will be valid for 365 days. There are many other ways to generate a self-signed certificate. The above command is just an example. 2. Configure the Stardog server The Stardog server ultimately needs a Keystore to enable SSL. Continuing with the example command in the above step, we combine our private key ( key.pem) and certificate ( myCert.crt) into a PCKS12 file to then be imported into the a Keystore using Java’s keytool utility. Bundle the certificate and private key together using the following OpenSSL command to take a private key ( key.pem) and a certificate ( myCert.crt) and combine them into a PKCS12 file. openssl pkcs12 \ -export \ -in myCert.crt \ -inkey key.pem \ -out myPkcs.p12 Import the .p12file into a Keystore ( my-keystore.jks) using Java’s keytool utility. keytool -importkeystore -destkeystore my-keystore.jks -srckeystore myPkcs.p12 -srcstoretype PKCS12 Set the following server settings in the stardog.propertiesfile to specify Keystore information. # location of the keystore javax.net.ssl.keyStore=/path/to/my-keystore.jks # the keystore password javax.net.ssl.keyStorePassword=changeit If your Keystore type is not jks, you should specify the following additional setting in your stardog.properties. # substitute in whichever Keystore type you have javax.net.ssl.keyStoreType=pkcs12 All the above properties are checked first in stardog.properties; then in JVM args passed in from the command line, e.g. -Djavax.net.ssl.keyStorePassword=mypwd. export STARDOG_SERVER_JAVA_ARGS="-Djavax.net.ssl.keyStore=/path/to/my-keystore.jks -Djavax.net.ssl.keyStorePassword=changeit" If you’re creating a Server programmatically with Java via ServerBuilder, you can specify values for these properties using the appropriate ServerOptionswhen creating the server. These values will override anything specified in stardog.propertiesor via normal JVM args. 3. Configure the Stardog client The Stardog client uses standard Java security components to access a store of trusted certificates. By default, it trusts a list of certificates installed with the Java runtime environment, but it can be configured to use a custom trust store. The Stardog client uses an X509TrustManager. Generate a separate Truststore that imports only the certificate using Java’s keytool. keytool -import -file myCert.crt -keystore my-truststore.jks The above invocation of the keytoolutility creates a new trust store named my-truststore.jksand initializes it with the certificate in myCert.crt. The tool will prompt for a passphrase to associate with the trust store. This is not used to encrypt its contents, but can be used to ensure its integrity. Set the STARDOG_JAVA_ARGSenvironment variable to set the Truststore up in the Stardog CLI environment export STARDOG_JAVA_ARGS="-Djavax.net.ssl.trustStore=/path/to/my-truststore.jks -Djavax.net.ssl.trustStorePassword=changeit" When connecting to the Stardog server with the Stardog and Stardog Admin CLI client, it is assumed you are using the default server. You may specify a different default server by adding the following JVM argument to the STARDOG_JAVA_ARGSenvironment variable. # replace with whatever server url you want export STARDOG_JAVA_ARGS=-Dstardog.default.cli.server= Java Applications For custom Java applications that use the Stardog client, the system properties javax.net.ssl.trustStore and javax.net.ssl.trustStorePassword can be set programmatically or when the JVM is initialized. 4. Enable SSL on server startup You must explicitly tell Stardog to startup with SSL. You have 2 options when starting up the server with ssl enabled. Have the server accept connections over HTTP and HTTPS or to have the server only accept connections over HTTPS. Optionally support SSL connections To enable Stardog to optionally support SSL connections, pass --enable-ssl to the server start command. stardog-admin server start --enable-ssl By default, the HTTP server will be accessible via port 5820, and the HTTPS server will be accessible via port 5821. If you need to modify the ports Stardog’s HTTP and HTTPS server use, pass in new port numbers to the --port and -ssl-port options in the [server start] command. stardog-admin server start --enable-ssl --port 8081 --ssl-port 8082 Require the server to use SSL only If you want to require the server to use SSL only, that is, to reject any non-SSL connections, then pass --require-ssl to the server start command. stardog-admin server start --require-ssl By default, the HTTPS server will be accessible via port 5820. If you need to modify this port, specify a new port in --port option. stardog-admin server start --require-ssl --port 8080 Configuring SSL when Stardog is controlled with systemctl When the Stardog service is managed by systemd, it is required to modify the <stardog-installation-directory>/stardog-server.sh script in order to pass in the --enable-ssl or --require-ssl flags to the server start command. For example: ${STARDOG_BIN}/stardog-admin server start --daemon --require-ssl --home ${STARDOG_HOME} --port ${PORT} "${@}" 5. Test Stardog client and server connection Stardog’s HTTP client supports SSL when the https: scheme is used in the database connection string or in server url. For example, the following Stardog command will initiate an SSL connection: stardog --server server status. Refer to Step 3 to ensure that the Trustore was created with the correct certificate. The Stardog client uses standard Java security components to access a store of trusted certificates.Cert.crt file in the keytool invocation. A client may also fail to authenticate to the server if the hostname in the Stardog database connection string does not match a name contained in the server certificate..
https://docs.stardog.com/operating-stardog/security/encryption-in-transit
2021-10-16T03:27:26
CC-MAIN-2021-43
1634323583408.93
[]
docs.stardog.com
comment¶ This ViewHelper prevents rendering of any content inside the tag Note: Contents of the comment will still be parsed thus throwing an Exception if it contains syntax errors. You can put child nodes in CDATA tags to avoid this. = Examples = <code title=”Commenting out fluid code”> Before <f:comment> This is completely hidden. <f:debug>This does not get rendered</f:debug> </f:comment> After </code> <output> Before After </output> <code title=”Prevent parsing”> <f:comment><![CDATA[ <f:some.invalid.syntax /> ]]></f:comment> </code> <output> </output> Note: Using this view helper won’t have a notable effect on performance, especially once the template is parsed. However it can lead to reduced readability. You can use layouts and partials to split a large template into smaller parts. Using self-descriptive names for the partials can make comments redundant.
https://docs.typo3.org/other/typo3/view-helper-reference/9.5/en-us/typo3fluid/fluid/latest/Comment.html
2021-10-16T02:11:36
CC-MAIN-2021-43
1634323583408.93
[]
docs.typo3.org
Before deploying RabbitMQ, you must meet all deployment prerequisites. Additional Prerequisites To provide access to all RabbitMQ instances, you must configure the RabbitMQ load balancer. DNS – DNS A and PTR records must exist for each RabbitMQ instance and the load balancer to enable forward and reverse lookup of IP addresses and hostnames.
https://docs.vmware.com/en/VMware-Cloud-Provider-Lifecycle-Manager/1.1/VMware-Cloud-Provider-Lifecycle-Manager-11-Deployment-And-Administration/GUID-08C7184A-DD9C-4F8B-8794-E2B1F67EBC8D.html
2021-10-16T01:43:54
CC-MAIN-2021-43
1634323583408.93
[]
docs.vmware.com
VMware Cloud on AWS GovCloud maintains an inventory of workload virtual machines in your SDDC. VMs are listed by name and number of tags. Procedure - Log in to the VMware Cloud on AWS GovCloud at. - On the Networking & Security tab, click .If a virtual machine has any tags, the number of tags is shown in the Tags column. Click the number to view the tags. To add or remove VM tags, click the vertical ellipsis at the beginning of the VM row and select Edit to display the tag editor. Click the icon to add more tags. See Add Tags to an Object in the NSX-T Data Center Administration Guide for more information about tagging NSX-T objects.
https://docs.vmware.com/en/VMware-Cloud-on-AWS-GovCloud-(US)/2021/vmc-govcloud-networking-security/GUID-B4C32660-114C-4DA8-8137-99D118AEFC0F.html
2021-10-16T03:59:13
CC-MAIN-2021-43
1634323583408.93
[]
docs.vmware.com
For more detailed information read Complaints. Initial status Complainant can submit claim, upload documents, cancel claim, and re-submit it. Procuring entity can upload documents and answer to claim. Complainant can cancel claim. Complainant can cancel claim, upload documents, agree or disagree with decision. Reviewer can upload documents and review complaint. Terminal status Claim recognized as invalid. Claim recognized as declined. Claim recognized as resolved. Claim cancelled by complainant. Claim ignored by procuring entity.
https://openprocurementtenderbelowthreshold.readthedocs.io/en/stable/complaints.html
2021-10-16T03:32:53
CC-MAIN-2021-43
1634323583408.93
[]
openprocurementtenderbelowthreshold.readthedocs.io
Unit Settings Project units are stored in a universal format - which allows project users to define how they would like to see the units displayed. Unit settings can be changed in the Trimble Connect applications. Trimble Connect has expanded unit settings for the project system and allows users to specify display precision on each unit e.g. Length, Area, Volume Angle and Measurements Length. Set the project units Open your project in Trimble Connect for Browser's 3D Viewer.. Open the Menu. Under Settings, go to the Units section. Select the Unit system: Imperial, Metric, or Custom. Select the unit of measure and display precision for distance, area, volume, weight and angle. When you are finished, close the menu.
https://docs.3d.connect.trimble.com/application-settings/unit-settings
2021-10-16T03:35:53
CC-MAIN-2021-43
1634323583408.93
[]
docs.3d.connect.trimble.com
Manage your Databricks account (legacy) There are some administrative tasks that only the Databricks account owner can perform. The account owner is typically the user who created the Databricks account. This section covers the following tasks: - Access the account console (legacy) - Manage your subscription (legacy) - Configure your AWS account (cross-account IAM role) - Configure AWS storage (Legacy) - Configure audit logging - View billable usage (legacy) - Deliver and access billable usage logs - Download billable usage logs using the Account API - Analyze billable usage log data - Monitor usage using cluster and pool tags - Databricks workload types: feature comparison The account management processes included in this section are focused on non-E2 accounts (including free trial and credit-card billed accounts). If your account is on the E2 version of the platform, see Manage your Databricks account (E2).
https://docs.databricks.com/administration-guide/account-settings/index.html
2021-10-16T03:59:52
CC-MAIN-2021-43
1634323583408.93
[]
docs.databricks.com
Installation Requirements Fast Events is integrated with Mollie as payment provider, providing a variety of payment options. If online payments are used, the plugin currently only works for companies, associations, foundations … located in a SEPA country. With Mollie there are no fixed recurring costs, you only pay for successful transactions. Press the button below to create your free Mollie account. There is no region restriction for free tickets or RSVP events. Note Fast Events has been tested on WordPress 5.6 and later together with PHP 7.4. Older versions of WordPress and PHP are not supported! Make sure the PHP extensions gd, imagick and opcache are enabled. With most hosting providers this can be done via DirectAdmin or cPanel. Fast Events. Should conflicts occur with third-party software, we may provide support at our discretion. For performance reasons Fast Events is using its own tables and not the WordPress custom post types approach. It requires the presence of the InnoDB storage engine in your database engine. Installation There are 3 different ways to install Fast Events, as with any other registered WordPress plugin. Using the WordPress Dashboard Navigate to the Add New in the plugin dashboard Fast Events Click Install Now Activate the plugin on the Plugin Dashboard Uploading in WordPress Dashboard Download the latest version of this plugin from Navigate to the Add New in the plugins dashboard Navigate to the Upload area Select the zip file (from step 1.) from your computer Click Install Now Activate the plugin in the Plugin dashboard Using FTP Download the latest version of this plugin from Unzip the zip file, which will extract the fast-events directory to your computer Upload the fast-events directory to the /wp-content/plugins/directory in your web server Activate the plugin in the Plugin dashboard Demo.
https://docs.fast-events.eu/en/latest/getting-started/installation.html
2021-10-16T02:40:35
CC-MAIN-2021-43
1634323583408.93
[array(['../_images/Mollie.png', 'Mollie'], dtype=object)]
docs.fast-events.eu
Regional Market Availability¶ The tables below represent the current availability by regional market. If the desired regional market is not listed, refer to the Microsoft Regions availability or submit a support ticket directly to Microsoft Azure. * Australia is a Microsoft Managed Country for sales through all customer purchase scenarios except the Enterprise Agreement customer purchase scenario.
https://docs.netgate.com/tnsr/en/latest/platforms/azure/availability.html
2021-10-16T02:44:45
CC-MAIN-2021-43
1634323583408.93
[]
docs.netgate.com
The welcome notification sequence allows you to greet your new subscriber to your Shopify store. You can set up a sequence of notifications to build a personal connection with these subscribers and keep them on your engaged for a longer time. Welcome Notification Sequence is part of the Basic plan (free plan) and is enabled by default. Here's how you can enable welcome notification sequences through the PushOwl dashboard: 1. Select 'Automation' on your Dashboard and click on 'Welcome Notifications'. 2. Click on the switch next to Welcome Notifications to enable it. 3. Switch on specific reminders in the sequence. 4. You can also choose when you want to send them out. 5. Click on the pencil icon next to the notification to customize the message. 6. Here, you can update the title, message, link, and button. Merchants on the Business and Enterprise plan can add a hero image to their welcome notification. Once customized, you can click on "Save". 7. You can also enable a sequence of notifications. On the welcome notifications page, scroll down to find two other inactive notifications in the sequence. You can click on the switch to enable it, set the time after which it should be sent, and customize the message. Your welcome notification sequence is now set up!
https://docs.pushowl.com/en/articles/2320421-welcome-notification-sequence
2021-10-16T01:39:44
CC-MAIN-2021-43
1634323583408.93
[array(['https://downloads.intercomcdn.com/i/o/75746111/bd6711dc54d79ba73e24ea7b/2.9.1.jpg?expires=1620300359&signature=49361f8b9e88328fdaa09c72c27414095ea3b4e888942b0e03efbd8dc0d05545', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/280952124/b45539491ceb3b2e167b8a77/Group+1.png?expires=1620300359&signature=f719228cfa7f467df0074aaa5e27ef36674ea656dfe5affb495c882b12bc1a5f', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/280952465/eece3967cd648a881d11760f/Group+2+%281%29.png?expires=1620300359&signature=991d9e983849886289cc8cc944a43529d6ff54bcb354f9c33fa4ed8d3035c0e4', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/280952251/050802feeaedea8cdc9c91a7/Group+3.png?expires=1620300359&signature=f552db6da8895ff480397029e687f04645df2b1892110aaf8753acdaf7996f23', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/280952514/54740ef7483bc0ecf05741f2/Group+4.png?expires=1620300359&signature=b046a52c3c7ae0967e7ecd73e3e792ddb2bba32437909869b52d64bd2dad0daa', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/280952817/34c74d6d494afc99bc84ab4e/Group+5.png?expires=1620300359&signature=91e587ea3da716bf9177816333f714c33b991adce57854028536ddf45d68ce91', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/333502831/53328235b2a679ef3a0befb0/Screenshot+2021-05-06+at+4.31.02+PM.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/333500323/e0ae500016c01c6bba7129c8/Screenshot+2021-05-06+at+4.16.55+PM.png?expires=1620300359&signature=4fc4c153760defd172d9b2f97ee91932c0d5e1454b7cefd37fb133b96a6e849b', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/280952914/32719966cbd4ea1a7771f114/Group+6.png?expires=1620300359&signature=99f854ca001af26682f990759ab88b8bec140fd014bcb552be0a398ac6e18ca4', None], dtype=object) ]
docs.pushowl.com
Getting started¶ Use the following information to learn how to authenticate, send API requests, and complete basic operations by using the Rackspace Cloud Load Balancers API..
https://docs.rackspace.com/docs/cloud-load-balancers/v1/getting-started/
2021-10-16T02:29:04
CC-MAIN-2021-43
1634323583408.93
[]
docs.rackspace.com
torque at the rigidbody's centre of mass. A torque is conceptually a force being applied at the end of an imaginary lever, with the fulcrum at the centre of mass. A torque of five units could thus be equivalent to a force of five units pushing on the end of a lever one unit long, or a force of one unit on a lever five units long. Unity's units are arbitrary but the principle that torque = force x lever length still applies. Note that unlike a 3D Rigidbody, a Rigidbody2D can only rotate in one axis and so torque is a float value rather than a vector. See Also: AddForce, AddForceAtPosition, Rigidbody.AddTorque. Did you find this page useful? Please give it a rating:
https://docs.unity3d.com/2017.3/Documentation/ScriptReference/Rigidbody2D.AddTorque.html
2021-10-16T03:23:06
CC-MAIN-2021-43
1634323583408.93
[]
docs.unity3d.com
NetApp SolidFire Enterprise SDS (eSDS) provides the benefits of SolidFire scale out technology and NetApp Element software data services on the hardware of your choice that meets the reference configuration for SolidFire eSDS. SolidFire eSDS delivers NetApp Element software independent of the underlying hardware. This enables you to use all the Element functionality either on a NetApp-branded appliance or on a general-purpose server, which complies with the NetApp reference configuration. Here are the key features of SolidFire eSDS:
http://docs.netapp.com/sfe-122/topic/com.netapp.doc.sfe-sds-ig/GUID-F1BDD19F-AF33-4CDE-B67F-C5E17D4E6DE9.html
2021-10-16T03:28:51
CC-MAIN-2021-43
1634323583408.93
[]
docs.netapp.com
Branding Overview The branding feature contains the majority of content and styling that your customers will see as they use your platform. The key areas of content and styling that can be configured are: Logo and browser favorite icon Default package for new customers Theme and primary highlight color Onboarding content Joining the platform landing page Legal agreement Registration emails Welcome page Notification emails Getting started To modify branding options choose ‘Branding’ from the side navigation. A page will display with many branding options. Branding options include: System name - This is the name emails will appear to come from, typically this would be your organization or department name System email address - The email address that notifications will appear to come from and replies will be sent to Join content - When a user is invited to be part of your portfolio they first sign on to the platform on the “Join” page, this content will be displayed Welcome content - Each time the user signs in to the portal they will be shown this welcome content first. It is a good place to provide instructions, links and relevant updates Welcome to our lending platform, below is a list of documentation we require in order to assess your loan request quickly. Please proceed through the steps on the left side menu. If you need to contact us please use the Secure Mail so that our communications are kept secure and confidential. Using cloud based accounting like Quickbooks Online or Xero? Be sure to select the "Connect Applications" step to automate much of your work, if you are using desktop software of spreadsheets you can upload reports on the "Upload Files" step instead. If you have questions about how to use our platform please see our FAQ at Thank you and we look forward to helping you meet your financing goals. Terms and Conditions content - Shown on the “Join” page, These conditions need to be agreed to when a user joins your portfolio Invite email - Sent when initially asking a customer to sign in and join your platform Welcome to XXXXX you're invited to our new lending platform where you can share your financial data securely … in minutes It's a one-time set up for the duration of your loan. Save Time No more preparing financial documents each month as part of your loan reporting. Your information will be shared seamlessly with us and you’ll be prompted for additional questions. Control Over Your Data Once connected, your information is encrypted and only accessible by authorised loan and credit officers. You can revoke this connection any time. If you have any login or data questions please check our FAQ at Thank you, The XXXXX Team Data Reminder email - Sent when data is required or overdue Registration email - Sent when the user completes the form on the “Join” page, it contains a confirmation link to verify their email address Registration complete email - Sent when the user completes verifying their email and can now sign in to the platform Application received email - (optional) - When an application process has been configured for your account this is the content that will be sent to the user when they submit an application Application processed email - (optional) - When an application process has been configured for your account this is the content that will be sent when you move their application to a processed status Password reset email - Sent when a user requests a new password. You may wish to also include support contact details Logo - Your company logo, note that the logo is a PNG format file and should be resized to 308px x 100px specifically Favicon - The ICO format file to be used by the users browser for a bookmark and tab icon Default Package - The package an invited user will be assigned when they use the “Join” page to onboard Primary color - The color that will be used in certain headers and graphical elements, typically this would be set to a corporate color that contrasts well with white Theme - A base set of colors and visual style for your platform Custom CSS - This is an advanced option for web designers who are familiar with Cascading Style Sheets (CSS) that want to control the visual style more tightly and override fonts and colors After modifying any of these values click ‘Save’ to have the changes take immediate effect.
https://docs.bossinsights.com/data-platform/Branding.309264425.html
2021-10-16T01:42:16
CC-MAIN-2021-43
1634323583408.93
[]
docs.bossinsights.com
Method GtkWidgetcompute_expand Declaration [src] gboolean gtk_widget_compute_expand ( GtkWidget* widget, GtkOrientation orientation ) Description [src] Computes whether a container should give this widget extra space when possible. Containers should check this, rather than looking at gtk_widget_get_hexpand() or gtk_widget.
https://docs.gtk.org/gtk4/method.Widget.compute_expand.html
2021-10-16T03:41:27
CC-MAIN-2021-43
1634323583408.93
[]
docs.gtk.org
Reduce Functions A reduce function aggregates the operational status for the BS. The alarm severity from the edges are used as input for the reduce function. For this operation the following reduce functions are available: Table 1. Status calculation Reduce Functions Name Description Highest Severity Uses the value of the highest severity, weight is ignored. Threshold Uses the highest severity found more often than the given threshold, e.g., 0.26 can also be seen as 26%, which means at least 2 of 4 alarms need to be raised to change the BS.. For the aggregation the following formula will be used as then the configured Threshold is used. In this case the Operational Status is set to Warning because the first threshold which exceeds 33% is Warning with 80%. Map Functions Business Service Daemon
https://docs.opennms.com/horizon/28.0.1/operation/bsm/reduce-functions.html
2021-10-16T03:48:21
CC-MAIN-2021-43
1634323583408.93
[]
docs.opennms.com
Outdated TYPO3 Version This documentation refers to an outdated TYPO3 version - either select a supported version or make sure to use a TYPO3 Extended Long Term Support (ELTS) version to continue getting security updates. More information about ELTS Comment A “resource” (see above) plus imgResource properties (see the example and the object reference for imgResource below). Filetypes can be anything among the allowed types defined in the configuration variable $TYPO3_CONF_VARS[‘GFX’][‘imagefile_ext’]. Standard is pdf, gif, jpg, jpeg, tif, bmp, ai, pcx, tga, png. A GIFBUILDER object. See the object reference for GIFBUILDER below.
https://docs.typo3.org/m/typo3/reference-typoscript/8.7/en-us/DataTypes/Imgresource/Index.html
2021-10-16T03:30:58
CC-MAIN-2021-43
1634323583408.93
[]
docs.typo3.org
Energy UK statement on the 2030 EU Framework On the statement on 2030 EU Framework, Energy UK said: “Focussing on the greenhouse gas target is a good move and will help drive investment in low-carbon technologies such as energy efficiency, new nuclear and renewables. The UK energy industry is up for the challenge of playing its part in bringing down the EU’s carbon emissions to forty per cent of the level in 1990. Flexibility, allowing member states to choose their own way to do their bit, will help keep costs for consumers down.”
https://docs.energy-uk.org.uk/media-and-campaigns/press-releases/20-2014/4884-energy-uk-statement-on-statement-on-the-2030-eu-framework.html
2019-04-18T16:16:45
CC-MAIN-2019-18
1555578517745.15
[]
docs.energy-uk.org.uk
). Prerequisites. Note Take note of the location of the shared drive in the Cluster Administrator before you run SQL Server Setup because you need this information to create a new failover cluster. To create a new SQL Server 2005 failover cluster Insert the Microsoft Finish. Continue. Note If the scan reports any errors, identify the error and click Exit. You are not able to complete the installation until blocking errors are removed. For more information, see Check Parameters for the System Configuration Checker. virtual SQL Server name must be unique on your network, and must have a name that is different than the host cluster and cluster nodes. To proceed, click Next. Important You cannot name an instance DEFAULT or MSSQLSERVER. Names must follow rules for SQL Server identifiers. For more information about instance naming rules, see Instance Name. Note The Virtual SQL Server Name page only appears If Setup detects that you are running MSCS. If the Virtual SQL Server Name page does not appear, cancel Setup and configure MSCS. For more help on this page, see Virtual Server enter an IP address, and then enter the IP address. Click Add. The IP address and the subnet are displayed. The subnet is supplied by MSCS. Continue to enter IP addresses for each installed network until you have populated all desired networks with an IP address.). Note The domain name cannot be a full DNS name. For example, if your DNS name is my-domain-name.com, use my-domain-name in the Domain field. SQL Server does not accept my-domain-name.com in the Domain field... Next Steps - Configure your new SQL Server installation - To reduce the attackable surface area of a system, SQL Server 2005 selectively installs and activates key services and features. For more information on. See Also Other Resources How to: Install SQL Server 2005 from the Command Prompt Help and Information Getting SQL Server 2005 Assistance
https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2005/ms179530%28v%3Dsql.90%29
2019-04-18T16:43:07
CC-MAIN-2019-18
1555578517745.15
[]
docs.microsoft.com
Contents Now Platform Capabilities Previous Topic Next Topic CMS translation Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share CMS translation You can translate CMS sites by activating internationalization plugins and manually translating custom interface strings. Two tables support the translation of a CMS site into other languages. Translated Name / Field [sys_translated]: Stores strings that are shared or commonly used within a site. These include menu section names, menu item names, site breadcrumb names, link names, and footer menu links. Internationalization plugins typically provide translations for these strings. See Localization settings . Translated Text [sys_translated_text]: Stores unique string translations which you create when you manually translate interface elements. See Translate the interface. View a translated CMS siteActivating an internationalization plugin provides a quick way to see translated strings for CMS menus, breadcrumbs, and links. For a full translation, you must translate the instance manually.Related TasksActivate the Content Management SystemConfigure Content Management sitesRelated ConceptsContent Management designContent Management integration pointsContent Management testingGlobal search in Content Management On this page Send Feedback Previous Topic Next Topic
https://docs.servicenow.com/bundle/jakarta-servicenow-platform/page/administer/content-management/concept/c_CMSTranslation.html
2019-04-18T16:50:46
CC-MAIN-2019-18
1555578517745.15
[]
docs.servicenow.com
How distributed system members. New members connect to one of the locators to retrieve the member list, which it uses to join the system. Note: Multiple locators ensure the most stable start up and availability for your distributed system. Standalone Member The standalone member has no peers, does no peer discovery, and so does not use locators. It creates a distributed system connection only to access the GemFire caching features. Running standalone has a faster startup and is appropriate for any member that is isolated from other applications. The primary use case is for client applications. Standalone members can be accessed and monitored if you enable the member to become a JMX Manager. Client Discovery of Servers Locators provide clients with dynamic server discovery and server load balancing. Clients are configured with locator information for the server system, and turn to the locators for directions to the servers to use. The servers can come and go and their capacity to service new client connections can vary. The locators continuously monitor server availability and server load information, providing clients with connection information for the server with the least load at any time. Note: For performance and cache coherency, clients must run as standalone members or in different distributed systems than their servers. You do not need to run any special processes to use locators for server discovery. The locators that provide peer discovery in the server system also provide server discovery for clients to the server system. This is the standard configuration. Multi-site Discovery In a multi-site (WAN) configuration, a GemFire cluster uses locators to discover remote GemFire clusters as well as to discover local GemFire members. Each locator in a WAN configuration uniquely identifies the local cluster to which it belongs, and it can also identify locators in remote GemFire clusters to which it will connect for WAN distribution. When a locator starts up, it contacts each remote locator to exchange information about the available locators and gateway receiver configurations in the remote cluster. In addition to sharing information about its own cluster, a locator shares information that it has obtained from all other connected clusters. Each time a new locator starts up or an existing locator shuts down, the changed information is broadcast to other connected GemFire clusters across the WAN. See Discovery for Multi-Site Systems for more information.
https://gemfire.docs.pivotal.io/95/geode/topologies_and_comm/topology_concepts/how_member_discovery_works.html
2019-04-18T16:17:23
CC-MAIN-2019-18
1555578517745.15
[]
gemfire.docs.pivotal.io
Step 11: Delete Walkthrough Resources After you have completed this walkthrough, perform the following steps to avoid being charged further for AWS resources used in the walkthrough. It's necessary that you do the steps in order, because some resources cannot be deleted if they have a dependency upon another resource. To delete AWS DMS resources On the navigation pane, choose Tasks, choose your migration task ( migratehrschema), and then choose Delete. On the navigation pane, choose Endpoints, choose the Oracle source endpoint ( orasource), and then choose Delete. Choose the Amazon Redshift target endpoint ( redshifttarget), and then choose Delete. On the navigation pane, choose Replication instances, choose the replication instance ( DMSdemo-repserver), and then choose Delete. Next, you must delete your AWS CloudFormation stack, DMSdemo. To delete your AWS CloudFormation stack Sign in to the AWS Management Console and open the AWS CloudFormation console at. If you are signed in as an IAM user, you must have the appropriate permissions to access AWS CloudFormation. Choose your CloudFormation stack, OracletoRedshiftDWusingDMS. For Actions, choose Delete stack. The status of the stack changes to DELETE_IN_PROGRESS while AWS CloudFormation cleans up the resources associated with the OracletoRedshiftDWusingDMS stack. When AWS CloudFormation is finished cleaning up resources, it removes the stack from the list.
https://docs.aws.amazon.com/dms/latest/sbs/CHAP_RDSOracle2Redshift.Steps.DeleteResources.html
2019-04-18T16:45:14
CC-MAIN-2019-18
1555578517745.15
[]
docs.aws.amazon.com
Exports a report to the specified stream in XLS format.. in XLS format using its current XLS-specific export options. These options are represented by the XlsExportOptions object returned by the ExportOptions.Xls property of a report's XtraReport.ExportOptions. This object provides the XlExportOptionsBase.ShowGridLines, XlExportOptionsBase.TextExportMode and other properties which are intended to specify parameters of the resulting XLS file. If you want to ignore current export options of a report, and use your specific settings when a report is exported to XLS, you should use the overloaded XtraReport.ExportToXls method with the options parameter. { } }
https://docs.devexpress.com/XtraReports/DevExpress.XtraReports.UI.XtraReport.ExportToXls(System.IO.Stream)
2019-04-18T16:26:49
CC-MAIN-2019-18
1555578517745.15
[]
docs.devexpress.com
Abstract Metaverse is a blockchain project that provides a foundational infrastructure for social and enterprise needs. Our goal is to construct a universe where digital assets (Metaverse Smart Token, or MST) and digital identities (Avatar) build the basis for asset transactions with the help of a value intermediary (Oracle), thus establishing a new blockchain ecosystem that will transform human society and allow us to enter the New Reality. Unlike other blockchain projects that use technology as an entry point, Metaverse started from an enterprise value creation perspective, with the relationships between people, people and assets as the core foundations of our project. We describe this relationship through the use of BISC (Built-in Smart Contract), which can reduce the technical risks of commercial applications during development and usage. Through BISC, Metaverse provides functionalities in digital assets (MST), digital identities (Avatar), Oracles, and MST exchanges. Through the use of MST, users reap the advantages of blockchain technology, such as the power to generate and distribute their own cryptocurrency. The digital identity Avatar reflects the relationship between people, people and assets, and this Avatar can be linked to MST. Through the use of Avatars, anyone can become value intermediary Oracles, and Oracles can help construct an immutable decentralized system (Reputation). MST can resolve fundamental liquidity issues in asset trading, thus solving a critical problem in any financial system. MST and Avatar are utilized under blockchain technology that is fundamentally integrated with IT systems. This process can be described as BaaS (Blockchain as a Service). BaaS is a quick and convenient way to build blockchain applications. Digital Assets (MST) Digital assets on Metaverse can be characterized as the Metaverse Smart Token, or MST. MST reemphasizes the importance of digital assets, that for smart contracts to work, they need digital assets and not the other way around. Currently, Metaverse has made technical extensions to Bitcoin’s UTXO model. Bitcoin’s UTXO features will be added to MST, including security, traceability, and ACID features. MST gives everyone the ability to issue Bitcoin at the same price. MST can be used for peer-to-peer payments and also supports a variety of financial instruments such as asset additions and asset replacements. Digital Identities (Avatar) Unlike assets such as gold, we are unable to take physical possession of digital assets. Instead, the ownership of digital assets is controlled by individuals through digital identities and secured through mathematical proofs that ensure these identities cannot be forged. As a symbol of a user’s online identity, an Avatar can be used to represent oneself and hold digital assets on the blockchain. Creating an Avatar is far more than giving your public key an alias, just as ID cards and mobile numbers are not an alias for your name. Various pieces of valuable information will be attached to each Avatar’s unique index and encrypted to ensure data privacy. Unless the Avatar’s owner grants authorization (by providing the private key signature, initiating a special transaction, or using smart contracts), users will not have access to encrypted or unencrypted information. Hence, zero-knowledge proofs and homomorphic encryption play a vital role in allowing Avatars to retrieve information such as credit scores and validation results without revealing the contents of a message. Although the Bitcoin system allows a user to hold Bitcoin anonymously using public and private keypairs, most activities in the real world require us to provide some form of personal information: for example, you must provide your age and gender to join a young female entrepreneur’s club. We call the digital identity on the Metaverse Blockchain the Metaverse Avatar.. Consensus Mechanisms Basic Information The blockchain consensus process refers to the process of objectively recording network transaction data in an immutable fashion. Consensus is mainly realized through consensus algorithms. For now, Metaverse will implement two consensus mechanisms. The details are as follows: First phase: PoW Considering ASIC’s centralized mining pool, we chose the ETHASH algorithm as the mining algorithm for Metaverse. The PoW mechanism will be maintained for some time.. Second phase: HBTH-DPoS Although PoW mining can help safeguard Metaverse’s system security in the initial years, it has flaws such as energy waste and the tendency for mining centralization. The DPoS algorithm implemented from Graphene can contribute to a high performing blockchain system..
https://docs.mvs.org/developers/mvs-whitepaper.html
2019-04-18T16:49:58
CC-MAIN-2019-18
1555578517745.15
[]
docs.mvs.org
Extension Points SonarQube provides extension points for its three technical stacks: - Scanner, which runs the source code analysis - Compute Engine, which consolidates the output of scanners, for example by - computing 2nd-level measures such as ratings - aggregating measures (for example number of lines of code of project = sum of lines of code of all files) - assigning new issues to developers - persisting everything in data stores - Web application Extension points are not designed to add new features but to complete existing features. Technically they are contracts defined by a Java interface or an abstract class annotated with @ExtensionPoint. The exhaustive list of extension points is available in the javadoc. The implementations of extension points (named "extensions") provided by a plugin must be declared in its entry point class, which implements org.sonar.api.Plugin and which is referenced in pom.xml : package org.sonarqube.plugins.example; import org.sonar.api.Plugin; public class ExamplePlugin implements Plugin { @Override public void define(Context context) { // implementations of extension points context.addExtensions(FooLanguage.class, ExampleProperties.class); } } <?xml version="1.0" encoding="UTF-8"?> <project> ... <build> <plugins> <plugin> <groupId>org.sonarsource.sonar-packaging-maven-plugin</groupId> <artifactId>sonar-packaging-maven-plugin</artifactId> <extensions>true</extensions> <configuration> <pluginClass>org.sonarqube.plugins.example.ExamplePlugin</pluginClass> </configuration> </plugin> </plugins> </build> </project> Lifecycle A plugin extension exists only in its associated technical stacks. A scanner sensor is for example instantiated and executed only in a scanner runtime, but not in the web server nor in Compute Engine. The stack is defined by the annotations @ScannerSide, @ServerSide (for web server) and @ComputeEngineSide. An extension can call core components or another extension of the same stack. These dependencies are defined by constructor injection : @ScannerSide public class Foo { public void call() {} } // Sensor is a scanner extension point public class MySensor implements Sensor { private final Foo foo; private final Languages languages; // Languages is core component which lists all the supported programming languages. public MySensor(Foo foo, Languages languages) { this.foo = foo; this.languages = languages; } @Override public void execute(SensorContext context) { System.out.println(this.languages.all()); foo.call(); } } public class ExamplePlugin implements Plugin { @Override public void define(Context context) { // Languages is a core component. It must not be declared by plugins. context.addExtensions(Foo.class, MySensor.class); } } It is recommended not to call other components in constructors. Indeed, they may not be initialized at that time. Constructors should only be used for dependency injection. Compilation does not fail if incorrect dependencies are defined, such as a scanner extension trying to call a web server extension. Still it will fail at runtime when plugin is loaded. Third-party Libraries Plugins are executed in their own isolated classloaders. That allows the packaging and use of 3rd-party libraries without runtime conflicts with core internal libraries or other plugins. Note that since version 5.2, the SonarQube API does not bring transitive dependencies, except SLF4J. The libraries just have to be declared in the pom.xml with default scope "compile": <?xml version="1.0" encoding="UTF-8"?> <project> ... <dependencies> ... <dependency> <groupId>commons-codec</groupId> <artifactId>commons-codec</artifactId> <version>1.10</version> </dependency> </dependencies> </project> Technically the libraries are packaged in the directory META-INF/lib of the generated JAR file. An alternative is to shade libraries, for example with maven-shade-plugin. That minimizes the size of the plugin .jar file by copying only the effective used classes. Hint The command mvn dependency:tree gives the list of all dependencies, including transitive ones. Configuration The core component org.sonar.api.config.Configuration provides access to configuration. It deals with default values and decryption of values. It is available in all stacks (scanner, web server, Compute Engine). As recommended earlier, it must not be called from constructors. public class MyRules implements RulesDefinition { private final Configuration config; public MyRules(Configuration config) { this.config = config; } @Override public void define(Context context) { int value = config.getInt("sonar.property").orElse(0); } } Scanner sensors can get config directly from SensorContext, without using constructor injection : public class MySensor extends Sensor { @Override public void execute(SensorContext context) { int value = context.config().getInt("sonar.property").orElse(0); } } In the scanner stack, properties are checked in the following order, and the first non-blank value is the one that is used: - System property - Scanner command-line ( -Dsonar.property=foofor instance) - Scanner tool (<properties> of scanner for Maven for instance) - Project configuration defined in the web UI - Global configuration defined in the web UI - Default value Plugins can define their own properties so that they can be configured from web administration console. The extension point org.sonar.api.config.PropertyDefinition must be used : public class ExamplePlugin implements Plugin { @Override public void define(Context context) { context.addExtension( PropertyDefinition.builder("sonar.my.property") .name("My Property") .description("This is the description displayed in web admin console") .defaultValue("42") .build() ); } } Security Values of the properties suffixed with " .secured" are not available to non-authorized users (anonymous and users without project or global administration rights). " .secured" is needed for passwords, for instance. The annotation @org.sonar.api.Property can also be used on an extension to declare a property, but org.sonar.api.config.PropertyDefinition is preferred. @Properties( @Property(key="sonar.my.property", name="My Property", defaultValue="42") ) public class MySensor implements Sensor { // ... } public class ExamplePlugin implements Plugin { @Override public void define(Context context) { context.addExtension(MySensor.class); } } Logging The class org.sonar.api.utils.log.Logger is used to log messages to scanner output, web server logs/sonar.log, or Compute Engine logs (available from administration web console). It's convenient for unit testing (see class LogTester). import org.sonar.api.utils.log.*; public class MyClass { private static final Logger LOGGER = Loggers.get(MyClass.class); public void doSomething() { LOGGER.info("foo"); } } Internally SLF4J is used as a facade of various logging frameworks (log4j, commons-log, logback, java.util.logging). That allows all these frameworks to work at runtime, such as when they are required for a 3rd party library. SLF4J loggers can also be used instead of org.sonar.api.utils.log.Logger. Read the SLF4J manual for more details. As an exception, plugins must not package logging libraries. Dependencies like SLF4J or log4j must be declared with scope "provided". Exposing APIs to Other Plugins The common use case is to write a language plugin that will allow some other plugins to contribute additional rules (see for example how it is done in the Java plugin). The main plugin will expose some APIs that will be implemented/used by the "rule" plugins. Plugins are loaded in isolated classloaders. It means a plugin can't access another plugin's classes. There is an exception for package names following pattern org.sonar.plugins.<pluginKey>.api. For example all classes in a plugin with the key myplugin that are located in org.sonar.plugins.myplugin.api are visible to other plugins.
https://docs.sonarqube.org/display/DEV/API+Basics
2019-04-18T16:32:45
CC-MAIN-2019-18
1555578517745.15
[]
docs.sonarqube.org
Discussion forums are hugely important tools in running a successful MOOC; they allow for substantive community development, in addition to being excellent sources of feedback and ideas for future iterations of the course. Moderators are the key to effectively managing these online communities. Moderators keep the discussions productive and relay important information (errors, learner confusion with or interest in particular topics, and so on) to the rest of the course team. Discussions can be moderated by any of a number of members of the course team, but dedicating enough time to moderation is the best way to cultivate a successful discussion culture. Feel free to use some or all of the information in this section to guide the contributions of your discussion moderators. Certain types of posts require more attention from the moderators than others, or might need to be handled in a particular way. Check to confirm that there is in fact an error..
https://edx.readthedocs.io/projects/open-edx-building-and-running-a-course/en/open-release-ginkgo.master/manage_discussions/discussion_guidance_moderators.html
2019-04-18T17:17:33
CC-MAIN-2019-18
1555578517745.15
[]
edx.readthedocs.io
The percentage of learners who access MOOCs using smartphones is increasing every day. Most courses on edx.org can be viewed on smartphones using the edX Android and iPhone apps, although we still recommend that learners complete graded assignments on a desktop computer, depending on the type of assessments that their courses include. For information on which exercises and tools are mobile-ready, see the table in the Introduction to Exercises and Tools section. To make the course experience for mobile learners as rewarding as it is for learners using desktop computers, keep the following best practices in mind as you design, test, and run your course. If you have included some of the more complex problem types, or have highly customized the way course content displays, edX recommends that you test your course for multiple devices and displays. To test the mobile experience of your course, sign in to your course using the edX Android or iPhone app, and view each course unit to make sure that it renders as you expect it to. Note Keep in mind that course updates that you make might not be immediately reflected in the edX mobile apps. In particular, newly published content can take up to an hour to update on the Android app.
https://edx.readthedocs.io/projects/open-edx-building-and-running-a-course/en/open-release-ginkgo.master/reaching_learners/design_for_mobile.html
2019-04-18T17:15:36
CC-MAIN-2019-18
1555578517745.15
[]
edx.readthedocs.io
Upgrading to a new Morepath version¶ Morepath keeps a detailed changelog (CHANGES) that describes what has changed in each release of Morepath. You can learn about new features in Morepath this way, but also about things in your code that might possibly break. Pay particular attention to entries marked Breaking change, Deprecated and Removed. Breaking change means that you have to update your code as described if you use this feature of Morepath. Deprecated means that your code won’t break yet but you get a deprecation warning instead. You can then upgrade your code to use the newer APIs. You can show deprecation warnings by passing the following flag to the Python interpreter when you run your code: $ python -W error::DeprecationWarning If you use an entry point to create a command-line tool you will have to supply your Python interpreter manually: $ python -W error::DeprecationWarning the_tool You can also turn these on in your code: import warnings warnings.simplefilter('always', DeprecationWarning) It’s also possible to turn deprecation warnings into an error: import warnings warnings.simplefilter('error', DeprecationWarning) A Deprecated entry in the changelog changes into a Removed in a future release; we are not maintaining deprecation warnings forever. If you see a Removed entry, it pays off to run your code with deprecation warnings turned on before you upgrade to this version.
https://morepath.readthedocs.io/en/latest/upgrading.html
2019-04-18T16:27:16
CC-MAIN-2019-18
1555578517745.15
[]
morepath.readthedocs.io
Table of Contents Wi-Fi to eth (bridge) routing This Howto describes, how to interconnect wireless and wired network interfaces on the same Linux computer, to enable unmodified TCP/IP packets to pass from one interface to the other. In other places this is mentioned as network bridge or Wi-Fi line extender or Wi-Fi Internet share. The reason for this HOWTO: the word bridge is misleading For a network bridge we assume a device that transfers unmodified network packets from one network connection to the other. One can create a bridge device (virtual) and add members to it. This works only for bridge members of type wired - eth network cards. A network Bridge “connects” members on level 3 of the OSI model. That means communication on TCP/IP level. When you want to add a Wi-Fi device to the bridge, you hit a barrier: Wi-Fi devices communicate on level 2 of the OSI model. You can find many manuals on the Internet that document how to circumvent this (in the form of putting the Wi-Fi card in 4addr mode). This simply DOES NOT WORK! The Wi-Fi network card (member of the bridge) authenticates and connects to the Wireless Access Point (AP), but TCP/IP packets do not travel over the connection. So searching for “wifi eth bridge” does not return any useful solution. The culprit is the word “bridge”. General solution A working solution is “Proxy ARP Routing”. You simply enable IP forwarding and then for every device connected to the wired (eth) side of a “bridge” you have to add a routing line to the routing table. This can be automated by a program like parprouted - the Proxy ARP routing daemon. Solution for Slackware, step-by-step Tested and working on Slackware64-14.2, kernel-4.11.6, CPU i5-7200 This solution is for static IP addresses. See below link of original source for a scenario that uses DHCP. Assumptions: We want to interconnect one Wi-Fi and one wired (eth) network card - the network devices wlan0 and eth0. Prepare Slackware box so that you are able to communicate over the Wi-Fi adapter (using NetworkManager, rc.inet1 or other means…), making sure that the wired (eth) adapter is not being used. I had set up WPA2 AES verification with NetworkManager to get a usable wpa_supplicant.conf configuration file which I used later with rc.inet1. Disable all on-boot network configurations (i.e. make sure that rc.networkmanager or other files for network setup are not executable) and set rc.inet1 executable. - IP forwarding must be enabled in the kernel (since the 2.1 release the Linux kernel does not require an explicit compilation option for this) - download & compile & install - edit /etc/rc.d/rc.inet1.confso to enable wlan0and eth0. Assign them static IP addresses and set wlan0to the lowest index and connect to the AP Below are example lines from /etc/rc.d/rc.inet1.conf- the only ones without comment sign “#” at the beginning and for WPA2 Wi-Fi authentication IFNAME[1]="eth0" IPADDR[1]="10.200.200.223" NETMASK[1]="255.255.255.0" GATEWAY="10.200.200.1" DEBUG_ETH_UP="no" IFNAME[0]="wlan0" IPADDR[0]="10.200.200.222" NETMASK[0]="255.255.255.0" USE_DHCP[0]="" DHCP_HOSTNAME[0]="" WLAN_MODE[0]=Managed WLAN_ESSID[0]="R7500" WLAN_WPA[0]="wpa_supplicant" WLAN_WPADRIVER[0]="wext" - set /etc/rc.d/rc.ip_forwardexecutable: # chmod +x /etc/rc.d/rc.ip_forward - add a line /usr/local/sbin/parprouted wlan0 eth0 to /etc/rc.d/rc.localand make sure that this file is executable That's all. Reboot and you have a working Wi-Fi - eth bridge, also called Wi-Fi extender or Wi-Fi Internet share. Note on parprouted compilation: The parprouted man page section “Requirements” says: “parprouted requires the “ip” program from iproute2 tools to be installed in /sbin. If it is installed in another location, please replace ”/sbin/ip“ occurrences in the source with the correct path”. Slackware installs the ip program as /sbin/ip so you should be OK. DHCP enabled variant Look below for a solution in a source link. Sources * Written by Zdenko Dolar, August 2017 * Original source:
http://docs.slackware.com/howtos:network_services:wifi_to_eth_bridge_routing
2019-04-18T17:04:30
CC-MAIN-2019-18
1555578517745.15
[array(['https://docs.slackware.com/lib/plugins/bookcreator/images/add.png', None], dtype=object) array(['https://docs.slackware.com/lib/plugins/bookcreator/images/del.png', None], dtype=object) ]
docs.slackware.com
Book Creator Add this page to your book Book Creator Remove this page from your book The soon-to-be-released Slackware 14 Already out, please correct. Several suggestions here. - This article has no author. Whoever wrote it should also sign it. - Eventually, include a pointer to an appropriate login manager like XWM or SLiM (article already exists). - Here's an excellent article about Xfce, published last week by outstanding IT author Carla Schroder. Maybe include the link at the bottom of the page? Niki Kovacs Tue Sep 4 06:24:09 CEST 2012 I agree that the authors should give themselves credit or refer to the original source. Perhaps make a topic search box in the bottom of the document pointing to login managers to simplify users' efforts, such as {{topic>howtos +software +login_manager&nodate&desc&sort&table&tags}} — Matthew Fillpot 2012/09/03 23:12 Looks like this article was originally created by kookiemonster. — V. T. Eric Layton 2012/09/04 10:50 That's true. Will therefore correct this soon. KookieMonster
http://docs.slackware.com/talk:slackware:xfce
2019-04-18T17:04:08
CC-MAIN-2019-18
1555578517745.15
[array(['https://docs.slackware.com/lib/plugins/bookcreator/images/add.png', None], dtype=object) array(['https://docs.slackware.com/lib/plugins/bookcreator/images/del.png', None], dtype=object) ]
docs.slackware.com
connect connect Connect to a jmx-manager either directly or via a locator. If you are connecting via a locator, and a jmx-manager does not already exist, the locator starts one. gfsh connects as a discovery client to the locator service and asks where the JMX Manager is. The locator knows when there is no member currently configured as the JMX manager and simply starts up the JMX manager service within itself. gfsh connects as a JMX client to locator JMX RMI port. You can also connect to a remote locator using the HTTP protocol, as illustrated by the second example below. Availability: Offline. You will receive a notification "Already connected to: host[port]" if you are already connected. connect [--locator=value] [--jmx-manager=value] [--use-http(=value)?] [--url=value] [--user=value][--password=value] [--key-store=value] [--key-store-password=value] [--trust-store=value] [--trust-store-password=value] [--ciphers=value] [--protocols=value] [--security-properties-file=value] [--use-ssl(=value)?] Example Commands: gfsh>connect gfsh>connect Connecting to Locator at [host=localhost, port=10334] .. Connecting to Manager at [host=GemFireStymon, port=1099] .. Successfully connected to: [host=GemFireStymon, port=1099]}."
http://gemfire82.docs.pivotal.io/docs-gemfire/latest/tools_modules/gfsh/command-pages/connect.html
2019-04-18T16:30:31
CC-MAIN-2019-18
1555578517745.15
[]
gemfire82.docs.pivotal.io
Searching data / Accessing data tables / Run a search using a finder / Use a custom finder / Assign a custom finder to a roleDownload as PDF. As the Admin role already has permission to access all finders and all data tables, you do not have the option to assign it a custom finder. Go to Administration → Roles. The Roles Management window opens. Select the role from the lateral navigation tabs. Select the custom finder in the Finders drop-down list. Note that the Finders drop-down list will not appear if the role has not permission to use.
https://docs.devo.com/confluence/ndt/searching-data/accessing-data-tables/run-a-search-using-a-finder/use-a-custom-finder/assign-a-custom-finder-to-a-role
2019-04-18T16:17:39
CC-MAIN-2019-18
1555578517745.15
[]
docs.devo.com
Energy sector backs electric vehicle drive A new report from Energy UK has today highlighted how the whole country can benefit from the move to electric vehicles (EVs) with a call to accelerate the infrastructure and support which will allow the roll out to go further and faster. With the government planning to ban the sale of petrol and diesel cars by 2040, the Electric Vehicle Revolution report says that the UK must now speed up its progress to a future where cleaner and more efficient transport can transform air quality, boost manufacturing and even contribute to meeting energy demand. EVs can point the way to a future where greater flexibility and new technology transforms the way consumers use energy. With over 105,000 EVs now on the UK roads in August 2017, approximately 600 million UK vehicle miles per year are now powered by electricity. EVs are cleaner than ever before – now emitting around half of the C02 of the cleanest non-electric cars. The report - launched at an Energy UK event this morning featuring speakers from E.ON, the Office for Low Emission Vehicles (OLEV) and EV charging point provider Pod Point - is the first of a series from Energy UK on electric vehicles. It sets out findings from Energy UK’s Electric Vehicle Working Group on how greater collaboration across the energy, automotive and technology industries are essential to the rapid expansion of EVs and makes recommendations on areas where support and direction from government are particularly required: - A regulatory framework which provides certainty for future investment and supports and incentivises the development of charging infrastructure through the forthcoming Electric and Autonomous Vehicle Bill and Clean Growth Plan. - More support for innovation and the ability to share usage data to assist with future infrastructure planning. - Developing smart charging arrangements to manage demand through, for example, time of use tariffs. - Backing solutions which ensure benefits, ease of use and freedom for EV owners., which includes representatives from British Gas, Ecotricity, E.ON, EDF Energy, ESB, Haven Power, National Grid, Npower, OVO Energy, Scottish Power and SSE, will be producing a series of reports over the next year to highlight how electric vehicles can be successfully integrated into the energy system. Its next report to be published in late 2017 will look in more detail at the design characteristics of smarter, more integrated power grids and how to mitigate the demand risk caused by electric vehicles. ENDS Note to editors
https://docs.energy-uk.org.uk/media-and-campaigns/press-releases/370-2017/6281-energy-sector-backs-electric-vehicle-drive.html
2019-04-18T16:17:18
CC-MAIN-2019-18
1555578517745.15
[]
docs.energy-uk.org.uk
-db = tigase-custom --auth-db-uri = jdbc:mysql://localhost/drupal?user=user&password=passwd That’s it. The connector loads correctly and starts working using predefined, default list of queries. In most cases you also might want to define your own queries in the configuration file. The shortest possible description is the following example of the content from the init.properties file: # This query is used to check connection to the database, whether it is still alive or not basic-conf/auth-repo-params/conn-valid-query=select 1 # This is database initialization query, normally we do not use it, especially in # clustered environment basic-conf/auth-repo-params. # The Tigase checks whether the JID returned from the query matches # JID passed as a parameter. If they match, the authentication is successful. basic-conf/auth-repo-params/user-login-query={ call TigUserLoginPlainPw(?, ?) } # Below query returns number of user accounts in the database, this is mainly used # for the server metrics and monitoring components. basic-conf/auth-repo-params/users-count-query={ call TigAllUsersCount() } # Below query is used to add a new user account to the database basic-conf/auth-repo-params/add-user-query={ call TigAddUserPlainPw(?, ?) } # Below query is used to remove existing account with all user's data from the database basic-conf/auth-repo-params. basic-conf/auth-repo-params/get-password-query=select user_pw from tig_users where user_id = ? # Below query is used for user password update in case user decides to change his password basic-conf/auth-repo-params/update-password-query=update tig_users set user_pw = ? where user_id = ? # Below query is called on user logout event. Usually we use a stored procedure which # records user logout time and marks user as offline in the database basic-conf/auth-repo-params/user-logout-query=update tig_users, set online_status = online_status - 1 where user_id = ? # This is configuration setting to specify what non-sasl authentication mechanisms # expose to the client basic-conf/auth-repo-params/non-sasl-mechs=password,digest # This is configuration setting to specify what sasl authentication mechanisms expose to the client basic-conf/auth-repo-params. The first example shows how to put a stored procedure as a query with 2 required parameters. add-user-query={ call TigAddUserPlainPw(?, ?) } The same query with plain SQL parameters instead: add-user-query=insert into users (user_id, password) values (?, ?) The order of the query arguments is important and must be exactly as described in specification for each parameter. 'conn-valid-query' - Query executing periodically to ensure active connection with the database. Takes no arguments. Example query: 'select 1' 'init-db-query' - Database initialization query which is run after the server is started. Takes no arguments. Example query: 'update tig_users set online_status = 0' 'add-user-query' - Query adding a new user to the database. Takes 2 arguments: (user_id (JID), password) Example query: 'insert into tig_users (user_id, user_pw) values (?, ?)' 'del-user-query' - Removes a user from the database. Takes 1 argument: (user_id (JID)) Example query: 'delete from tig_users where user_id = ?' 'get-password-query' - Retrieves user password from the database for given user_id (JID). Takes 1 argument: (user_id (JID)) Example query: 'select user_pw from tig_users where user_id = ?' 'update-password-query' - Updates (changes) password for a given user_id (JID). Takes 2 arguments: (password, user_id (JID)) Example query: 'update tig_users set user_pw = ? where user_id = ?' 'user-login-query' - Performs user login. Normally used when there is a special SP used for this purpose. This is an alternative way to a method requiring retrieving user password. Therefore at least one of those queries must be defined: user-login-query or get-password-query. If both queries are defined then user-login-query is used. Normally this method should be only used with plain text password authentication or sasl-plain. Tigase expects a result set with user_id to be returned from the query if login is successful and empty results set if the login is unsuccessful. Takes 2 arguments: (user_id (JID), password) Example query: 'select user_id from tig_users where (user_id = ?) AND (user_pw = ?)' 'user-logout-query' - This query is called when user logs out or disconnects. It can record that event in the database. Takes 1 argument: (user_id (JID)) Example query: 'update tig_users, set online_status = online_status - 1 where user_id = ?' - 'non-sasl-mechs' - Comma separated list of NON-SASL authentication mechanisms. Possible mechanisms are: password and digest. The digest mechanism can work only with get-password-query active and only when password are stored in plain text format in the database. 'sasl-mechs' - Comma separated list of SASL authentication mechanisms. Possible mechanisms are all mechanisms supported by Java implementation. The most common are: PLAIN, DIGEST-MD5, CRAM-MD5. "Non-PLAIN" mechanisms will work only with the get-password-query active and only when passwords are stored in plain text format in the database. Application: Tigase Server
https://docs.tigase.net/tigase-server/7.1.5/Administration_Guide/webhelp/custonAuthConnector.html
2019-04-18T16:46:14
CC-MAIN-2019-18
1555578517745.15
[]
docs.tigase.net
Two further features which can improve the visual realism obtained from Reflection ProbesA rendering component that captures a spherical view of its surroundings in all directions, rather like a camera. The captured image is then stored as a Cubemap that can be used by objects with reflective materials. More info See in Glossary are described below: Interreflections and Box Projection. You may have seen a situation where two mirrors are placed fairly close together and facing each other. Both mirrors reflect not only the mirror opposite but also the reflections produced by that mirror. The result is an endless progression of reflections between the two; reflection between objects like this are known as Interreflections. Reflection probes create by taking a snapshot of the view from their position. However, with a single snapshot, the view cannot show interreflections and so additional snapshots must be taken for each stage in the interreflection sequence. The number of times that a reflection can “bounce” back and forth between two objects is controlled in the Lighting window; go to Environment > Environment Reflections and edit the Bounces property. This is set globally for all probes, rather than individually for each probe. With a reflection bounce count of 1, reflective objects viewed by a probe are shown as black. With a count of 2, the first level of interreflection are visible, with a count of 3, the first two levels will be visible, and so on. Note that the reflection bounce count also equals the number of times the probe must be baked with a corresponding increase in the time required to complete the full bake. You should therefore set the count higher than one only when you know that reflective objects will be clearly visible in one or more probes. Normally, the reflection cubemap is assumed to be at an infinite distance from any given object. Different angles of the cubemap will be visible as the object turns but it is not possible for the object to move closer or farther away from the reflected surroundings. This often works very well for outdoor scenesA Scene contains the environments and menus of your game. Think of each unique Scene file as a unique level. In each Scene, you place your environments, obstacles, and decorations, essentially designing and building your game in pieces. More info See in Glossary but its limitations show in an indoor scene; the interior walls of a room are clearly not an infinite distance away and the reflection of a wall should get larger the closer the object gets to it. The Box Projection option allows you to create a reflection cubemap at a finite distance from the probe, thus allowing objects to show different-sized reflections according to their distance from the cubemap’s walls. The size of the surrounding cubemap is determined by the probes zone of effect, as determined by its Box Size property. For example, with a probe that reflects the interior of a room, you should set the size to match the dimensions of the room. Globally, you can enable Box Projection from Project SettingsA broad collection of settings which allow you to configure how Physics, Audio, Networking, Graphics, Input and many other areas of your Project behave. More info See in Glossary > Graphics > Tier Settings, but the option can be turned off from the Reflection Probe inspector for specific Reflection Probes when infinite projection is desired. Did you find this page useful? Please give it a rating:
https://docs.unity3d.com/Manual/AdvancedRefProbe.html
2019-04-18T16:29:53
CC-MAIN-2019-18
1555578517745.15
[]
docs.unity3d.com
#include <wx/grid.h> This event class contains information about various grid events. Notice that all grid event table macros are available in two versions: EVT_GRID_XXX and EVT_GRID_CMD_XXX. The only difference between the two is that the former doesn't allow to specify the grid window identifier and so takes a single parameter, the event handler, but is not suitable if there is more than one grid control in the window where the event table is used (as it would catch the events from all the grids). The version with CMD takes the id as first argument and the event handler as the second one and so can be used with multiple grids as well. Otherwise there are no difference between the two and only the versions without the id are documented below for brevity. The following event handler macros redirect the events to member function handlers 'func' with prototypes like: Event macros: wxEVT_GRID_CELL_CHANGINGevent type. wxEVT_GRID_CELL_CHANGEDevent type. wxEVT_GRID_CELL_LEFT_CLICKevent type. wxEVT_GRID_CELL_LEFT_DCLICKevent type. wxEVT_GRID_CELL_RIGHT_CLICKevent type. wxEVT_GRID_CELL_RIGHT_DCLICKevent type. wxEVT_GRID_EDITOR_HIDDENevent type. wxEVT_GRID_EDITOR_SHOWNevent type. wxEVT_GRID_LABEL_LEFT_CLICKevent type. wxEVT_GRID_LABEL_LEFT_DCLICKevent type. wxEVT_GRID_LABEL_RIGHT_CLICKevent type. wxEVT_GRID_LABEL_RIGHT_DCLICKevent type. wxEVT_GRID_SELECT_CELLevent type. wxEVT_GRID_COL_MOVEevent type. wxEVT_GRID_COL_SORTevent type. Default constructor. Constructor for initializing all event attributes. Returns true if the Alt key was down at the time of the event. Returns true if the Control key was down at the time of the event. Column at which the event occurred. Notice that for a wxEVT_GRID_SELECT_CELL event this column is the column of the newly selected cell while the previously selected cell can be retrieved using wxGrid::GetGridCursorCol(). Position in pixels at which the event occurred. Row at which the event occurred. Notice that for a wxEVT_GRID_SELECT_CELL event this row is the row of the newly selected cell while the previously selected cell can be retrieved using wxGrid::GetGridCursorRow(). Returns true if the Meta key was down at the time of the event. Returns true if the user is selecting grid cells, or false if deselecting. Returns true if the Shift key was down at the time of the event.
https://docs.wxwidgets.org/3.0/classwx_grid_event.html
2019-04-18T16:26:36
CC-MAIN-2019-18
1555578517745.15
[]
docs.wxwidgets.org
changes.mady.by.user jti user Saved on Jan 04, 2019 The JWST NIRISS optical path consists of a pick-off mirror, a collimator, pupil and filter wheels, and a camera that focuses the light onto the detector. Parent page: NIRISS Instrumentation NIRISS has an all-reflective optical design that consists of: The optical path is illustrated schematically in Figure 1. The optical assembly is attached to an aluminum optical bench, which is shared with the Fine Guidance Sensor. Three kinematic mounts, made of titanium, attach the optical assembly to the structure of the Integrated Science Instrument Module (ISIM). Figure 1. Optical path of NIRISS A schematic plot showing the optical layout of NIRISS. Source: Honeywell. Figure 2. Solid body rendering of NIRISS Solid body rendering of NIRISS. Key components of the optical path are labelled. The kinematic mounts fasten NIRISS to the ISIM structure. Source: Honeywell. Figure 3. NIRISS flight hardware A photograph of the NIRISS flight hardware, at the Goddard Space Flight Center in December 2014, take between the second and third cryovacuum test campaigns. Compare it with Figure 2, where key components are identified. The path between the POM (which is not visible in this image) and the collimator three-mirror assembly (TMA) is shielded by internal baffling. Source: Honeywell. The pick-off mirror (POM) is a flat mirror composed of an aluminum substrate coated with nickel plating. Light from the fine steering mirror of the JWST Optical Telescope Element (OTE) is focused onto the NIRISS POM and directed into the instrument. The POM is mounted on a movable stage that serves as the course focus mechanism (CFM) for NIRISS. It has four coronagraphic occulters engraved in its surface. These deep, cone-shaped holes in the nickel overcoat are remnants of the original tunable filter imager (TFI) configuration of the instrument. Although NIRISS does not have a coronagraphic mode, these occulters will nevertheless leave their imprint on all images of externally-illuminated sources. When projected onto the detector, the occulters appear as circular spots with diameters of 0.58", 0.75", 1.5", and 2.0" (approximately 9, 11. 23. and 31 pixels, respectively), with positions that depend slightly on the focus. The NIRISS detector is a single Hawaii 2RG sensor chip array with 2048 × 2048 pixels. It provides a field of view of 2.2" × 2.2" with a plate scale of approximately 0.065 "/pixel. JWST technical documents.
https://jwst-docs.stsci.edu/pages/diffpages.action?pageId=23136516&originalId=46013485
2019-04-18T16:33:01
CC-MAIN-2019-18
1555578517745.15
[]
jwst-docs.stsci.edu
Real User Monitoring (RUM) Configuration As technology shifts to hybrid environments, it becomes more and more important to monitor from within the client itself. Real user monitoring is typically "passive monitoring" – that is, it collects web traffic without having any effect on the operation of the site. This differs from synthetic monitoring with automated web browsers in that it relies on actual inbound and outbound web traffic to take measurements. Instart's service provides a RUM capability. If enabled for a customer, baseline page load time information is exposed through the customer portal on the Analytics > Load Times pane. All raw data collected by the Nanovisor is available via log delivery for in-depth analysis. Page load time information comes from the Navigation Timing API in modern (HTML 5-compliant) browsers. Navigation Timing is a JavaScript API that provides a simple way to get accurate and detailed timing statistics for page navigation and load events. When RUM is enabled, the Nanovisor collects the Navigation Timing metrics when it runs, and periodically encodes this data into a query string which it appends to a request for blank.gif from the service, thereby beaconing the most recent timing data back for collection. The service, upon receiving the request, then decodes the query string and puts the values into both the statistics database used by the Customer Portal and the access log line (see below) stored for the request. Here's an example query string with the Navigation Timing data (with line breaks added for readability): ?beacon={"id":"beacon","timing":{"loadEventEnd":1421885826360, "loadEventStart":1421885826222,"domComplete":1421885826211, "domContentLoadedEventEnd":1421885825814,"domContentLoadedEventStart":1421885825814, "domInteractive":1421885825814,"domLoading":1421885823994, "responseEnd":1421885825035,"responseStart":1421885823280, "requestStart":1421885823280,"secureConnectionStart":0,"connectEnd":1421885823280, "connectStart":1421885823280,"domainLookupEnd":1421885823280, "domainLookupStart":1421885823280,"fetchStart":1421885823280,"redirectEnd":0, "redirectStart":0,"unloadEventEnd":0,"unloadEventStart":0, "navigationStart":1421885823279},"nav":{"redirectCount":0,"type":0}} For the portal Load Times display, we calculate the following metrics: - Time to Page Load – the time between when the browser sends its request for the web page assets (that is, after the DNS lookups are finished and the TCP connection has been established) until the load event occurs. This is simply the difference between the time of the loadEventStart event and the requestStart event. - Time to Interactive – the time between when the browser sends its request for the web page assets (that is, after the DNS lookups are finished and the TCP connection has been established) until the page becomes responsive to user interactions. This is simply the difference between the time of the domInteractive event and the requestStart event. Configuration To enable RUM in the property configuration, add a monitoring block in an action block. The nv block with target set to auto or rum is also required, and a extra_flags block: "nv": { "injection": true, "release": "latest", "target": "auto", "client": { "extra_flags": "{\"disableQuerySelectorInterception\" :true, 'rumDataConfigKey':'/instartlogic/clientdatacollector/getconfig/monitorprod.json','custName':'acmestores','propName':'anvilworld'}" }, "serve_from_parent_domain": true }, The value of rumDataConfigKey determines the config file for this customer. By default we use a common file ( monitorprod.json). If you want to use a customer-specific file, please contact Support. Important The use of the extra_flag parameters is required for the period of time that the RUM v1 and v2 data pipelines are both active. If extra_flags is not present, the v1 pipeline is used. Eventually, we will retire the v1 pipeline and the need for the extra_flags parameters. There is also another parameter that can be specified in the monitoring block – max_error_beacons, which specifies the maximum number of JavaScript errors that will be returned from the client. By default, this is 0.
https://docs.instart.com/apis/property-configuration-api-guide/real-user-monitoring-(rum)-configuration/
2019-04-18T16:25:10
CC-MAIN-2019-18
1555578517745.15
[]
docs.instart.com
Consistency levels in Azure Cosmos DB Distributed databases that rely on replication for high availability, low latency, or both, make the fundamental tradeoff between the read consistency vs. availability, latency, and throughput. Most commercially available distributed databases ask developers to choose between the two extreme consistency models: strong consistency and eventual consistency. The linearizability or the strong consistency model is the gold standard of data programmability. But it adds a price of higher latency (in steady state) and reduced availability (during failures). On the other hand, eventual consistency offers higher availability and better performance, but makes it hard to program applications. Azure Cosmos DB approaches data consistency as a spectrum of choices instead of two extremes. Strong consistency and eventual consistency are at the ends of the spectrum, but there are many consistency choices along the spectrum. Developers can use these options to make precise choices and granular tradeoffs with respect to high availability and performance. With Azure Cosmos DB, developers can choose from five well-defined consistency models on the consistency spectrum. From strongest to more relaxed, the models include strong, bounded staleness, session, consistent prefix, and eventual consistency. The models are well-defined and intuitive and can be used for specific real-world scenarios. Each model provides availability and performance tradeoffs and is backed by the SLAs.. Scope of the read consistency Read consistency applies to a single read operation scoped within a partition-key range or. Guarantees associated with consistency levels The comprehensive SLAs provided by Azure Cosmos DB guarantee that 100 percent of read requests meet the consistency guarantee for any consistency level you choose. A read request meets the consistency SLA if all the consistency guarantees associated with the consistency level are satisfied. The precise definitions of the five consistency levels in Azure Cosmos DB using the TLA+ specification language are provided in the azure-cosmos-tla GitHub repo. The semantics of the five consistency levels are described here: Strong: Strong consistency offers a linearizability guarantee. The reads are guaranteed to return the most recent committed version of an item. A client never sees an uncommitted or partial write. Users are always guaranteed to read the latest committed write. Bounded staleness: The reads are guaranteed to honor the consistent-prefix guarantee. The reads might lag behind writes by at most "K" versions (i.e., "updates") of an item or by "T" time interval. In other words, when you choose bounded staleness, the "staleness" can be configured in two ways: - The number of versions (K) of the item - The time interval (T) by which the reads might lag behind the writes Bounded staleness offers total global order except within the "staleness window." The monotonic read guarantees exist within a region both inside and outside the staleness window. Strong consistency has the same semantics as the one offered by bounded staleness. The staleness window is equal to zero. Bounded staleness is also referred to as time-delayed linearizability. When a client performs read operations within a region that accepts writes, the guarantees provided by bounded staleness consistency are identical to those guarantees by the strong consistency. Session: The reads are guaranteed to honor the consistent-prefix (assuming a single “writer” session), monotonic reads, monotonic writes, read-your-writes, and write-follows-reads guarantees. Session consistency is scoped to a client session. Consistent prefix: Updates that are returned contain some prefix of all the updates, with no gaps. Consistent prefix consistency level guarantees that reads never see out-of-order writes. Eventual: There's no ordering guarantee for reads. In the absence of any further writes, the replicas eventually converge. Consistency levels explained through baseball Let's take a baseball game scenario as an example. Imagine a sequence of writes that represent the score from a baseball game. The inning-by-inning line score is described in the Replicated data consistency through baseball paper. This hypothetical baseball game is currently in the middle of the seventh inning. It's the seventh-inning stretch. The visitors are behind with a score of 2 to 5 as shown below: An Azure Cosmos container holds the run totals for the visitors and home teams. While the game is in progress, different read guarantees might result in clients reading different scores. The following table lists the complete set of scores that might be returned by reading the visitors' and home scores with each of the five consistency guarantees. The visitors' score is listed first. Different possible return values are separated by commas. Additional reading To learn more about consistency concepts, read the following articles: - High-level TLA+ specifications for the five consistency levels offered by Azure Cosmos DB - Replicated Data Consistency Explained Through Baseball (video) by Doug Terry - Replicated Data Consistency Explained Through Baseball (whitepaper) by Doug Terry - Session guarantees for weakly consistent replicated data - Consistency Tradeoffs in Modern Distributed Database Systems Design: CAP is Only Part of the Story - Probabilistic Bounded Staleness (PBS) for Practical Partial Quorums - Eventually Consistent - Revisited Next steps To learn more about consistency levels in Azure Cosmos DB, read the following articles: Feedback Send feedback about:
https://docs.microsoft.com/en-ca/azure/cosmos-db/consistency-levels
2019-04-18T16:31:04
CC-MAIN-2019-18
1555578517745.15
[array(['media/consistency-levels/five-consistency-levels.png', 'Consistency as a spectrum'], dtype=object) ]
docs.microsoft.com
Tasks for builds and releases Azure Pipelines | Azure DevOps Server 2019 | TFS 2018 | TFS 2017 | TFS 2015 Note In Microsoft Team Foundation Server (TFS) 2018 and previous versions, build and release pipelines are called definitions, service connections are called service endpoints, stages are called environments, and jobs are called phases. A task is the building block for defining automation in a build pipeline, or in a stage of a release pipeline. A task is simply a packaged script or procedure that has been abstracted with a set of inputs. When you add a task to your build or release pipeline, it may also add a set of demands to the pipeline. a fully-qualified name for the custom task to avoid this risk: steps: - task: myPublisherId.myExtensionId.myTaskName@1 Task versions Tasks are versioned, and you must specify the major version of the task used in your pipeline. pipeline and manually change to the new major version. The build or release log will include an alert that a new major version is available.. Control options are available as keys on the task section. - task: string # reference to a task and version, e.g. "VSBuild@1" condition: expression # see below continueOnError: boolean # 'true' if future steps should run even if this step fails; defaults to 'false' enabled: boolean # whether or not to run this step; defaults to 'true' timeoutInMinutes: number # how long to wait before timing out the task The timeout period begins when the task starts running. It does not include the time the task is queued or is waiting for an agent. Note For the full schema, see YAML schema for task. Conditions Only when all previous tasks have succeeded Even if a previous task has failed, unless the build or release was canceled Even if a previous task has failed, even if the build was canceled Only when a previous task has failed - Custom conditions which are composed of expressions YAML pipelines aren't available in TFS. Build tool installers (Azure Pipelines) Azure Pipelines preview feature To use this capability you must be working on Azure Pipelines and enable the Task tool installers preview feature. Tip Want a visual walkthrough? See our April 19 news release.. Related topics Help and support - See our troubleshooting page. - Report any problems on Developer Community, get advice on Stack Overflow, and get support via our Support page. Feedback Send feedback about:
https://docs.microsoft.com/en-us/azure/devops/pipelines/process/tasks?view=azure-devops
2019-04-18T16:38:39
CC-MAIN-2019-18
1555578517745.15
[]
docs.microsoft.com
Recordset Object (ADO) Represents the entire set of records from a base table or the results of an executed command. At any time, the Recordset object refers to only a single record within the set as the current record. Remarks You use Recordset objects to manipulate data from a provider. When you use ADO, you manipulate data almost entirely using Recordset objects. All Recordset objects consist of records (rows) and fields (columns). Depending on the functionality supported by the provider, some Recordset methods or properties may not be available. ADODB.Recordset is the ProgID that should be used to create a Recordset object. Existing applications that reference the outdated ADOR.Recordset ProgID will continue to work without recompiling, but new development should reference ADODB.Recordset. There are four different cursor types defined in ADO: Dynamic cursor Allows you to view additions, changes, and deletions by other users; allows all types of movement through the Recordset that doesn't rely on bookmarks; and allows bookmarks if the provider supports them. Keyset cursor Behaves like a dynamic cursor, except that it prevents you from seeing records that other users add, and prevents access to records that other users delete. Data changes by other users will still be visible. It always supports bookmarks and therefore allows all types of movement through the Recordset. Static cursor Provides a static copy of a set of records for you to use to find data or generate reports; always allows bookmarks and therefore allows all types of movement through the Recordset. Additions, changes, or deletions by other users will not be visible. This is the only type of cursor allowed when you open a client-side Recordset object. Forward-only cursor Allows you to only scroll forward through the Recordset. Additions, changes, or deletions by other users will not be visible. This improves performance in situations where you need to make only a single pass through a Recordset. Set the CursorType property prior to opening the Recordset to choose the cursor type, or pass a CursorType argument with the Open method. Some providers don't support all cursor types. Check the documentation for the provider. If you don't specify a cursor type, ADO opens a forward-only cursor by default. If the CursorLocation property is set to adUseClient to open a Recordset, the UnderlyingValue property on Field objects is not available in the returned Recordset object. When used with some providers (such as the Microsoft ODBC Provider for OLE DB in conjunction with Microsoft SQL Server), you can create Recordset objects independently of a previously defined Connection object by passing a connection string with the Open method. ADO still creates a Connection object, but it doesn't assign that object to an object variable. However, if you are opening multiple Recordset objects over the same connection, you should explicitly create and open a Connection object; this assigns the Connection object to an object variable. If you do not use this object variable when opening your Recordset objects, ADO creates a new Connection object for each new Recordset, even if you pass the same connection string. You can create as many Recordset objects as needed. When you open a Recordset, the current record is positioned to the first record (if any) and the BOF and EOF properties are set to False. If there are no records, the BOF and EOF property settings are True. You can use the MoveFirst, MoveLast, MoveNext, and MovePrevious methods; the Move method; and the AbsolutePosition, AbsolutePage, and Filter properties to reposition the current record, assuming the provider supports the relevant functionality. Forward-only Recordset objects support only the MoveNext method. When you use the Move methods to visit each record (or enumerate the Recordset), you can use the BOF and EOF properties to determine if you've moved beyond the beginning or end of the Recordset. Before using any functionality of a Recordset object, you must call the Supports method on the object to verify that the functionality is supported or available. You must not use the functionality when the Supports method returns false. For example, you can use the MovePrevious method only if Recordset.Supports(adMovePrevious) returns True. Otherwise, you will get an error, because the Recordset object might have been closed and the functionality rendered unavailable on the instance. If a feature you are interested in is not supported, Supports will return false as well. In this case, you should avoid calling the corresponding property or method on the Recordset object. Recordset objects can support two types of updating: immediate and batched. In immediate updating, all changes to data are written immediately to the underlying data source once you call the Update method. You can also pass arrays of values as parameters with the AddNew and Update methods and simultaneously update several fields in a record. If a provider supports batch updating, you can have the provider cache changes to more than one record and then transmit them in a single call to the database with the UpdateBatch method. This applies to changes made with the AddNew, Update, and Delete methods. After you call the UpdateBatch method, you can use the Status property to check for any data conflicts in order to resolve them. Note To execute a query without using a Command object, pass a query string to the Open method of a Recordset object. However, a Command object is required when you want to persist the command text and re-execute it, or use query parameters. The Mode property governs access permissions. The Fields collection is the default member of the Recordset object. As a result, the following two code statements are equivalent. Debug.Print objRs.Fields.Item(0) ' Both statements print Debug.Print objRs(0) ' the Value of Item(0). When a Recordset object is passed across processes, only the rowset values are marshalled, and the properties of the Recordset object are ignored. During unmarshalling, the rowset is unpacked into a newly created Recordset object, which also sets its properties to the default values. The Recordset object is safe for scripting. This section contains the following topic. See Also Connection Object (ADO) Fields Collection (ADO) Properties Collection (ADO) Appendix A: Providers
https://docs.microsoft.com/en-us/sql/ado/reference/ado-api/recordset-object-ado?view=sql-server-2017
2019-04-18T17:22:10
CC-MAIN-2019-18
1555578517745.15
[]
docs.microsoft.com
Contents IT Service Management Previous Topic Next Topic Request approvals Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Request approvals Approving a request in an SM application means that the request is ready for task creation and assignment. When a request is sent to a user with the [SM application]_approver_user role, the approver has several choices. If you select Approval is required for new requests in the applications Configuration screen, a newly created request automatically moves to the Awaiting Approval state. Otherwise, the request moves to the next configured state. Table 1. Request approval states Approval Choice Description Approved The request is approved. Rejected The request is not qualified and it is moved to the canceled state. Also, the following work note is added to the request: The [SM application] request is rejected. More information required The request does not contain enough information. It reverts to the Draft state and the following work note is added to the request: The [SM application] request needs more information for further approval. Duplicate The request is no longer required, because another request has already performed the work. The request is moved to the Cancelled state and the following work note is added to the request: This is a duplicate [SM application] request. On this page Send Feedback Previous Topic Next Topic
https://docs.servicenow.com/bundle/jakarta-it-service-management/page/product/planning-and-policy/concept/c_RequestApprovals.html
2019-04-18T17:16:42
CC-MAIN-2019-18
1555578517745.15
[]
docs.servicenow.com
User Guide Local Navigation Search This Document I can't send messages Try the following actions: - Verify that your BlackBerry® device is connected to the wireless network. - If the menu item for sending a message doesn't appear, verify that you've added an email address, a PIN, or a phone number for your contact. -. On the Home screen or in a folder, click the Options icon. Click Device > Advanced System Settings > Host Routing Table. Press the key > Register Now. - Generate an encryption key. - Verify that data service is turned on. - Resend the message. Related tasks Related reference Next topic: I'm not receiving messages Previous topic: Troubleshooting: Messages Was this information helpful? Send us your comments.
http://docs.blackberry.com/en/smartphone_users/deliverables/21510/I_cannot_send_messages_60_1055580_11.jsp
2013-12-04T23:54:52
CC-MAIN-2013-48
1386163037851
[]
docs.blackberry.com
Setting up and managing synthetic transaction monitoring After installing the App Visibility server components that are required for monitoring synthetic transactions and Installing a Transaction Execution Adapter Agent ,. - Scripts—From the TEA Agents, scripts run sequences of instructions that simulate user transactions. App Visibility comes with the six prerecorded scripts. You can use an external scripting tool (Silk Performer) to create more scripts. For more details see the Silk Performer documentation . - Execution Plans—An Execution Plan is a wrapper for a script. Through an Execution Plan, you specify the configuration for the script (including custom attributes), locations on which the script runs, run schedules, and blackout periods. Every Execution Plan is associated with an application. You can modify or override some of the settings in the Execution Plan definition. This section presents the following topics: - Viewing an application's synthetic settings - Using scripts to simulate end-user transactions - Preparing Silk Test script execution for synthetic transaction monitoring - Editing an application's synthetic settings - Managing synthetic metric rules - Daylight saving time and blackout periods - Application and Execution Plan status - Setting execution log retention levels - Reclassifying Synthetic Monitor Execution errors and Accuracy errors as Availability errors - Converting from Monitoring Policies to Synthetic Metric Rules Was this page helpful? Yes No Submitting... Thank you
https://docs.bmc.com/docs/applicationmanagement/110/setting-up-and-managing-synthetic-transaction-monitoring-721191623.html
2020-01-17T17:38:02
CC-MAIN-2020-05
1579250589861.0
[]
docs.bmc.com
Namespace: DevExpress.Spreadsheet Assembly: DevExpress.Spreadsheet.v19.2.Core.dll All chart sheets in a workbook are stored in the ChartSheetCollection collection, returned by the Workbook.ChartSheets property. An individual ChartSheet object can be accessed by its name or index in the collection using the ChartSheetCollection.Item property. The ChartSheet object is also a member of the SheetCollection collection, which contains all sheets in a workbook (both chart sheets and worksheets). You can access an individual sheet by its name or index in the collection. Use the ChartSheetCollection.Add or ChartSheetCollection.Insert method to create a new chart sheet. To move an existing chart to a chart sheet, use the chart's ChartObject.MoveToNewChartSheet method. The ChartSheet.Chart property allows you to get access to a chart located on a chart sheet and specify its settings (select chart data, change the chart type and adjust its appearance). The ChartSheet.ActiveView property returns the ChartSheetView object that specifies display and print settings for a chart sheet. Additional printing options are accessible from the ChartSheet.PrintOptions property. For details on how to manage chart sheets in a workbook, refer to the Charts example section.
https://docs.devexpress.com/OfficeFileAPI/DevExpress.Spreadsheet.ChartSheet
2020-01-17T16:26:47
CC-MAIN-2020-05
1579250589861.0
[]
docs.devexpress.com
Security. Authentication - The server or client makes sure that it communicates with an authorized entity. When you enable TLS for a database or CRDB, encryption is enforced on either all communications or only communications between clusters, and RS sends its certificate to clusters and clients for authentication to the database or CRDB. Integrating. Securing. User Login Lockout for Security Compliance To help reduce the risk of a brute force attacks on Redis Enterprise Software (RS), RS includes user login restrictions. You can customize the restrictions to align with the security policy of your organization. Every failed login is shown in the logs. Note - Customers, such as large organizations, that use LDAP to manage external authentication must set these restrictions in the LDAP service. User Login Lockout The parameters for the user login lockout are:
https://docs.redislabs.com/latest/rs/administering/designing-production/security/
2020-01-17T17:03:51
CC-MAIN-2020-05
1579250589861.0
[]
docs.redislabs.com
Disconnecting a Client from a License Server You can stop a client machine from connecting to a license server by deleting its client configuration file. This can only be done manually. - Open an Explorer window. - Browse to C:\flexlm. - Delete the license.dat file. - Open a Terminal. In the terminal, type in the following command: $ sudo rm /usr/local/flexlm/licenses/license.dat
https://docs.toonboom.com/help/activation/activation/disconnect-client-server.html
2020-01-17T15:35:50
CC-MAIN-2020-05
1579250589861.0
[array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) ]
docs.toonboom.com
django-sitetree documentation¶). Sitetree also allows you to define dynamic trees in your code instead of Admin interface. And even more: you can combine those two types of trees in more sophisticated ways. Requirements¶ - Python 3.5+ - Django 1.8+ - Auth Django contrib package - Admin site Django contrib package (optional) Table of Contents¶ - Getting started - SiteTree template tags - Internationalization - Shipping sitetrees with your apps - Management commands - Notes on built-in templates - Advanced SiteTree tags - Tree handler customization - Overriding SiteTree Admin representation - SiteTree Forms and Fields - SiteTree Models - Performance notes - Thirdparty addons - Thirdparty applications support See also¶ If the application is not what you want for site navigation, you might be interested in considering the other choices —
https://django-sitetree.readthedocs.io/en/latest/
2020-01-17T17:18:56
CC-MAIN-2020-05
1579250589861.0
[]
django-sitetree.readthedocs.io
Journey Manager (JM) Previously known as Transact Manager (TM). | System Manager / DevOps | v5.1 & Higher This feature is related to v5.1 and higher. Journey Manager comes with services that are software modules providing specific business-related functionality on demand. You can use services to implement the typical client onboarding solutions, such as: The example of a customer acquisition, onboarding and origination process using Manager's services is shown below: Manager defines several categories to distinguish between services: A service category can be one of the following: A service has a service type defining functionality it implements, for example, virus scanning. There can be multiple services per service type. However, only one of them is set as the default service for each type. Services can be defined for an organization, so they are not visible to other organizations and their forms. Services can have various parameters, which you can customize to control their run-time behavior. Some services require a service connection, which defines a connection to an external service, for example, the AWS SQS service connection. You can migrate existing services from one environment to another to improve your development process. You can easily configure services to be used by a form. However, when building a TM form version, you have the option of enabling or disabling Transact Functions, which limits the range of services available to the form. The comparison is illustrated below. Next, learn how to view all
https://docs.avoka.com/Services/ServicesOverview.htm
2020-01-17T16:35:39
CC-MAIN-2020-05
1579250589861.0
[]
docs.avoka.com
We are the leading provider of data and analytics in Europe. Our APIs allow you to easily access and integrate our data and insight into your own systems. The Bisnode Estonia API is organized around REST. You can use this API to access all our API endpoints listed below. All requests should be made over SSL. All request and response bodies, including errors, are encoded in JSON. Most requests to Bisnode Estonia API server require 'access token'(API key) to authenticate requests. To get 'access token'(API key) please contact us: 🖂 [email protected] 🕿 +372 6414 902 We don't allow multiple identical synchronous (parallel) requests, otherwise you will get "Parallel request blocked" 423 error. For example, you can't send two searchCompany requests at one time, you will have to wait for the first searchCompany request to complete. We offer customized API request based on data you need. Please contact us and let’s discuss how we can make the best solution for you: 🖂 [email protected] or 🕿 +372 6414 902 Our API returns standard HTTP success or error status codes. For errors, we will also include extra information about what went wrong encoded in the response as JSON. The various HTTP status codes we might return are listed below. Example error response. 400: Bad Request{"error": "error","code": 400,"message": "Only EST and LVA countries are supported in this request"}
https://docs.bisnode.ee/
2020-01-17T17:27:16
CC-MAIN-2020-05
1579250589861.0
[]
docs.bisnode.ee
Installation Setup ExpressionEngine® 3.x - Unzip the download. - Upload the system/user/addons/ce_img/ folder to the /system/user/addons/ directory of your ExpressionEngine site. - In the ExpressionEngine® control panel, enable the add-on by clicking on the Developer Tools dropdown menu icon -> Add-On Manager. Locate the "Third party Add-Ons" section, and then click the "Install" button for the "CE Image" table row. - Create the directory /images/made and make sure it has permissions set to 0775 (this is the default, but you may be able to get away with a lower permission setting depending on your server). You can change the cache directory globally and/or override it anytime via the cache_dir parameter). If you cache to other directories, you will need to ensure that they have write permissions. - Create the directory /images/remote and make sure it has permissions set to 0775 (this is the default, but you can change this globally and/or override it anytime via the remote_dir parameter). If you download your remote images to other directories, you will need to make sure that they have write permission as well. For Nexcess hosting: The PHP document root does not reflect the actual server path. You can add the following to your EE /system/expressionengine/config/config.php file to correct the issue: $config['ce_image_document_root'] = '/chroot' . $_SERVER['DOCUMENT_ROOT']; Optional Components AWS Integration - If you would like to integrate CE Image with Amazon S3, simply follow these instructions: - Upload the system/user/addons/ce_img_aws/ folder to the * /system/user/addons/ directory. - In the ExpressionEngine® control panel, enable the add-on by clicking on the Developer Tools dropdown menu icon -> Add-On Manager. Locate the "Third party Add-Ons" section, and then click the "Install" button for the "CE Image - AWS" table row. - Add your AWS settings to your config. See Advanced Configuration for instructions on setting up your AWS configuration settings. Fonts - By default, only the default font (heros-bold.ttf) is included in the /system/expressionengine/third_party/ce_img/fonts directory. You can download 32 additional open source .ttf fonts for use in your site from here if you wish. These optional fonts can be used with the text= parameter for text watermarking. You can, of course, use other open source or commercial .ttf fonts (if their EULA permits) if you desire. MSM Setup If you will be sharing images across your Multiple Site Manager (MSM) domains, and will be calling them by URL, you will want to replace the domain part of the URL with its corresponding server path. This is desirable because it will prevent the plugin from unnecessarily downloading the source images from the other domains to your “remote cache” directory. It’s simply a find and replace setting: '^' => '/var/vhosts/example1.com/httpdocs/', '^' => '/var/vhosts/example1.com/httpdocs/example2/', '^' => '/var/vhosts/example3.com/httpdocs/example3/', '^' => '/var/hosts/example4.com/httpdocs/' ); The little ^ just means “starts with” in RegEx lingo, so in the example above, the items on the domains on the left are replaced with their corresponding server paths. You can learn more about this setting, in the Advanced Configuration and MSM Configuration Overrides sections below.
https://docs.causingeffect.com/expressionengine/ce-image/user-guide/installation.html
2020-01-17T15:25:54
CC-MAIN-2020-05
1579250589861.0
[]
docs.causingeffect.com
Overview This section covers the creation pipeline for moving your vegetation assets into CRYENGINE. It has been broken down into different sections, depending on the type of vegetation you want to create. There will be a dedicated category for detailing that pipeline. For more information on using the vegetation tool, please refer HERE. This section deals with the asset creation pipeline. The major categories are: - Grass
https://docs.cryengine.com/pages/viewpage.action?pageId=25537260
2020-01-17T15:25:48
CC-MAIN-2020-05
1579250589861.0
[]
docs.cryengine.com
Creating a printer friendly version of your MCMS postings. In Visual Studio.NET, open your MCMS application. Create a new User control and name it PrintFriendly.ascx. Drag a HyperLink control from the Web Forms toolbox. Set the Text property of the HyperLink control to "Print Friendly Version". Set the ID property of the HyperLink Control to "lnkPrintFriendly". Switch to your code view and add the following using directives using Microsoft.ContentManagement.Publishing; using Microsoft.ContentManagement.Common; In the Page_Load Event, add the following code: Posting curPosting = CmsHttpContext.Current.Posting; if(CmsHttpContext.Current.Mode == PublishingMode.Published) { lnkPrintFriendly.Visible = true; lnkPrintFriendly.NavigateUrl = "PrintFriendly.aspx?" + curPosting.Guid; lnkPrintFriendly.Target = "_Blank"; } else { lnkPrintFriendly.Visible = false; } Click Save.. In Visual Studio.NET, open your MCMS application. Create a new web form and name it PrintFriendly.aspx. Create a table that is 650 pixels wide and centered on the page. Drag a Label Control inside the table and name it "lblPostingDisplayName". This control will render the posting's Display Name property. Drag a Literal Control inside the table and name it "litPostingContent". This control will render the content of the placeholders. Your HTML Code should look something like this: <%@ Page language="c#" Codebehind="PrintFriendly.aspx.cs" AutoEventWireup="false" Inherits="CMSPrintFriendly.PrintFriendly" %> <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" > <HTML> <HEAD> <title>PrintFriendly</title> <meta name="GENERATOR" Content="Microsoft Visual Studio .NET 7.1"> <meta name="CODE_LANGUAGE" Content="C#"> <meta name="vs_defaultClientScript" content="JavaScript"> <meta name="vs_targetSchema" content=""> </HEAD> <body> <form id="frmPrintFriendly" method="post" runat="server"> <P> <TABLE align="center" id="tblContent" cellSpacing="1" cellPadding="1" width="650" border="0"> <TR> <TD> <P> <asp:Label</asp:Label></P> <P><FONT face="Verdana" size="2"> <asp:Literal</asp:Literal></FONT></P> </TD> </TR> </TABLE> </P> </form> </body> </HTML> Switch to your code view and add the following using directives: using Microsoft.ContentManagement.Publishing; using Microsoft.ContentManagement.Common; In the Page_Load Event, add the following code: try { // Retrieve the Posting Display Name string strGuid = Request.QueryString[0].ToString(); Posting curPosting = (Posting)CmsHttpContext.Current.Searches.GetByGuid(strGuid); lblPostingDisplayName.Text = curPosting.DisplayName; // Retrieve placeholder data for every placeholder (seperated with HTML breaks). PlaceholderCollection colPlaceholders = curPosting.Placeholders; foreach(Placeholder pH in colPlaceholders) { litPostingContent.Text = pH.Datasource.RawContent.ToString(); litPostingContent.Text += "<br><br>"; } } catch { // Generate a generic error message if it fails litPostingContent.Text = "Error: There was a problem obtaining the content for this page."; } Rebuild your solution. Implementing the Control The last thing to do will be to implement the control. - Open an existing MCMS template for your MCMS application. - Drag the PrintFriendly User Control on to your template file. - Save this template. At this point, you can now go view a posting (or create a new one) and click on the link which will generate a printer friendly version in a new browser window. This posting is provided "AS IS" with no warranties, and confers no rights.
https://docs.microsoft.com/en-us/archive/blogs/luke/creating-a-printer-friendly-version-of-your-mcms-postings-2
2020-01-17T17:47:39
CC-MAIN-2020-05
1579250589861.0
[]
docs.microsoft.com
A beginner's guide to Pexip Infinity If you're new to video conferencing, or just to Pexip Infinity, this section covers the basics of what you need to know about the Pexip Infinity solution. First, a bit of history How it used to be In the past, the use of video conferencing within businesses was usually restricted to a special AV unit installed in a dedicated meeting room, which had to be booked in advance. Companies had to install special servers called Multipoint Control Units (MCUs) to run their videoconferencing, and because these units were big and expensive, they would be installed in just a few central locations. Video also used up a lot of expensive bandwidth. This all meant that the use of videoconferencing was restricted. A new way of working Now that the use of video has become more pervasive at home and in business, and computing resources are faster and cheaper, users have come to expect instant access to video calls from their desktop. The Pexip Infinity solution enables organizations to provide universal access to videoconferencing. It replaces the old dedicated hardware MCUs with software that can be installed on standard servers and run as virtual machines, and its distributed architecture is designed to allow as many of these virtual machines as required to be spread around a range of locations, providing videoconferencing resources where and when it is required, reducing bandwidth usage across the organization. What's wrong with using what I have now? There are plenty of free video calling and videoconferencing solutions out there, and your organization might already be using some of them. But these systems aren't secure, and they are limited in the number of people who can connect into a single meeting. They also don't support connections from others not using the same solution, so you often can't connect with others outside your organization, or who want to call using a device such as a telephone. What to look for in a modern videoconferencing solution Lets any device talk to any other Most organizations have a mix of endpoints that are used to make video calls. Meeting participants might want to call from a dedicated meeting room equipped with a big-screen video endpoint; a desktop client such as Skype for Business; a web browser; or even a telephone. In the past, these systems weren't able to connect to each other. But Pexip Infinity can act as an interpreter for all these systems, allowing participants to use whatever device they prefer to call in to a meeting. We even offer our own desktop, mobile and web-based clients for those users who don't have access to traditional video devices. Virtual meeting rooms for everyone Pexip Infinity Virtual Meeting Rooms (VMRs) are always available and ready to be used. VMRs that aren't currently being used don't take up any resources, so you can create one for every person in your organization. Whenever someone wants to hold a meeting over video, they can just invite the participants into their VMR. Making direct calls to other people or other meeting platforms Pexip Infinity doesn't just provide VMRs – the Pexip Distributed Gateway feature also allows users to make direct calls to each other. This is particularly significant if the two people on the call are using different video devices or solutions that would not otherwise be able to connect to each other. It also lets you join meetings that are running on other systems such as Microsoft Teams or Google Hangouts Meet. Registering your device so it can be called If you have access to a video endpoint, you can usually use it to make outbound calls. However, if you want to be able to receive calls, you need to register your device and the address that can be used to find it, so that others can make calls to it. There are special systems that can handle registrations within an organization, but Pexip Infinity has this functionality built in. The advantages of software Latest features Because Pexip Infinity is simply software, you have immediate access to the latest releases and all the new features – for free. Just download the software and install it on your servers – it really is that simple. Add and remove resources as required As long as you have access to computing resources – either on-premises servers or cloud-based solutions such as AWS or Azure – you can create additional instances of Pexip Infinity to provide any additional capacity, as and when required, and in a matter of minutes. This can even happen automatically. Next steps - Choosing a deployment environment explains your options for deploying Pexip Infinity on your own servers or on a cloud service. - Our installation guides then provide full information about how to obtain and install the Pexip Infinity software on your chosen platform.
https://docs.pexip.com/admin/beginners_guide.htm
2020-01-17T17:42:55
CC-MAIN-2020-05
1579250589861.0
[]
docs.pexip.com
Attachments How to Increase Maximum Upload File Size? Directive upload_max_filesize in php.ini In the configuration file php.ini with upload_max_filesize directive set the maximum size of uploaded files to the server. If you have access to the php.ini file you can change this value: upload_max_filesize = 20M Note that PHP also has a maximum size of POST requests using the directive post_max_size which must be greater than or equal to the maximum size of uploads files: post_max_size = 20M Depending on the server configuration, the new values can take effect immediately after making changes, or have to restart the web server. Modify .htaccess If you do not have access to the PHP configuration file, you can try to set the values using the file .htaccess to add the following values: php_value upload_max_filesize 20M php_value post_max_size 20M Changes to files .htaccess usually take effect immediately after saving. Use The ini_set() PHP Function Directives upload_max_filesize and post_max_size can be changed by using the PHP function ini_set() in /config/server.php if allowed by the server configuration: ini_set( 'upload_max_size' , '20M' ); ini_set( 'post_max_size', '20M'); Other directives which may affect the upload of large files memory_limit, max_execution_time, max_input_time
https://docs.rukovoditel.net/index.php?p=10
2020-01-17T17:34:38
CC-MAIN-2020-05
1579250589861.0
[]
docs.rukovoditel.net
defines the behavior of a tooltip when the cursor is out of the target widget or HTML area By default, the tooltip disappears when the cursor is no longer over the target view or HTML area. This method is not for direct calls. You can redefine it to change the default logic.
https://docs.webix.com/api__link__ui.treemap_$tooltipout_other.html
2020-01-17T17:21:49
CC-MAIN-2020-05
1579250589861.0
[]
docs.webix.com
sub prompt Documentation for sub prompt assembled from the following types: language documentation Independent routines From Independent routines (Independent routines) sub prompt multi sub prompt()multi sub prompt() Prints $msg to $*OUT handle if $msg was provided, then gets a line of input from $*IN handle. By default, this is equivalent to printing $msg to STDOUT, reading a line from STDIN, removing the trailing new line, and returning the resultant string. As of Rakudo 2018.08, prompt will create allomorphs for numeric values, equivalent to calling val prompt. my = prompt "What's your name? ";say "Hi, $name! Nice to meet you!";my = prompt("Say your age (number)");my Int = ;my Str = ; In the code above, $age will be duck-typed to the allomorph IntStr if it's entered correctly as a number.
http://docs.p6c.org/routine/prompt
2020-01-17T16:35:47
CC-MAIN-2020-05
1579250589861.0
[]
docs.p6c.org
Review group and account balances in the financial statements This feature is only available with products on the CaseWareCloud SE platform. Available SE products include OnPoint PCR, PBC Requests and CaseWare ReviewCompTax. You can review group and account balances in the financial statements document to understand how the numbers are calculated. Information about rounding and balance discrepancies is marked in the tables to help you produce reliable, accurate and high-quality financial statements. Rounding) View rounding details in financial statements To view rounding details in a table row, select the asterisk (*) next to the row. A popup displays showing: - The total rounding differences for the current year and prior year. - The account where rounding differences apply for the current year and prior year. In this example, a total rounding difference of $3 is added to the balance of 1.2.120.500 Property, Plant and equipment - Accumulated depreciation and impairment for the current year. For the prior year, a total rounding difference of $2 is added to the balance of 1.1.000.050 Cash. Change the account where rounding differences apply You can change the account where rounding differences apply to have them added to a financial statements line item that’s frequently used in your market, for instance. To change the account where rounding difference are applied: Select the row with an asterisk (*). Select the account name at Current year. Select the desired account from the list. Select the account name at Prior year. Select the desired account from the list. Rounding differences for the current and prior years are now applied to the balances of the selected accounts respectively. If you select the balance for any of the accounts, the Account Details popup displays a highlighted line showing the rounding details. The Rounding Details popup opens. Tip: The asterisk (*) denotes that rounding differences apply to an account. If the row is a group, then it denotes that rounding differences apply to an account in this group. A list showing the trial balance groups and accounts displays. View balance discrepancies in financial statements Note: This feature is only available if it’s set up in your product. To help you reconcile account balances, table rows can be compared to each other in the financial statements. For example, to assist accountants in reconciling statement line items with totals in the notes section. Upon comparison, a stop sign icon ( ) is available at each row where balance discrepancies have been identified. Note that you cannot choose the rows that the application compares. They are predefined in your product. To view balance discrepancies, select the stop sign icon ( ). A popup displays showing: - The rows in other tables within the document that have been compared to the current row. - Balances that agree with the current row. - Balances that don’t agree with the current row. Tip: You can also select the View Row icon ( ) to go to the specific row of each listed account and view its data. Override balances in financial statements As you review the balances in the financial statements document, you might need to override trial balance values in the tables. For example, to modify a certain balance in the financial statements to reflect the exact balance that your clients had in their prior year financials. To override an account balance: Select the balance that you want to change. Enter the new amount. A dashed border displays to mark the cell as Balance Overridden. To override a group total: Select the group total that you want to change. Enter the new amount. If there is one account in the group, the difference automatically applies to this account. If there is more than one account in the group, a dialog displays showing the difference (between the new amount and the old amount). It also prompts you to select the account where you want the difference to apply. Note that if you select a cell with overridden balance, the Account Details popup displays a highlighted line showing the original balance for that cell (the trial balance amount). Tip: To revert to the original amount, select Revert ( ). In this article Recently viewed Stay Connected Subscribe to receive updates on the latest articles and news for CaseWare products. Your download will start immediately after you subscribe.No thanks, I just want the file.
https://docs.caseware.com/2019/WebApps/29/en/Engagements/Accounts-and-Analysis/Review-group-and-account-balances-in-the-financial-statements.htm
2020-01-17T15:39:50
CC-MAIN-2020-05
1579250589861.0
[array(['/documentation_files/2019/webapps/29/Content/en/Resources//CaseWare_Logos/casewarelogo.png', None], dtype=object) array(['/img/yt_icon_rgb.png', None], dtype=object)]
docs.caseware.com
Xamarin.Mac Extension Support In Xamarin.Mac 2.10 support was added for multiple macOS extension points: - Finder - Today Limitations and Known Issues The following are the limitations and know issues that can occur when developing extensions in Xamarin.Mac: - There is currently no debugging support in Visual Studio for Mac. All debugging will need to be done via NSLog and the Console. See the tips section below for details. - Extensions must be contained in a host application, which when run one time with register with the system. They must then be enabled in the Extension section of System Preferences. - Some extension crashes may destabilize the host application and cause strange behavior. In particular, Finder and the Today section of the Notification Center may become “jammed” and become unresponsive. This has been experienced in extension projects in Xcode as well, and currently appears unrelated to Xamarin.Mac. Often this can be seen in the system log (via Console, see Tips for details) printing repeated error messages. Restarting macOS appears to fix this. Tips The following tips can be helpful when working with extensions in Xamarin.Mac: As Xamarin.Mac currently does not support debugging extensions, the debugging experience will primarily depend on execution and printflike statements. However, extensions run in a sandbox process, thus Console.WriteLinewill not act as it does in other Xamarin.Mac applications. Invoking NSLogdirectly will output debugging messages to the System Log. Any uncaught exceptions will crash the extension process, providing only a small amount of useful information in the System Log. Wrapping troublesome code in a try/catch(Exception) block that NSLog’s before re-throwing may be useful. The System Log can be accessed from the Console app under Applications > Utilities: As noted above, running the extension host application will register it with the system. Deleting the application bundle with unregister it. If “stray” versions of an app's extensions are registered, use the following command to locate them (so they can be deleted): plugin kit -mv Walkthrough and Sample App Since the developer will create and work with Xamarin.Mac extensions in the same way as Xamarin.iOS extensions, please refer to our Introduction to Extensions documentation for more details. An example Xamarin.Mac project containing small, working samples of each extension type can be found here. Summary This article has taken a quick look at working with extensions in a Xamarin.Mac version 2.10 (and greater) app. Related Links Feedback
https://docs.microsoft.com/en-us/xamarin/mac/platform/extensions
2020-01-17T16:20:58
CC-MAIN-2020-05
1579250589861.0
[]
docs.microsoft.com
Maintenance Mode This mode is used to disconnect users from the app. It is recommended to enable this mode when updating the app or changing the app structure. If maintenance mode is enabled, only administrators can log on. Other users who have been authorized will automatically logoff from the application. The login page displays a message that you can customize. Also, to test changes in the app, in addition to administrators, you can allow certain users to log in.
https://docs.rukovoditel.net/index.php?p=11
2020-01-17T17:38:29
CC-MAIN-2020-05
1579250589861.0
[array(['img/1562326612_mmode.png', None], dtype=object)]
docs.rukovoditel.net
Writing reliable scripts Here are some tips for creating reliable input scripts: Environment variables Clear environment variables that can affect your script's operation. One environment variable that is likely to cause problems is the library path. The library path is most commonly known as LD_LIBRARY_PATH on Linux, Solaris, and FreeBSD. It is DYLD_LIBRARY_PATH on OS X, and LIBPATH on AIX. If you are running external python software or using other python interpreters, consider clearing PYTHONPATH. - Caution: Changing PYTHONPATH may affect other installations of python. On Windows platforms, the SPLUNK_HOME the Splunk platform's version of Python. In this case, you can copy the libraries to the same directory as the scripted input. To run a script using the version of Python available from Splunk Enterprise: $SPLUNK_HOME/bin/splunk cmd python <your_script>.py File paths in Python Be careful when specifying platform-specific paths and relative paths. Platform-specific paths When writing scripts in Python, avoid hard coding platform-specific file paths. Instead specify file paths that can be interpreted correctly on Windows, UNIX, and Mac platforms. For example, the following Python code launches try.py, which is in the bin directory of your app, and has been made cross-compatible with Python 2 and Python 3 using python-future.: from __future__ import print_function import os import subprocess # Edit directory names here if appropriate if os.name == 'nt': ## Full path to your Splunk installation splunk_home = 'C:\Program Files\Splunk' ## Full path to python executable python_bin = 'C:\Program Files (x86)\Python-2.7-32bit\python.exe' else: ## Full path to your Splunk installation # For some reason: #splunk_home = '/appl/opt/splunk_fwd/' # For a sensible OS: splunk_home = '/opt/splunk' ## Full path to python executable # For Mac OS X: #python_bin = '/Library/Frameworks/Python.framework/Versions/2.7/bin/python' # For a sensible filesystem: python_bin = '/usr/bin/python' try_script = os.path.join(splunk_home, 'etc', 'apps', 'your_app', 'bin', 'try.py') print(subprocess.Popen([python_bin, try_script], stdout=subprocess.PIPE).communicate()[0]) Relative paths Avoid using relative paths in scripts. Python scripts do not use the current directory when resolving relative paths. For example, on *nix platforms, relative paths are set relative to the root directory ( /). The following example shows how to locate the extract.conf file, which is in the same directory as the script: import os import os.path script_dirpath = os.path.dirname(os.path.join(os.getcwd(), __file__)) config_filepath = os.path.join(script_dirpath, 'extract.conf') Format script output Format the output of a script so Splunk software can easily parse the data. Also, consider formatting data so it is more human-readable as well. Use the Common Information Model. Timestamp formats Time stamp the beginning of an event. There are several options for timestamp formats: RFC-822, RFC-3339 These are standard timestamp formats for email headers and internet protocols. These formats provide an offset from GMT, and thus are unambiguous and more human-readable. RFC-822 and RFC-3339 formats can be handled with %z in a TIME_FORMAT setting. - RFC-822 Tue, 15 Feb 2011 14:11:01 -0800 - RFC-3339 2011-02-15 14:11:01-08:00 UTC UTC formatting may not be as human-readable as some of the other options. If the timestamp is epoch time, no configuration is necessary. Otherwise, requires a configuration in props.conf that declares the input as TZ=UTC. - UTC 2011-02-15T14:11:01-05:00 2011-02-15T14:11:01Z - UTC converted to epoch time 1297738860 exist in the data. - Field names are case sensitive. For example the field names "message" and "Message" represent different fields. Be consistent when naming fields. Write a setup screen to configure scripted inputs If you are packaging an app or add-on for distribution, consider creating a setup screen that allows users to interactively provide configuration settings for access to local scripted input resources. Include an input stanza for your script so setup.xml doesn't require a custom endpoint. See Create a setup page for a Splunk app on the Splunk developer portal. Refer to the *Nix and Windows apps for examples on using setup.xml pages to create a setup screen. These apps are available for download from Splunkbase. Save state across invocations of the script Scripts often need to checkpoint their work so subsequent invocations can pick up from where they left off. For example, save the last ID read from a database, mark the line and column read from a text file, or otherwise note the last input read. (See Example script that polls a database.) You can check point either the index or the script. When check pointing data, keep in mind that the following things are not tied together as a transaction: - Writing out checkpoint files - Fully writing data into the pipe between the script and splunkd - splunkd completely writing out the data into the index Thus, in the case of hard crashes, it's hard to know if the data the script has acquired has been properly indexed. Here are some of the choices you have: Search Splunk index One strategy is to have the scripted input search in the Splunk index to find the last relevant event. This is reasonable in an infrequently-launched script, such as one that is launched every 5 or 10 minutes, or at launch time for a script which launches once and stays running indefinitely. Maintain independent check point Because there is some delay between data being fed to the Splunk platform and the data becoming searchable, a frequently run scripted input must maintain its own checkpoint independent of the index. Choose a scenario If the script always believes its own checkpoint, data may not be indexed on splunkd or system crash. If the index search is believed, some data may be indexed multiple times on splunkd or system crash. You need to choose which scenario you best fits your needs. Accessing secured services Use proper security measures for scripts that need credentials to access secured resources. Here are a few suggestions on how to provide secure access. However, no method is foolproof, so think carefully about your use case and design secure access appropriately: - Restrict which users can access the app or add-on on disk. - Create and use credentials specific to the script, with the minimum permissions required to access the data. - Avoid putting literal passwords in scripts or passing the password as a command line argument, making it visible to all local processes with operating system access. - Use Splunk Enterprise to encrypt passwords. You can create an app set up page that allows users to enter passwords. See the setup page example with user credentials on the Splunk developer portal. The user can enter a password in plain text, which is stored in the credential stanza in apps.conf. Alternatively, you can specify a python script to securely provide access. - Caution: Splunk Enterprise assembles a secret using locally available random seeds to encrypt passwords stored in configuration files. This method provides modest security against disclosure of passwords from admins with local disk read capability. However, it is not an adequate protection for privileged accounts. Concurrency issues for scripted inputs Be careful scheduling two copies of a script running at any given time. Splunk Enterprise detects if another instance of the script is running, and does not launch a new instance if this is the case. For example, if you have a script scheduled to execute every 60 seconds, and a particular invocation takes 140 seconds, Splunk Enterprise detects this and does not launch a new instance until after the long-running instance completes. At times you may want to run multiple copies of a script, for example to poll independent databases. For these cases, design your scripts so they can handle multiple servers. Also, design your script so that multiple copies can exist (for example, use two app directories for the script). Alternatively, you could have separate scripts using the same source type. Troubleshooting scheduled scripts Splunk Enterprise logs exceptions thrown by scheduled scripts to the splunkd.log file, located here: $SPLUNK_HOME/var/log/splunk/splunkd.log Check splunkd.log first if expected events do not appear in the expected index after scheduling the scripted input. Shutdown and restart issues Keep these shutdown and restart issues in mind when designing your scripts: Output at least one event at a time This makes it easier to avoid reading a partial event if the script is terminated or crashes. Splunk Enterprise expects events to complete in a timely manner, and has built-in time-outs to prevent truncated or incomplete events. Configure the pipe fd as line-buffered, or write() full events at once. Be sure the events are flushed: line buffered/unbuffered/fflush() Output relatively small batches of events Fetching thousands of event over a few minutes and then outputting them all at once increases the risk of losing data due to a restart. Additionally, outputting small batches of events means your data is searchable sooner and improves script What is with all of the dead links? Hi @Bryan scca, Some of our content on building apps migrated to the Splunk developer portal recently and links broke. I've reviewed and updated the links so that all should be working now. Thanks for bringing this to our attention!
https://docs.splunk.com/Documentation/Splunk/6.3.14/AdvancedDev/ScriptWriting
2020-01-17T16:47:47
CC-MAIN-2020-05
1579250589861.0
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
Not planning to do this module? You can skip to the next one. However, even if you plan to monitor only your vCenter Server, there are some important concepts covered in this module that are as applicable to your instances as they are to physical servers. Even if you do not perform the steps in this module, consider following along to learn more about agent- and WMI-based data collection. This module consists of the following exercises: Install an Agent on a Server watch out! Although there are Uptime Infrastructure Monitor agents for various platforms including Linux, UNIX, and Windows, in this module, you can install an agent on a Linux server. Although agents are installed, they require minimal configuration and management, and have a small resource footprint. They are a low-cost way to get - you have identified on which test Linux server you want to install the agent - an RPM utility is installed (and is in path) on the test Linux server - xinetd is installed on the test Linux server - you have downloaded the agent for this platform ( uptimeagent-6.0.0-linux-x86_64.rpm), and transferred it to the server - Log into the system as user root. - Run the following command: rpm -i uptimeagent-6.0.0-linux-x86_64.rpm The agent install process performs various steps, such as restarting xinetd, and verifying dependencies such as netstatand vmstatexist. - Confirm via the command-line feedback that the installation is complete. Pro Tip Although this procedure was very hands-on, naturally for an actual, large-scale deployment, you can consider a deployment solution such as Puppet, or BigFix for Windows agent installations. Configure Global Agent Collection Now that you have installed an agent on a server, you could add it to Uptime Infrastructure Monitor by using its host name. However, in a more realistic deployment, you would likely be installing agents on many servers. In this scenario, using Auto Discovery can expedite the process if you tell Uptime Infrastructure Monitor how to find all servers with an agent installed. - In the Uptime Infrastructure Monitor Web interface, begin by clicking Config, then click Global Credentials Settings in the left pane. On this page are configuration fields that let you define properties of different metric collection methods, allowing you to automatically discover large groups that share the same properties. - In the Uptime Agent Global Configuration section, click Edit Configuration on the far right. The port used to communicate with is 9998. By default, SSL is not enabled. - Click Save. Validation Step: Test the global setting by entering the hostname of the Linux server you installed the agent on during the previous exercise, and then clicking Test Configuration. This Linux server is now ready to be added to Uptime Infrastructure Monitor as an agent-based Element. Before doing this, let's take a look at how enhanced metrics can similarly be collected for Windows-based servers. Configure Global WMI Collection As an alternative to the Windows Uptime Infrastructure Monitor agent, Windows Management Instrumentation can provide deeper metrics for Uptime Infrastructure Monitor that is similar with agent-based data collection. The advantage is that it makes use of your existing infrastructure, negating the need for agent deployment. All you need to do is provide the WMI administrator information to the Uptime Infrastructure Monitor Monitoring Station, so that it is able to access Windows-based servers. As with global agent settings, the WMI Agentless Global Credentials section of the Global Credentials Settings page lets you input WMI information once at a central point: Configure the settings similar to those shown above: - Windows Domain: The Windows domain in which WMI is implemented. - Username: The name of the account with access to WMI on the Windows domain. - Password: The password for the account with access to WMI on the windows domain. Validation Step: Test the global setting by entering a Windows host that the Monitoring Station can see in the Test Configuration section. You are now ready to find an agent-based Linux server, and WMI Windows server. Add Agent and WMI Servers Using Auto Discovery - Click Infrastructure, then click Auto Discovery in the left pane. - In the Auto Discovery pop-up, confirm that the selection is Discover Servers and Network Devices on your network, and click Next. - In the next step, select Servers with Uptime Agent, and Servers with Windows Management Instrumentation (WMI). In both cases, select the Use [...] Global Configuration option that you have defined in the last two exercises: Enter the subnet servers on the subnet or IP address range are detected, you can make selections to add to your Uptime Infrastructure Monitor inventory. Select the Linux server that's using the Uptime Infrastructure Monitor agent, and select any WMI-managed Windows server. - Scroll to the bottom of the Auto Discovery list, and click Add. As a final step, you receive confirmation that these are now part of your monitored inventory. Click Done. Review Your Current Inventory After adding servers and closing the Auto Discovery window in the previous exercise, the main Uptime Infrastructure Monitor UI window is at the Infrastructure view. Refresh the page (or click Infrastructure) to ensure the latest additions appear immediately. If you followed the Hyper-V or vCenter Server track, your inventory already included the virtual server Element and Infrastructure Groups created over those exercises. In addition, you now see the Linux server and WMI-managed Windows server you added in the previous exercise. Your inventory is nowCenter Server element, and a VM-type Element. Compared to the latter, the Quick Snapshot for an agent- or WMI-based server includes more detail, such as process information, which can be acted upon by Uptime Infrastructure Monitor (for example, Uptime Infrastructure Monitor's action scripts can restart a service as a follow-up remedy to an outage). Because performance metrics are gathered in real time by the Uptime Infrastructure Monitor agent or via WMI, there is nothing yet to display in the Quick Snapshot graphs. After moving through more of this Getting Started Guide, return to this Quick Snapshot to view some data. License Check! Verify how many license spots are free by clicking Config, then clicking License Info in the left pane. The number of used licenses is displayed in the License Information section. In this Getting Started Guide, the next track has you adding network devices. If you plan on following this track, you need to anticipate the number of network devices you plan to add. At minimum, you'll need at least 1. If you have run out of license spots, it's likely you have added a Hyper-V or vCenter Server. The easiest way to free up space is to manually ignore VMs; each VM you ignore opens a license spot for a new Element. Return to the Inventory Detail view for the Hyper-V/vCenter Element (Infrastructure > gear icon > View > Inventory Detail). Select VMs, ESX hosts, or even an entire cluster, then click Add Selected Elements to Ignore. The spots are freed up in your license, which you can verify by clicking Config > License Info. Save Save Save
http://docs.uptimesoftware.com/display/UT/Add+Physical+Servers
2020-01-17T17:22:37
CC-MAIN-2020-05
1579250589861.0
[]
docs.uptimesoftware.com
TOPICS× Create or Update Trait Rules and Segment Rules The create and update worksheets accept a traitRule header that lets you apply multiple rules in a single operation. Follow these instructions to make bulk rule requests. The Bulk Management Tools are not supported by Audience Manager. This tool is provided for convenience and as a courtesy only. For bulk changes, we recommend that you work with the Audience Manager APIs instead. RBAC group permissions assigned in the Audience Manager UI are honored in the Bulk Management Tools. Working with trait rules. Rule builder example. Creating your own rules You can write your own rules outside of Rule Builder. Before you start, be sure to read the documentation that covers things like operators, expression, and required variables. We recommend you review the following:
https://docs.adobe.com/content/help/en/audience-manager/user-guide/reference/bulk-management-tools/bulk-rules.html
2020-01-17T15:57:58
CC-MAIN-2020-05
1579250589861.0
[array(['/content/dam/help/audience-manager.en/help/using/reference/bulk-management-tools/assets/visualrule.png', None], dtype=object) array(['/content/dam/help/audience-manager.en/help/using/reference/bulk-management-tools/assets/coderule.png', None], dtype=object) array(['/content/dam/help/audience-manager.en/help/using/reference/bulk-management-tools/assets/segmentrule.png', None], dtype=object) ]
docs.adobe.com
TransactField AppThis topic is related to TransactField App. TransactField App works with the Temenos Journey Platform and enables mobile workers to do business whenever, and wherever, they require. It offers robust mobile capabilities and improves mobile workforce productivity by optimizing on-site data entry and making it easy for organizations to deliver quality service and measurable results, as well as speed up billing. Additionally, organizations can link the TransactField App to ERPEnterprise resource planning (ERP) is the integrated management of main business processes, often in real-time and mediated by software and technology. and CRMCustomer relationship management (CRM) is an approach to manage a company's interaction with current and potential customers. It uses data analysis about customers' history with a company to improve business relationships with customers, specifically focusing on customer retention and ultimately driving sales growth. systems to deliver work tickets, inspection jobs, and report forms to a mobile workforce – anytime, anywhere. TransactField App runs on Apple and Android devices, Microsoft Windows desktop and tablet and synchronizes them with mobile data capture applications that are built using the Temenos Journey Platform. Interaction between TransactField App and the Journey platform is illustrated with the diagram below. The TransactField App delivers the following capabilities to mobile workers: The following form transaction features supported by TransactField App and Self Service Portal. This table should help guide business on where to deploy their form transaction applications. The Self Service Portal does not support working offline or working in environments with very limited internet connectivity. Next, learn how to get started with TransactField
https://docs.avoka.com/TransactFieldApp/AppOverview.htm
2020-01-17T16:04:11
CC-MAIN-2020-05
1579250589861.0
[]
docs.avoka.com
Orientation BMC Configuration Management Database (BMC CMDB) helps you source, store, monitor, and manage the data related to configuration items (CIs) like IT hardware and software, printers, servers, air conditioners, and so on, including the relationships among them. This data helps you understand your environment better. Data flow in BMC CMDB From a high level, the following video (11:48) shows you the end-to-end steps that are required to take your data into production with BMC Atrium Core — from a third-party source to your production dataset in BMC Configuration Management Database (BMC CMDB). This video is recorded using the earlier version of BMC Atrium Core but the principles are relevant. Data flow in BMC CMDB Data providers, such as discovery applications, put data into BMC CMDB, where it is partitioned into separate datasets. This data is then brought together into a consolidated production dataset that you use as the single, reliable source of reference for your IT environment. Consuming applications, such as Remedy IT Service Management (Remedy ITSM), BMC Asset Management, BMC Service Level Management, applications represented in Business Service Management, and more, use the data in the production dataset. The BMC CMDB user interface attempts to provide a unified experience to the various workflows and tasks that one needs to perform when monitoring and managing a CMDB. ITIL processes and BMC CMDB ITIL standardizes the processes that IT departments use to manage IT hardware and software, such as Problem Management, Incident Management, and Change Management. These standard processes ensure the availability of the critical IT services that sustain a business, like banking systems, ordering systems, and manufacturing systems. Additionally, ITIL Service Asset and Configuration Management needs reliable data about components in the IT environment and needs to understand the impact to key business services when changes occur in that environment. Central to ITIL processes that involve maintaining and managing information about thousands of pieces of hardware and software is the configuration management database. The data in the BMC CMDB feeds the applications that perform ITIL processes. Basic concepts Discovery applications Discovery applications help identify various systems in the network and obtain information about them that is relevant to the CMDB. Applications such as BMC BladeLogic Client Automation Configuration Discovery provide data to the Integrator. Applications such as BMC Discovery provide data directly to the Import dataset. Integrator The Integrator is an integration engine that enables you to transfer data from external data stores to BMC CMDB classes. Import dataset A dataset is a local grouping of data. An import dataset is data in its unprocessed form and cannot be used for ITIL processes until it is normalized and reconciled. Normalization Normalization is the imposition of standards or regulations as defined in the product catalog. The normalization engine standardizes, corrects, and cleans up the data drawn from various discovery sources. Normalization involves: - Updating the name and version information for products and suites based on the definitions in the product catalog. - Setting impact relationships that define dependencies between CIs. - Setting attributes that are customized for specific use cases. Reconciliation Reconciliation is the process in which data from different discovery sources is checked and corrected to maintain consistency while also making sure there is no duplication of data. Production dataset This reliable data helps you understand the environment and the impact to key business services when changes occur in this environment. Sandbox dataset Any changes that need to be made to the production dataset are first tried out in the sandbox dataset. If the implementation is successful, only then are they moved to the production dataset. The sandbox provides a safety mechanism that prevents unintended changes to your production dataset. Federated data Federated data is data stored outside BMC CMDB, but linked to CIs so that it is accessible through BMC CMDB. The most common types of federated data are related information and detailed attributes. Roles with respect to the product, key responsibilities, and documentation The following diagram explains the various product roles, their key responsibilities, how that relates to the various ITSM components, along with various areas of the documentation to find that information. Resources and related topics Wikipedia topic on ITIL InformIT article: What is a CMDB? BMC Discovery
https://docs.bmc.com/docs/ac1902/orientation-836466469.html
2020-01-17T17:38:55
CC-MAIN-2020-05
1579250589861.0
[]
docs.bmc.com
Open module Popup This action will allow you to manually trigger an open popup action for the specified module(Form/TabsPro).. - Select Module. Select the module that you wish to open in popup (Form or TabsPro based on the action selected: Open Action Form Popup, Open TabsPro Popup). - QueryString Parameters. Optionally you can pass parameters through querystrings that can then be used in the module that is currently being opened in popup; their values can be referenced as tokens with ‘QueryString:’ namespace. Using the javascript API you can open Action Form in popup by calling the next javascript method dnnsf.api.actionForm.openPopupById(‘1234’, {‘param’:’valueofparam’,’param2’:’valueofparam2’},true) First parameter is required and is the module id of the Action Form The second parameter is optional and it is a JS object. After the Action Form init the values can be used by calling the QueryString token (eg. [QueryString:param]). The third parameter is optional and tells Action Form if the module should be reinitialized (refreshed). This can be used when you want to refresh the form so it can use the values from the second parameters. Show Condition and Enable Condition are refreshed as well. Default is false.
https://docs.dnnsharp.com/actions/dnn-sharp/open-module-popup.html
2020-01-17T15:22:28
CC-MAIN-2020-05
1579250589861.0
[array(['http://static.dnnsharp.com/documentation/open_module_popup.png', None], dtype=object) ]
docs.dnnsharp.com
All content with label archetype+article+async+data_grid+demo+ec2+fine_grained+grid+hot_rod+hotrod+infinispan+jboss_cache+listener+pojo_cache+release+test+user_guide+userguide+write_through. Related Labels: podcast, expiration, publish, datagrid, coherence, interceptor, server, replication, transactionmanager, dist, partitioning, query, deadlock, intro, contributor_project, lock_striping, jbossas, nexus, guide, schema, cache, amazon, s3, memcached, jcache, api, xsd, maven, documentation, roadmap, wcm, youtube, write_behind, s, streaming, hibernate, getting, aws, interface, custom_interceptor, clustering, setup, eviction, large_object, gridfs, concurrency, out_of_memory, examples, import, index, events, batch, configuration, hash_function, buddy_replication, loader, xa, pojo, cloud, remoting, mvcc, tutorial, notification, presentation, murmurhash2, xml, read_committed, jbosscache3x, distribution, started, jira, cachestore, cacheloader, resteasy, integration, cluster, development, br, websocket, transaction, interactive, xaresource, build, gatein, searchable, scala, installation, cache_server, client, migration, non-blocking, jpa, filesystem, tx, gui_demo, eventing, student_project, client_server, testng, infinispan_user_guide, murmurhash, standalone, snapshot, webdav, repeatable_read, docs, batching, consistent_hash, store, whitepaper, jta, faq, as5, 2lcache, jsr-107, lucene, jgroups, locking, rest more » ( - archetype, - article, - async, - data_grid, - demo, - ec2, - fine_grained, - grid, - hot_rod, - hotrod, - infinispan, - jboss_cache, - listener, - pojo_cache, - release, - test, - user_guide, - userguide, - write_through ) Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today.
https://docs.jboss.org/author/label/archetype+article+async+data_grid+demo+ec2+fine_grained+grid+hot_rod+hotrod+infinispan+jboss_cache+listener+pojo_cache+release+test+user_guide+userguide+write_through
2020-01-17T16:40:45
CC-MAIN-2020-05
1579250589861.0
[]
docs.jboss.org
Improving performance and scalability in .net Just ran across this in an email: Improving .NET Application Performance and Scalability. I've worked with J.D. Meier, who's one of the authors of this guide, and have always been impressed with his knowledge and skill. If you're looking for guidance on how to get more oomph out of your .net code, this is a good guide to refer to.
https://docs.microsoft.com/en-us/archive/blogs/gduthie/improving-performance-and-scalability-in-net
2020-01-17T17:56:11
CC-MAIN-2020-05
1579250589861.0
[]
docs.microsoft.com
AJAX Control Toolkit 10920 Released! I know, I know, we were a whole week late from our original date. What kind of show are we running around here anyway? :) S Seriously there's a lot of big fixes in here - almost 1000 votes worth - including some much-requested Calendar work (thanks Ron). One of the thing the team has been working really hard on is a new testing framework for the Toolkit - one that lets us define tests much more easily so we can broaden our test coverage. The new framework and harness is super cool - but it's a bunch of new code so we're still smoothing out some of the bumps. For several days we thought we were close to getting it perfect, and wanted it to be included with this release. This went on for a few days and it was clear we were spending much more time tweaking the harness than actually addressing Toolkit bugs for the release. We wanted to make sure we got the Toolkit fixes into your hands so yesterday we made the call to detach the harness from the Toolkit and just get the fixes out there. We'll keep hammering away at the harness, it's coming along nicely. David and Kirti have all the dirt about what's available in this release on his blog. Check it out! Download the new Toolkit release here, see release notes here.
https://docs.microsoft.com/en-us/archive/blogs/sburke/ajax-control-toolkit-10920-released
2020-01-17T17:59:52
CC-MAIN-2020-05
1579250589861.0
[]
docs.microsoft.com
Container Platform uses the experimental-qos-reserved parameter as follows: A value of experimental experimental-qos-reserved=memory=50% will allow the Burstable and BestEffort QOS classes to consume half of the memory requested by a higher QoS class. A value of experimental.. To disable swap: $ swapoff -a.
https://docs.openshift.com/container-platform/3.6/admin_guide/overcommit.html
2020-01-17T15:45:08
CC-MAIN-2020-05
1579250589861.0
[]
docs.openshift.com
Creating and configuring reactors A reactor is a class, that much like a projector, listens for incoming events. Unlike projectors however, reactors will not get called when events are replayed. Reactors will only get called when the original event fires. Creating reactors Let’s create a reactor. You can perform this artisan command to create a projector in app\Reactors: php artisan make:reactor BigAmountAddedReactor Registering reactors By default, the package will automatically find and register all reactors found in your application. Alternatively, you can also manually register them in the reactors key of the event-sourcings config file. They can also be added reactor Projectionist::addReactor(BigAmountAddedReactor::class); // you can also add multiple reactors in one go Projectionist::addReactors([ AnotherReactor::class, YetAnotherReactor::class, ]); } } Using reactors This is the contents of a class created by the artisan command mentioned in the section above: namespace App\Reactors; class MyReactor { public function onEventHappened(EventHappended $event) { } }); Mail::to($account->user)->send(new MoreMoneyAddedMailable()); } The order of the parameters giving to an event handling method like onMoneyAdded. We’ll simply pass the uuid to any arguments named $uuid. Manually register\Reactors; use App\Events\MoneyAdded; class BigAmountAddedReactor { /* * Here you can specify which event should trigger which method. */ protected $handlesEvents = [ MoneyAdded::class => 'onMoneyAdded', ]; public function onMoneyAdded(MoneyAdded $event) { // do some work } } This reactor will be created using the container so you may inject any dependency you’d like. In fact, all methods present in $handlesEvent can make use of method injection, so you can resolve any dependencies you need in those methods as well. Any variable in the method signature with the name $event will receive the event you’re listening for. Using default event handling method names In the example above the events are mapped to methods on the reactor using the $handlesEvents property. // in a reactor // ... reactor // ... protected $handlesEvents = [ /* * If this event is passed to the reactor, the `onMoneyAdded` method will be called. */ MoneyAdded::class, ]; Handling a single event You can $handleEvent to the class name of an event. When such an event comes in we’ll call the __invoke method. // in a reactor // ... protected $handleEvent = MoneyAdded::class, public function __invoke(MoneyAdded $event) { } Using a class as an event handler Instead of letting a method on a reactor handle an event you can use a dedicated class. // in a projector // ... protected $handlesEvents = [ /* * If this event is passed to the projector, the `AddMoneyToAccount` class will be called. */ MoneyAdded::class => SendMoneyAddedMail::class, ]; Here’s an example implementation of SendMoneyAddedMail: use App\Events\MoneyAdded; class SendMoneyAddedMail { public function __invoke(MoneyAdded $event) { // do work to send a mail here } }
https://docs.spatie.be/laravel-event-sourcing/v1/using-reactors/creating-and-configuring-reactors/
2020-01-17T15:52:16
CC-MAIN-2020-05
1579250589861.0
[]
docs.spatie.be
How incoming data affects Splunk Enterprise performance This topic discusses how incoming data impacts indexing performance in Splunk Enterprise. A reference Splunk Enterprise indexer can index a significant amount of data in a short period of time - over 20 MB of data per second - or over 1700 GB per day. This is if the server is doing nothing else but consuming data. Performance changes depending on the size and amount of incoming data. Larger events slow down indexing performance. As events increase in size, the indexer uses more system memory to process and index them. If you need more indexing capacity than a single indexer can provide, you must add indexers into the deployment to account for the increased demand.!
https://docs.splunk.com/Documentation/Splunk/6.1.1/Installation/HowincomingdataaffectsSplunkperformance
2020-01-17T17:15:01
CC-MAIN-2020-05
1579250589861.0
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
Managing entitlement packages Entitlement packages are groups of requestable offerings that you want to make available to one or more tenant companies. These packages are created from the Entitlement Packages tab in the Service Catalog workspace, as described in Creating entitlement packages. For overview information about the packages, see Entitlement packages overview. When you create a new tenant or import an existing one, you can make the entitlement packages available to the users belonging to the tenant by selecting from the list of entitlement packages. Within a few minutes, the service offering in the packages listed in the Selected Entitlement Packages table is available for the tenant's users to request in the BMC Cloud Lifecycle Management – My Cloud Services Console. For details, see Creating and importing tenants. From the Tenant Management pane, you can manage the entitlement packages and decide what packages to make available for tenants. As a cloud administrator, you can decide to create new, or edit existing packages and make them available to the tenants. To manage an entitlement package - From the BMC Cloud Lifecycle Management – Administration Console, click the vertical Workspaces menu on the left side of the window and select Tenants. - Click Manage Entitlement Packages. You are directed to the Service Catalog workspace and from there you can create a new entitlement package or edit an existing one. For details, see Creating entitlement packages. To select entitlement packages for tenant users This procedure describes how to select which entitlement packages are made available to a tenant's users and how to set the tenant's contract end date. - From the Administration Console, click the vertical Workspaces menu and select Tenants. - In the Tenant Management pane, select a tenant from the list of tenants on the left side of the pane and click Edit Tenant . In the Edit Tenant dialog box, in the Entitlements tab, the entitlement packages already assigned to the tenant are displayed. - From the list of Available Entitlement Packages, select the packages you want to apply to this tenant. - Click Add > to move them to the Selected Entitlement Packages area. You can use the search field to search for the preferred entitlement package. For details about the entitlement packages, see Creating entitlement packages. - Click Save. - To verify whether the entitlement package is selected successfully, click the Activities tab. The Update Tenant Quota activity and its status are displayed. Related topics Entitlement Packages workarea overview Troubleshooting entitlements
https://docs.bmc.com/docs/cloudlifecyclemanagement/45/administering-the-product/tenants/managing-entitlement-packages
2020-01-17T17:33:42
CC-MAIN-2020-05
1579250589861.0
[]
docs.bmc.com
New VR Detection Feature What’s new?What’s new? The system is now able to detect when visitors are accessing content using VR devices or devices in VR mode, and can show stats of this activity in the Statistics tabs. What can this new feature do for me?What can this new feature do for me? As an Advertiser, you can now be aware of when visitors are accessing your ads using VR devices such as Oculus Quest, Samsung Gear VR and others, and target them when creating campaigns. Both Advertisers and Publishers can now monitor this emerging way of accessing content, and bear it in mind when creating campaigns and ad zones. DetailsDetails There is now a VR subtab in both the Advertiser and Publisher Statistics tabs, showing stats for visitors using devices in VR Mode. The Advertiser and Publisher Statistics tabs can be filtered by VR also. Advertisers can now target devices in VR mode in Step 2: Targeting & Advanced of creating a campaign. Note: A VR device will be either a dedicated device such as the Oculus Rift/Quest, or a phone such as the Samsung S6+ placed in a VR headset.
https://docs.exads.com/blog/2019/09/12/vr-detection.html
2020-01-17T16:59:00
CC-MAIN-2020-05
1579250589861.0
[array(['/blog/assets/vr-statistics.png', 'screenshot'], dtype=object)]
docs.exads.com
When users are scrolling through pages we can update the browser url and page title. This allows the current page to be shared or bookmarked. We can achieve this by listening for the page event. The page event contains the url and title of the current page in view. The History API can be used to update the url. ias.on('page', (event) => {// update the titledocument.title = event.title;// update the urllet state = history.state;history.replaceState(state, event.title, event.url);}); View this behaviour in a live demo This feature is still work in progress
https://docs.infiniteajaxscroll.com/advanced/history
2020-01-17T15:49:00
CC-MAIN-2020-05
1579250589861.0
[]
docs.infiniteajaxscroll.com
All content with label aws+buddy_replication+build+cachestore+data_grid+deadlock+distribution+gridfs+infinispan+installation+maven+repeatable_read. Related Labels: expiration, publish, datagrid, coherence, interceptor, server, rehash, replication, transactionmanager, dist, release, partitioning, query, contributor_project, archetype, lock_striping, jbossas, nexus, guide, listener, state_transfer, cache, s3, amazon, grid, memcached, jcache, test, api, xsd, ehcache, documentation, wcm, userguide, write_behind, 缓存, ec2, s, streaming, hibernate, getting, interface, clustering, setup, mongodb, eviction, large_object, out_of_memory, concurrency, examples, jboss_cache, import, index, events, configuration, hash_function, loader, colocation, write_through, cloud, remoting, mvcc, tutorial, notification, murmurhash2, jbosscache3x, read_committed, xml, started, cacheloader, resteasy, hibernate_search, cluster, development, br, async, transaction, interactive, xaresource, gatein, hinting, searchable, demo, scala, client, non-blocking, migration, rebalance, filesystem, jpa, tx, user_guide, gui_demo, eventing, student_project, client_server, testng, infinispan_user_guide, murmurhash, hotrod, snapshot, webdav, docs, consistent_hash, batching, store, jta, faq, 2lcache, jsr-107, lucene, jgroups, locking, rest, hot_rod more » ( - aws, - buddy_replication, - build, - cachestore, - data_grid, - deadlock, - distribution, - gridfs, - infinispan, - installation, - maven, - repeatable_read ) Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today.
https://docs.jboss.org/author/label/aws+buddy_replication+build+cachestore+data_grid+deadlock+distribution+gridfs+infinispan+installation+maven+repeatable_read
2020-01-17T15:57:33
CC-MAIN-2020-05
1579250589861.0
[]
docs.jboss.org
Speed¶ CPython, the most commonly used implementation of Python, is slow for CPU bound tasks. PyPy is fast. Using a slightly modified version of David Beazley’s CPU bound test code (added loop for multiple tests), you can see the difference between CPython and PyPy’s processing. # PyPy $ ./pypy -V Python 2.7.1 (7773f8fc4223, Nov 18 2011, 18:47:10) [PyPy 1.7.0 with GCC 4.4.3] $ ./pypy measure2.py 0.0683999061584 0.0483210086823 0.0388588905334 0.0440690517426 0.0695300102234 # CPython $ ./python -V Python 2.7.1 $ ./python measure2.py 1.06774401665 1.45412397385 1.51485204697 1.54693889618 1.60109114647 Context¶ The GIL¶ The GIL (Global Interpreter Lock) is how Python allows multiple threads to operate at the same time. Python’s memory management isn’t entirely thread-safe, so the GIL is required to prevent multiple threads from running the same Python code at once. David Beazley has a great guide on how the GIL operates. He also covers the new GIL in Python 3.2. His results show that maximizing performance in a Python application requires a strong understanding of the GIL, how it affects your specific application, how many cores you have, and where your application bottlenecks are. The GIL¶ Special care must be taken when writing C extensions to make sure you register your threads with the interpreter. C Extensions¶ Cython¶ Cython implements a superset of the Python language with which you are able to write C and C++ modules for Python. Cython also allows you to call functions from compiled C libraries. Using Cython allows you to take advantage of Python’s strong typing of variables and operations. Here’s an example of strong typing with Cython: def primes(int kmax): """Calculation of prime numbers with additional Cython keywords""" implementation of an algorithm to find prime numbers has some additional keywords compared to the next one, which is implemented in pure Python: def primes(kmax): """Calculation of prime numbers in standard Python syntax""" p = range(1000) result = [] if kmax > 1000: kmax = 1000 k = 0 n = 2 while k < kmax: i = 0 while i < k and n % p[i] != 0: i = i + 1 if i == k: p[k] = n k = k + 1 result.append(n) n = n + 1 return result Notice that in the Cython version you declare integers and integer arrays to be compiled into C types while also creating a Python list: def primes(int kmax): """Calculation of prime numbers with additional Cython keywords""" cdef int n, k, i cdef int p[1000] result = [] def primes(kmax): """Calculation of prime numbers in standard Python syntax""" p = range(1000) result = [] What is the difference? In the upper Cython version you can see the declaration of the variable types and the integer array in a similar way as in standard C. For example cdef int n,k,i in line 3. This additional type declaration (i.e. integer) allows the Cython compiler to generate more efficient C code from the second version. While standard Python code is saved in *.py files, Cython code is saved in *.pyx files. What’s the difference in speed? Let’s try it! import time #activate pyx compiler import pyximport pyximport.install() #primes implemented with Cython import primesCy #primes implemented with Python import primes print "Cython:" t1= time.time() print primesCy.primes(500) t2= time.time() print "Cython time: %s" %(t2-t1) print "" print "Python" t1= time.time() print primes.primes(500) t2= time.time() print "Python time: %s" %(t2-t1) These lines both need a remark: import pyximport pyximport.install() The pyximport module allows you to import *.pyx files (e.g., primesCy.pyx) with the Cython-compiled version of the primes function. The pyximport.install() command allows the Python interpreter to start the Cython compiler directly to generate C code, which is automatically compiled to a *.so C library. Cython is then able to import this library for you in your Python code, easily and efficiently. With the time.time() function you are able to compare the time between these 2 different calls to find 500 prime numbers. On a standard notebook (dual core AMD E-450 1.6 GHz), the measured values are: Cython time: 0.0054 seconds Python time: 0.0566 seconds And here is the output of an embedded ARM beaglebone machine: Cython time: 0.0196 seconds Python time: 0.3302 seconds Concurrency¶ Concurrent.futures¶ The concurrent.futures module is a module in the standard library that provides a “high-level interface for asynchronously executing callables”. It abstracts away a lot of the more complicated details about using multiple threads or processes for concurrency, and allows the user to focus on accomplishing the task at hand. The concurrent.futures module exposes two main classes, the ThreadPoolExecutor and the ProcessPoolExecutor. The ThreadPoolExecutor will create a pool of worker threads that a user can submit jobs to. These jobs will then be executed in another thread when the next worker thread becomes available. The ProcessPoolExecutor works in the same way, except instead of using multiple threads for its workers, it will use multiple processes. This makes it possible to side-step the GIL; however, because of the way things are passed to worker processes, only picklable objects can be executed and returned. Because of the way the GIL works, a good rule of thumb is to use a ThreadPoolExecutor when the task being executed involves a lot of blocking (i.e. making requests over the network) and to use a ProcessPoolExecutor executor when the task is computationally expensive. There are two main ways of executing things in parallel using the two Executors. One way is with the map(func, iterables) method. This works almost exactly like the builtin map() function, except it will execute everything in parallel. from concurrent.futures import ThreadPoolExecutor import requests def get_webpage(url): page = requests.get(url) return page pool = ThreadPoolExecutor(max_workers=5) my_urls = ['']*10 # Create a list of urls for page in pool.map(get_webpage, my_urls): # Do something with the result print(page.text) For even more control, the submit(func, *args, **kwargs) method will schedule a callable to be executed ( as func(*args, **kwargs)) and returns a Future object that represents the execution of the callable. The Future object provides various methods that can be used to check on the progress of the scheduled callable. These include: - cancel() - Attempt to cancel the call. - cancelled() - Return True if the call was successfully cancelled. - running() - Return True if the call is currently being executed and cannot be cancelled. - done() - Return True if the call was successfully cancelled or finished running. - result() - Return the value returned by the call. Note that this call will block until the scheduled callable returns by default. - exception() - Return the exception raised by the call. If no exception was raised then this returns None. Note that this will block just like result(). - add_done_callback(fn) - Attach a callback function that will be executed (as fn(future)) when the scheduled callable returns. from concurrent.futures import ProcessPoolExecutor, as_completed def is_prime(n): if n % 2 == 0: return n, False sqrt_n = int(n**0.5) for i in range(3, sqrt_n + 1, 2): if n % i == 0: return n, False return n, True PRIMES = [ 112272535095293, 112582705942171, 112272535095293, 115280095190773, 115797848077099, 1099726899285419] futures = [] with ProcessPoolExecutor(max_workers=4) as pool: # Schedule the ProcessPoolExecutor to check if a number is prime # and add the returned Future to our list of futures for p in PRIMES: fut = pool.submit(is_prime, p) futures.append(fut) # As the jobs are completed, print out the results for number, result in as_completed(futures): if result: print("{} is prime".format(number)) else: print("{} is not prime".format(number)) The concurrent.futures module contains two helper functions for working with Futures. The as_completed(futures) function returns an iterator over the list of futures, yielding the futures as they complete. The wait(futures) function will simply block until all futures in the list of futures provided have completed. For more information, on using the concurrent.futures module, consult the official documentation. threading¶ The standard library comes with a threading module that allows a user to work with multiple threads manually. Running a function in another thread is as simple as passing a callable and its arguments to Thread’s constructor and then calling start(): from threading import Thread import requests def get_webpage(url): page = requests.get(url) return page some_thread = Thread(get_webpage, '') some_thread.start() To wait until the thread has terminated, call join(): some_thread.join() After calling join(), it is always a good idea to check whether the thread is still alive (because the join call timed out): if some_thread.is_alive(): print("join() must have timed out.") else: print("Our thread has terminated.") Because multiple threads have access to the same section of memory, sometimes there might be situations where two or more threads are trying to write to the same resource at the same time or where the output is dependent on the sequence or timing of certain events. This is called a data race or race condition. When this happens, the output will be garbled or you may encounter problems which are difficult to debug. A good example is this Stack Overflow post. The way this can be avoided is by using a Lock that each thread needs to acquire before writing to a shared resource. Locks can be acquired and released through either the contextmanager protocol (with statement), or by using acquire() and release() directly. Here is a (rather contrived) example: from threading import Lock, Thread file_lock = Lock() def log(msg): with file_lock: open('website_changes.log', 'w') as f: f.write(changes) def monitor_website(some_website): """ Monitor a website and then if there are any changes, log them to disk. """ while True: changes = check_for_changes(some_website) if changes: log(changes) websites = ['', ... ] for website in websites: t = Thread(monitor_website, website) t.start() Here, we have a bunch of threads checking for changes on a list of sites and whenever there are any changes, they attempt to write those changes to a file by calling log(changes). When log() is called, it will wait to acquire the lock with with file_lock:. This ensures that at any one time, only one thread is writing to the file.
https://docs.python-guide.org/scenarios/speed/
2020-01-17T16:47:38
CC-MAIN-2020-05
1579250589861.0
[array(['https://d33wubrfki0l68.cloudfront.net/f33b6d6a18c20fd1a83677c8ea09be0a08e43231/e3c2f/_images/33175625804_e225b90f3e_k_d.jpg', 'https://d33wubrfki0l68.cloudfront.net/f33b6d6a18c20fd1a83677c8ea09be0a08e43231/e3c2f/_images/33175625804_e225b90f3e_k_d.jpg'], dtype=object) ]
docs.python-guide.org
This page exists within the Old ArtZone Wiki section of this site. Read the information presented on the linked page to better understand the significance of this fact. Author: White Knight Tools Needed Support Files You will need a pair of RED-BLUE lens glasses to view the Images created by this process. Here is the URL for a good Quality set of Glasses Create a scene in Bryce, or your Favorite rendering program. Get a good Camera Position and save it using one of the Camera Dots. Render this image and save it as RightEye. Open the Camera Control and change the camera Settings. Adjust the X position more Positive (.1 to 2.0 ) depends on distance from the subject Save this camera Position to a camera Dot Render this Image and save it as LeftEye. By shifting the Camera position along the X axis, you have created slight change in the Camera location and also the perspective of the Image. These two perspectives will be used as the right eye view, and the left eye view, and combined into or 3D Red-Blue image as described in the Photo Shop portion below. You will need a pair of RED-BLUE lens glasses to view the Images created by this process. RED Lens over the Left Eye, and BLUE Lens over the Right Eye. Two Support Files, One is the Left Eye image, and the other is the Right Eye image Open Photoshop. Image Mode must be RGB For this to work. (note: Not all color images will work. Gray Scale Images work Best) (Convert each image to grayscale, then convert back to RGB, the image will still be shades of Gray, but there will be Separate Red, Green and Blue Channels to work with.) Open the Left and Right Images. Select the Left Image, Copy the Whole Image. Open a New Image. Switch to the default colors for Foreground and Background (black / white) Paste the Copy of the Left image into Layer 1 of the New Image. Close the Left image. Select the Right Image, Copy the Whole Image. Paste the Copy of the Right image into Layer 2 of the New Image. Close the Right image. Select Layer 1 or the Left Image. Layer 1 is selected. Layer 2 is turned off. Then the Channel Pallet is selected. The RED Channel is then selected. Ctrl-A to select the whole Channel, then press Delete. (note: the Default Foregraound and Background colors must be set to Black and White for this to work) The RED Channel of the Left Image is Erased (filled with White), Leaving only the Blue and Green Channels intact. Now Select the RGB channel. This Image should be very Red. Then go back to the Layers Pallet. Layer 2 is selected. Layer 1 is Hidden. The Select the Channels Pallet. Select the Blue Channel and Erase or Fill it with White. Select the Green Channel and Erase or Fill it with White. This leavs only the RED channel intact. Now Select the RGB channel. This Image should be very Blue-Green. Select the Layers Channel Again. Un-Hide Layer 1, but make Sure Layer 2 is currently selected. Change the Blending Mode of Layer 2 to MULTIPLY. This will combine the Red and Blue Images. Now is the Time to Put on your RED-BLUE Glasses. RED Lense over the LEFT eye, BLUE Lens over the RIGHT eye. You Should now See your 3D image pop out at you. We will now fix and adjust the image in the next step. When you View the Image with the Red Blue glasses, each eye will only see the filtered image for that eye, creating the effect of 3D. If the Camera Position was shifted too far along the X Axis when the images were rendered, then the Separation between the RED and BLUE images Might be too Great, Causing Eye Strain to see the 3D effect. This is Fixed very easily, by Selecting either Layer 1 or Layer 2, and then using the Nudge Tool. Nudge the Whole layer Left or Right to Decrease the Amount of Separation. This adjustment is best Done While Wearing your Red-Blue Glasses, so you can tell when the 3D effect POPS into View. Once you are Happy with the 3D effect, you can Flatten the Image, Crop it to Remove any Blue or Red Fringes on the Sides, and then Save it. Important: Only Nudge or shift to the LEFT or RIGHT, Never Nudge or Shift UP and DOWN. The Whole 3D effect is Controlled by the Amount of Shift Left or Right between the Red and Blue images. The More the RED is shifted to the Left, the More the Image will Seem to Jump Off of the Screen, or the closer to you it will appear. The More the RED is shifted to the Right, the More the Image will Seem to Sink into the Screen , or the Further away from you it will appear. Most 3D effects are best viewed when the red is left shifted so the image appears to pop out from the screen. A brief primer on how the RED-BLUE process works. Our Brain Processes Two Separate Images, one from Each Eye, to Give us Depth Perception. If you Focus on a Point on a Wall or an Object, and then Close one eye, and then Open and Close the Other Eye, the Image you See will seem to Shift slightly from Left to Right. This is because Each Eye, has a slightly different Line of Sight to the Object. Eyes are spaced 2 to 3 inches apart. If you draw a straight line from each eye to the object you are focusing on, there will be two different Paths, or Lines of sight to that object. Now Pretend to Place a flat Screen between you and the Object. The Lines of Sight from each Eye will Intersect this Flat Screen at Two Different Points. If we Color the Left Eye's Point Red, and we Color the Right Eye's Point Blue, we will Have a Separate RED and BLUE Point, for any given Focus Point in our Line of Sight. If we are to Move or Scan our Line of Sight in a Horizontal Line from one side of the Screen to the Other, and Create a MAP of all of the RED and BLUE Points, we would have a single SCAN LINE for our Image. Areas where the RED and BLUE colors intersect or overlap, must be Blended. The exact Shade or intensity of RED and BLUE is determined by the Color of the Focus Point, or the Shade of Gray of the Focus Point. Repeat this same Horizontal Scan Process for each Vertical step, in our image, and we can combine all of these Horizontal Scan Lines into one Image. This is the Reason why, we can only Shift the RED and BLUE separation Left or Right. We are only changing the Horizontal Scan Line. If we shifted the RED and BLUE up or Down, we would Distort the Image beyond recognition. Not to mention give our self a Headache. The same thing happens if the Amount of separation becomes to great. Our eyes try to Cross Focus, and can cause a headache. See the Sample Drawing for a Visual diagram of the Above description. In the Diagram, the Separation has be greatly exaggerated, the red lines show the left eye line of sight, and the blue lines show the right eye line of sight.
http://docs.daz3d.com/doku.php/artzone/pub/tutorials/otherapps/otherapps-pspphoto07
2020-01-17T16:16:33
CC-MAIN-2020-05
1579250589861.0
[]
docs.daz3d.com
Table of Contents Product Index This stunning set of Glass, Ceramic and Metal Vases for Iray will become your “go to” product for that finishing touch of detail. With a wide variety of shapes, colors and materials you will have just the right piece to add beauty and elegance to all of your scenes. This set includes Floor Vases, Small Accent Vases, Standard Flower Vases and many in between. A total of 25 beautiful vases, modeled from real life.
http://docs.daz3d.com/doku.php/public/read_me/index/24055/start
2020-01-17T16:03:10
CC-MAIN-2020-05
1579250589861.0
[]
docs.daz3d.com
Testing the Application Before you release your Dynamics NAV application, you must test its functionality. Testing is an iterative process. It is important to create repeatable tests, and it is helpful to create tests that can be automated. This topic describes the features in Microsoft Dynamics NAV 2018 that help you test the business logic in your application and some best practices for testing your Dynamics NAV application. Test Features Microsoft Dynamics NAV 2018 includes the following features to help you test your application: Test codeunits Test runner codeunits Test pages UI handlers ASSERTERROR statement Test with permission sets For more information, see Testing with Permission Sets. Test Codeunits You write test functions as C/AL code in the test codeunits. When a test codeunit runs, it executes the OnRun function, and then executes each test function in the codeunit. By default, each test function runs in a separate database transaction, but you can use the TransactionModel Property on test functions and the TestIsolation Property on test runner codeunits to control the transactional behavior. By default, the results of a test codeunit are displayed in a message window, but you can use the OnAfterTestRun Trigger on a test runner codeunit to capture the results. The outcome of a test function is either SUCCESS or FAILURE. If any error is raised by either the code that is being tested or the test code, then the outcome is FAILURE and the error is included in the results log file. Even if the outcome of one test function is FAILURE, the next test functions are still executed. The functions in a test codeunit are one of the following types: Test function Handler function Normal function For more information, see How to: Create Test Codeunits and Test Functions. Test Runner Codeunits You use test runner codeunits to manage the execution of test codeunits and to integrate with other test management, execution, and reporting frameworks. By integrating with a test management framework, you can automate your tests and enable them to run unattended. Test runner codeunits include the following triggers: - You can use these triggers to perform preprocessing and postprocessing, such as initialization or logging test results. If you implement the OnBeforeTestRun trigger, then it executes before each test function executes. If you implement the OnAfterTestRun trigger, then it executes after each test function executes and also suppresses the automatic display of the results message. Note The OnBeforeTestRun and OnAfterTestRun triggers are optional. By default, they are not available on a test runner codeunit. To implement these triggers, you must manually add them as functions and you must specify the correct signature. Warning The OnBeforeTestRun and OnAfterTestRun triggers always run in their own transactions, regardless of the value of the TestIsolation Property, the value of the TransactionModel Property, or the outcome of a test function. For more information, see How to: Create a Test Runner Codeunit. Test Pages Test pages mimic actual pages but do not present any UI on a client computer. Test pages let you test the code on a page by using C/AL to simulate user interaction with the page. There are two types of test pages: TestPage, which is a regular page and can be any kind of page. This includes page parts or subpages. TestRequestPage, which represents the request page on a report. You can access the fields on a page and the properties of a page or a field by using the dot notation. You can open and close test pages, perform actions on the test page, and navigate around the test page by using C/AL functions. For more information, see Testing Pages. UI Handlers To create tests that can be automated, you must handle cases when user interaction is requested by code that is being tested. UI handlers run instead of the requested UI. UI handlers provide the same exit state as the UI. For example, a test function that has a FunctionType of ConfirmHandler handles CONFIRM function calls. If code that is being tested calls the CONFIRM function, then the ConfirmHandler function is called instead of the CONFIRM function. You write code in the ConfirmHandler function to verify that the expected question is displayed by the CONFIRM function and you write C/AL code to return the relevant reply. The following table describes the available UI handlers. You create a specific handler for each page that you want to handle and a specific report handler for each report that you want to handle. If you run a test codeunit from a test runner codeunit, then any unhandled UI in the test functions of the test codeunit causes a failure of the test. If you do not run the test codeunit from a test runner codeunit, then any unhandled UI is displayed as it typically would. For more information, see How to: Create Handler Functions. ASSERTERROR Keyword When you test your application, you should test that your code performs as expected under both successful and failing conditions. These are called positive and negative tests. To test how your application performs under failing conditions, you can use the ASSERTERROR keyword. The ASSERTERROR keyword specifies that an error is expected at run time in the statement that follows the ASSERTERROR keyword. If a simple or compound statement that follows the ASSERTERROR keyword causes an error, then execution successfully continues to the next statement in the test function. If a statement that follows the ASSERTERROR keyword does not cause an error, then the ASSERTERROR statement itself fails with an error, and the test function that is running produces a FAILURE result. For more information, see C/AL ASSERTERROR Statements. Testing Best Practices We recommend the following best practices for designing your application tests: Test code should be kept separate from the code that is being tested. That way, you can release the tested code to a production environment without releasing the test code. Test code should test that the code being tested works as intended both under successful and failing conditions. These are called positive and negative tests. The positive tests validate that the code being tested works as intended under successful conditions. The negative tests validate that the code being tested work as intended under failing conditions. In positive tests, the test function should validate the results of application calls, such as return values, state changes, or database transactions. In negative tests, the test function should validate that the intended errors occur, error messages are presented, and the data has the expected values. Automated tests should not require user intervention. Tests should leave the system in the same well-known state as when the test started so that you can re-run the test or run other tests in any order and always start from the same state. Test execution and reporting should be fast and able to integrate with the test management system so that the tests can be used as check-in tests or other build verification tests, which typically run on unattended servers. Create test functions that follow the same pattern: Initialize and set up the conditions for the test. Invoke the business logic that you want to test. Validate that the business logic performed as expected. Only use hardcoded values in tests when you really need it. For all other data, consider using random data. For example, you want to test the Ext. Doc. No. Mandatory field in the Purchases & Payables Setup table. To do this you need to create and post typical purchase invoice. The typical purchase invoice line specifies an amount. For most tests, it does not matter exactly what amount. For inspiration, see the use of the GenerateRandomCodefunction in the tests that are included in the TestToolkit folder on the Dynamics NAV product media. For more information, see Random Test Data. Monitor code coverage. For more information, see Code Coverage. See Also Application Test Automation Testing Pages How to: Run Automated ApplicationTests Walkthrough: Testing Purchase Invoice Discounts Walkthrough: Create a Test with Confirmation Dialog Random Test Data Feedback
https://docs.microsoft.com/en-us/dynamics-nav/testing-the-application
2020-01-17T16:53:40
CC-MAIN-2020-05
1579250589861.0
[]
docs.microsoft.com
Getting Started with Kubernetes and OpenShift These are the steps required to set up a Redis Enterprise Software Cluster with OpenShift. Prerequisites: - An OpenShift cluster installed (3.x or 4.x) with at least three nodes (each meeting the minimum requirements for a development installation - The kubectl package installed at version 1.9 or higher - The OpenShift cli installed Step 1: Login - Log in to your OpenShift account as a super admin (so you have access to all the default projects). Create a new project, fill in the name and other details for the project, and click Create. Click on “admin” (upper right corner) and then “Copy Login.” Paste the login command into your shell; it should look something like this: oc login –token=your$login$token Next, verify that you are using the newly created project. Type: oc project <your project name> This will shift to your project rather than the default project (you can verify the project you’re currently using with the oc project command). Step 2: Get deployment files Clone this repository, which contains the deployment files: git clone Specifically for the custom resource (cr) yaml file, you may also download and edit one of the files in the example folder. Step 3: Prepare your yaml files Let’s look at each yaml file to see what requires editing: The scc (Security Context Constraint) yaml defines the cluster’s security context constraints, which we will apply to our project later on. We strongly recommend not changing anything in this yaml file. Apply the file: oc apply -f scc.yaml You should receive the following response: securitycontextconstraints.security.openshift.io “redis-enterprise-scc” configured Now you need to bind the scc to your project by typing: oc adm policy add-scc-to-group redis-enterprise-scc system:serviceaccounts:your_project_name (If you do not remember your project name, run “oc project”) The bundle file includes several declarations: - rbac (Role-Based Access Control) defines who can access which resources. The Operator application requires these definitions to deploy and manage the entire Redis Enterprise deployment (all cluster resources within a namespace). These include declaration of rules, role and rolebinding. - crd declaration, creating a CustomResourceDefinition for your Redis Enterprise Cluster resource. This provides another API resource to be handled by the k8s API server and managed by the operator we will deploy next - operator deployment declaration, creates the operator deployment, which is responsible for managing the k8s deployment and lifecycle of a Redis Enterprise Cluster. Among many other responsibilities, it creates a stateful set that runs the Redis Enterprise nodes, as pods. The yaml contains the latest image tag representing the latest Operator version available. This yaml should be applied as-is, without changes. To apply it: kubectl apply -f openshift.bundle.yaml You should receive the following response: role.rbac.authorization.k8s.io/redis-enterprise-operator created serviceaccount/redis-enterprise-operator created rolebinding.rbac.authorization.k8s.io/redis-enterprise-operator created customresourcedefinition.apiextensions.k8s.io/redisenterpriseclusters.app.redislabs.com configured deployment.apps/redis-enterprise-operator created Now, verify that your redis-enterprise-operator deployment is running: kubectl get deployment -l name=redis-enterprise-operator A typical response will look like this: NAME READY UP-TO-DATE AVAILABLE AGE redis-enterprise-operator 1/1 1 1 0m36s If you’re deploying a service broker, also apply the sb_rbac.yaml file. The sb_rbac (Service Broker Role-Based Access Control) yaml defines the access permissions of the Redis Enterprise Service Broker. We strongly recommend not changing anything in this yaml file. To apply it, run: kubectl apply -f sb_rbac.yaml You should receive the following response: clusterrole.rbac.authorization.k8s.io/redis-enterprise-operator-sb configured clusterrolebinding.rbac.authorization.k8s.io/redis-enterprise-operator configured The redis-enterprise-cluster_rhel yaml defines the configuration of the newly created resource: Redis Enterprise Cluster. This yaml could be renamed your_cluster_name.yaml to keep things tidy, but this isn’t a mandatory step. This yaml can be edited to the required use case, however, the sample provided can be used for test/dev and quick start purposes. Here are the main fields you may review and edit: - name: “your_cluster_name” (e.g. “demo-cluster”) - nodes: number_of_nodes_in_the_cluster (Must be an uneven number of at least 3 or greater—here’s why) uiServiceType: service_type Service type value can be either ClusterIP or LoadBalancer. This is an optional configuration based on k8s service types. The default is ClusterIP. storageClassName: “gp2“ This specifies the StorageClass used for your nodes’ persistent disks. For example, AWS uses “gp2” as a default, GKE uses “standard” and Azure uses "default"). redisEnterpriseNodeResources: The compute resources required for each node. limits – specifies the max resources for a Redis node requests – specifies the minimum resources for a Redis node For example: limits cpu: “4000m” memory: 4Gi requests cpu: “4000m” memory: 4Gi The default (if unspecified) is 4 cores (4000m) and 4GB (4Gi).Note -Resource limits should equal requests. Learn why.. serviceBrokerSpec – enabled: <false/true> This specifies persistence for the Service Broker with an “enabled/disabled” flag. The default is “false.” persistentSpec: storageClassName: “gp2“ redisEnterpriseImageSpec: This configuration controls the Redis Enterprise version used, and where it is fetched from. This is an optional field. The Operator will automatically use the matching RHEL image version for the release. imagePullPolicy: IfNotPresent Repository: redislabs/redis versionTag: 5.2.10-22 The version tag, as it appears on your repository (e.g. on DockerHub). Step 4: Create your Cluster Once you have your_cluster_name yaml set, you need to apply it to create your Redis Enterprise Cluster: kubectl apply -f your_cluster_name.yaml Run kubectl get rec and verify that creation was successful (rec is a shortcut for “RedisEnterpriseClusters”). You should receive a response similar to the following: NAME AGE Your_cluster_name 17s Your Cluster will be ready shortly, typically within a few minutes. To check the cluster status, type the following: kubectl get pod You should receive a response similar to the following: Next, create your databases. Step 5: Create a database In order to create your database, we will log in to the Redis Enterprise UI. First, apply port forwarding to your Cluster: kubectl port-forward your_cluster_name-0 8443:8443Note - - your_cluster_name-0 is one of your cluster pods. You may consider running the port-forward command in the background. - The Openshift UI provides tools for creating additional routing options, including external routes. These are covered in RedHat Openshift documentation. Next, create your database. Open a browser window and navigate to localhost:8443 In order to retrieve your password, navigate to the OpenShift management console, select your project name, go to Resources->Secrets->your_cluster_name Retrieve your password by selecting “Reveal Secret.”Warning -Do not change the default admin user password in the Redis Enterprise web UI. Changing the admin password impacts the proper operation of the K8s deployment. Follow the interface’s instructions to create your database.
https://docs.redislabs.com/latest/platforms/openshift/
2020-01-17T17:19:22
CC-MAIN-2020-05
1579250589861.0
[array(['../../images/rs/getting-started-kubernetes-openshift-image6.png', 'getting-started-kubernetes-openshift-image6'], dtype=object) ]
docs.redislabs.com
Public Registration Public registration will reduce the time for the administrator to take actions such as registering a new user. Your customers / contractors will be able to register themselves on the system. You can enable public registration on the Configuration/User registration/Public registration page. In the settings you need to select a group of users to be assigned to the registered users. There are also a number of options for design the registration page. Registered users receive a notification to the mail that contains the login details. It is also possible to notify the administrator about a new user. To protect against spam, reCAPTCHA is provided, which must be enabled on the "Security Settings" page.
https://docs.rukovoditel.net/index.php?p=14
2020-01-17T17:37:25
CC-MAIN-2020-05
1579250589861.0
[array(['img/1562390234_user_registration_en.png', None], dtype=object) array(['img/1562390281_user_registration_form_en.png', None], dtype=object)]
docs.rukovoditel.net
In order to add a photo or image to a Messages email, You're going to need to learn a little coding. Don't worry, I'm not an engineer either! I promise you can do this:) If you need help, just open a live chat and we can help you out. Step 1: Upload your images to the File Vault. Tips: - Click the "share" button and paste the link into a text document. You'll need this link in a few minutes. - Don't delete this file. If you delete the file, the image will not show up in peoples email. Step 2: Create a new message Step 3: Click the </> button in the Messages tool bar to open the code editor. Text Editor: Code Editor: If you look closely, you will see all the text you wrote surrounded by code. Step 4: Place you cursor at the very beginning of the email, before the first <DIV>, hit enter twice to give yourself some room to work. Then, move your cursor back to the top of the editor. Step 5: Copy and Paste the following string into the code editor. <div style="text-align: center;"><img src="YOUR FILE VAULT LINK HERE" style="width:100px;"></div> Step 6: Replace YOUR FILE VAULT LINK GOES HERE with your full File Vault link (Step 1) including the https part. Also KEEP THE QUOTATION MARKS around the link. Tip: Change the image size by increasing or decreasing the width from 100px. Tip: Before you send an email to your customers, send a test email to yourself and see how it looks on both desktop and mobile. Step 7: Click </> button to preview the image in your text editor. Code Editor: Text Editor: Add a Floating Image If you want to add an image to the middle of your email, just add the code where you want it to show up. Example: I can add an image below the introduction "Hi Everyone," and wrap the text around it. Float Left <div><img src="" style="width:100px; float: left; padding-right: 15px;"> Code Editor: Text Editor: For more advanced options, visit
https://docs.scoutforpets.com/en/articles/2580826-add-an-image-messages
2020-01-17T16:18:04
CC-MAIN-2020-05
1579250589861.0
[array(['https://downloads.intercomcdn.com/i/o/91940291/c7d4b95e9c50a3c3914239bf/Screen+Shot+2018-12-18+at+3.33.15+PM.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/91940632/d61881ca976d4b8a4c16b658/Screen+Shot+2018-12-18+at+3.34.15+PM.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/91971354/270b685ba77c1851a9d4ed90/Screen+Shot+2018-12-18+at+6.10.26+PM.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/91971255/794185382832b588ef1146c9/Screen+Shot+2018-12-18+at+6.09.43+PM.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/91974730/d82bd271e5f689e4a16c1273/Screen+Shot+2018-12-18+at+6.38.23+PM.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/91975296/1c0101e21c7d16d844afa7c9/Screen+Shot+2018-12-18+at+6.44.30+PM.png', None], dtype=object) ]
docs.scoutforpets.com
Web session & page reloading watchdog timer¶ Visionect Software Suite version 2.6 introduced a new page reloading watchdog timer. To prevent the web app from “freezing”, users can now turn on a page reload time-out for each device session in the Management Interface. If the app freezes, it will simply be reloaded after the set time-out, making the client device usable again. A working app can reset the watchdog timer before it would time out. The app can start the timer (if it is not already running), can check the timeout value which was used to start the timer, and can stop the timer. - Start or restart the watchdog timer: - The function: okular.ReloadTimeout(seconds) starts up the timer, while stopping any old timers that might have already been running. The function accepts the timeout in seconds as a parameter. - The function: okular.GetReloadTimeout() checks the timeout value used when the timer started. The function returns null if the timer is disabled, or returns the timeout value in seconds used when the timer started. - Stop the watchdog timer: - The function: okular.ReloadTimeoutStop() stops the watchdog timer.
https://docs.visionect.com/AppDevelopment/webSessionAndPageReload.html
2020-01-17T16:18:33
CC-MAIN-2020-05
1579250589861.0
[]
docs.visionect.com
We are constantly updating, improving, and adding new features to Smart Online Order. Please check back soon for the latest update. Here are just some features we have planned for future release 1.) Option to store customers credit card data for faster checkout 2.) Create Mobile Apps to be downloaded from Android and iPhone App Store Please check back often for latest announcements
http://docs.smartonlineorder.com/docs/online-orders/upcoming-releases/
2017-10-17T01:41:42
CC-MAIN-2017-43
1508187820556.7
[]
docs.smartonlineorder.com
Data Binding Overview Since Q3 2013 the RadTileList control can be data bound in order to generate its tiles according to information from a datasource. Since Q2 2014 the Telerik TileList can be data bound client-side using simple JS data source or using the RadClientDataSource control. You can find more information and explanations about this in the Client-side Data Binding article. The datasource itself is an IEnumerable collection that contains the appropriate fields designated in the control's properties. The Supported Data Sources article offers more information on the subject. Generally, tiles are generated in the order, in which the information is received (e.g. row by row from the datatable), and thus the data is responsible for the layout. The developer can, however, take steps to prepare the desired Groups order beforehand as explained in the Defining Structure article. This article contains the following sections: Databinding Basics Here is an example of a databound RadTileList. Note how all databinding settings are contained in the inner <DataBindings> tag: <telerik:RadTileList <DataBindings> <CommonTileBinding TileType="RadImageTile" DataTitleTextField="UnitPrice" DataNameField="ProductName" DataGroupNameField="CategoryNames" DataNavigateUrlField="ProductUrls" /> <TextTileBinding DataTextField="TextDescription" /> <TilePeekTemplate> Some text that is present in all tiles <div class="productNamePeek"><%# DataBinder.Eval(Container.DataItem, "ProductName") %></div> </TilePeekTemplate> </DataBindings> </telerik:RadTileList> The CommonTileBinding section offers the properties that affect all tiles and are common for all tile types. There are three fallback properties that determine the most basic settings in case information from the datasource is missing for the TileType, Target and Shape. Each specific tile type has its own inner tag that exposes specific properties for the given type, e.g. the TextTileBinding tag, responsible for a RadTextTile offers the DataTextField property which indicates the field from the datasource with the text for the tile. The TilePeekTemplate inner tag offers the template that will be used for the Peek Template of each tile. Fields from the datasource can be evaluated here to add content specific for each tile, as well as content, common for all tiles (like the main HTML element wrapper that will provide dimensions, padding, fonts, etc.). The AppendDataBoundItems property determines whether any existing tiles (from the markup or created programmatically) will be cleared when the control is databound. The tiles generated from the databinding will be added to the existing tiles if this property is set to true. The default value is false, so any existing tiles will be removed upon databinding. DataBinding Properties Common tile databinding properties ContentTemplateTileBinding specific databinding properties RadIconTile specific databinding properties RadImageAndTextTile specific databinding properties RadImageTile specific databinding properties RadLiveTile specific databinding properties RadTextTile specific databinding properties PeekTemplate Databinding The PeekTemplate of the tiles can be defined by using the <TilePeekTemplate> (for server-side data binding) or the <ClientTilePeekTemplate> (for Client-side Data Binding) inner tag of the <DataBindings> tag. An arbitrary HTML string can be defined there that will be used in all tiles. This can be used to create a common wrapper with dimensions,paddings, font sizes, backgrounds and other common visual settings, as well as common content for all tiles. Databinding expressions can be evaluated there so that content from various fields in the datasource can be used there so that it is specific for each tile. Here is a simple example of binding the peek template: " /> <TilePeekTemplate> <div class="<%# DataBinder.Eval(Container.DataItem, "tileShapes") %>peekContainer"> Some text that is present in all databound tiles peek templates <%# DataBinder.Eval(Container.DataItem, "ProductName") %> </div> </TilePeekTemplate> </DataBindings> </telerik:RadTileList> RadContentTemplateTile ContentTemplate Databinding The ContentTemplate of a RadContentTemplateTile can be bound in a way similar to the PeekTemplate.Since this is specific to that tile type the ContentTemplate inner tag is available under the ContentTemplateTileBinding tag in DataBindings. When it comes to client-side data binding, the ClientContentTemplate inner tag can be used to define its layout, as described in the Client-side Data Binding article. In the template the developer can add arbitrary HTML and databinding expressions that will be used for all databound RadContentTemplateTiles. For example: " /> <ContentTemplateTileBinding> <ContentTemplate> <div class="<%# DataBinder.Eval(Container.DataItem, "tileShapes") %>contentContainer"> Some text that is present in all databound tiles' ContentTemplates <%# DataBinder.Eval(Container.DataItem, "ProductName") %> </div> </ContentTemplate> </ContentTemplateTileBinding> </DataBindings> </telerik:RadTileList>
https://docs.telerik.com/devtools/aspnet-ajax/controls/tilelist/data-binding/overview
2017-10-17T02:08:03
CC-MAIN-2017-43
1508187820556.7
[]
docs.telerik.com
System. Identity Model. Selectors Namespace Contains classes that implement authentication in the Windows Communication Foundation (WCF) claims-based identity model..
https://docs.microsoft.com/en-au/dotnet/api/system.identitymodel.selectors?view=netframework-4.8
2021-02-24T21:36:09
CC-MAIN-2021-10
1614178347321.0
[]
docs.microsoft.com
Microsoft + U.S. Partners = Good News The Worldwide Partner Conference is flying by. Partners have told me they are especially excited to participate in a Worldwide Partner Conference that’s more dynamic than ever. I’ve talked with dozens of partners attending the conference and I’ve been thrilled to hear how excited they are about Microsoft’s products, the great speakers, the networking opportunities, and the chance to share what they’ve learned with customers when they return home. Watching the connections grow between Microsoft employees and our partners, our two biggest assets, is supremely satisfying. There has been one consistent observation throughout my week in D.C. – the strengthening of relationships to the betterment of all attendees. That translates into an improved top to bottom experience for our customers; which is the ultimate goal. The major theme of this week, and a reflection of our biggest corporate imperative, is cloud computing. You’ve heard it already, “We are all in” and the lineup of products we highlighted this week offers a comprehensive approach to cloud computing that is amongst the strongest in the industry. Our strategy and approach to this IT transformation offers Microsoft partners the ability to deliver game-changing solutions to all customer types. Partners tell me they loved how Tuesday’s Cloud Day in the U.S. Lounge – and all the corresponding cloud events and activities – clarified the enormous value to partners. If you did not catch the other cloud events, don’t miss the “Partner Business Transformation: How to Ready Your Business for Cloud Services” session from 11:15 a.m. to 12:15 p.m. today. We started the conference with the Day of Giving, making it easy for volunteers to make a local impact – either through painting a mural at a school for homeless children, sewing a quilt for the child of a deployed soldier or donating blood. This day has become a favorite for participants because it is fun, rewarding and impactful. And as if that weren’t enough, yesterday’s guest keynote by President Bill Clinton wowed the audience. He’s a charismatic speaker and everyone I have talked to was inspired to think more broadly about how they can make a positive impact in their own communities. And, there’s more to come. Today is US Track Day and it’s devoted to celebrating U.S. partners. The U.S. product, sales and marketing teams will deliver a full day of content – 9:00 a.m. to 4:00 p.m. – with more than 40 U.S.-focused sessions. Today’s track sessions will take Microsoft’s worldwide strategies and tailor them to you and your business. Posted by Robert Youngjohns President, Microsoft North America Sales & Marketing and SVP
https://docs.microsoft.com/en-us/archive/blogs/microsoft_blog/microsoft-u-s-partners-good-news
2021-02-24T20:57:16
CC-MAIN-2021-10
1614178347321.0
[]
docs.microsoft.com
Support for OpenStack LBaaS¶ - date 2021-01-26 Starting with Release 3.1, TF provides support for the OpenStack Load Balancer as a Service (LBaaS) Version 2.0 APIs in the Liberty release of OpenStack. OpenStack Neutron LBaaS Version 2.0¶ Platform Support¶ Table 1 shows which TF with OpenStack release combinations support which version of OpenStack LBaaS APIs. Table 1: TF OpenStack platform Support for LBaaS Versions Using OpenStack LBaaS Version 2.0¶ The OpenStack LBaaS Version 2.0 extension enables tenants to manage load balancers for VMs, for example, load-balancing client traffic from a network to application services, such as VMs, on the same network. The LBaaS Version 2.0 extension is used to create and manage load balancers, listeners, pools, members of a pool, and health monitors, and to view the status of a resource. For LBaaS v2.0, the TF controller aggregates the configuration by provider. For example, if haproxy is the provider, the controller generates the configuration for haproxy and eliminates the need to send all of the load-balancer resources to the vrouter-agent; only the generated configuration is sent, as part of the service instance. For more information about OpenStack v2.0 APIs, refer to the section LBaaS 2.0 (STABLE) (lbaas, loadbalancers, listeners, health_monitors, pools, members), at. LBaaS v2.0 also allows users to listen to multiple ports for the same virtual IP, by decoupling the virtual IP address from the port. The object model has the following resources: Load balancer—Holds the virtual IP address Listeners—One or many listeners with different ports, protocols, and so on Pools Health monitors Support for Multiple Certificates per Listener¶ Multiple certificates per listener are supported, with OpenStack Barbican as the storage for certificates. OpenStack Barbican is a REST API designed for the secure storage, provisioning, and management of secrets such as passwords, encryption keys, and X.509 certificates. The following is an example CLI to store certificates in Barbican: - barbican --os-identity-api-version 2.0 secret store --payload-content-type='text/plain' --name='certificate' --payload="$(cat server.crt)" For more information about OpenStack Barbican, see:. Neutron Load-Balancer Creation¶ Note The Neutron LBaaS plugin is not supported in OpenStack Train release. From OpenStack Train release, neutron-lbaas is replaced by Octavia. The following is an example of Neutron load-balancer creation: - neutron net-create private-net - neutron subnet-create --name private-subnet private-net 10.30.30.0/24 - neutron lbaas-loadbalancer-create $(neutron subnet-list | awk '/ private-subnet / {print $2}') --name lb1 - neutron lbaas-listener-create --loadbalancer lb1 --protocol-port 443 --protocol TERMINATED_HTTPS --name listener1 --default-tls-container=$(barbican --os-identity-api-version 2.0 container list | awk '/ tls_container / {print $2}') - OpenStack Octavia LBaaS¶ Using Octavia Load-Balancer¶ Tungsten Fabric Release 2005 supports Octavia as LBaaS. The deployment supports RHOSP and Juju platforms. With Octavia as LBaaS, Tungsten Fabric is only maintaining network connectivity and is not involved in any load balancing functions. For each OpenStack load balancer creation, Octavia launches a VM known as amphora VM. The VM starts the HAPROXY when listener is created for the load balancer in OpenStack. Whenever the load balancer gets updated in OpenStack, amphora VM updates the running HAPROXY configuration. The amphora VM is deleted on deleting the load balancer. Tungsten Fabric provides connectivity to amphora VM interfaces. Amphora VM has two interfaces; one for management and the other for data. The management interface is used by the Octavia services for the management communication. Since, Octavia services are running in the underlay network and amphora VM is running in the overlay network, SDN gateway is needed to reach the overlay network. The data interface is used for load balancing the traffic. If the load balancer service is exposed to public, you must create the load balancer VIP in the public subnet. The load balancer members can be in the public or private subnet. You must create network policy between public network and private network if the load balancer members are in the private network. Octavia Load-Balancer Creation¶ The following is an example of Octavia load-balancer creation: openstack loadbalancer listener create --protocol HTTP --protocol-port 80 --name listener1 lb1 openstack loadbalancer show lb1 # Wait for the provisioning_status to be ACTIVE. openstack loadbalancer pool create --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP --name pool1 openstack loadbalancer healthmonitor create --delay 5 --timeout 2 --max-retries 1 --type HTTP pool1 openstack loadbalancer member create --subnet-id private --address 10.10.10.50 --protocol-port 80 pool1 openstack loadbalancer member create --subnet-id private --address 10.10.10.51 --protocol-port 80 pool1 - .. list-table:: Release History Table - header-rows 1 Release Description 2011 Tungsten Fabric Release 2011 supports Octavia as LBaaS.
https://docs.tungsten.io/en/latest/tungsten-fabric-installation-and-upgrade-guide/lbaas-v2-vnc.html
2021-02-24T19:49:10
CC-MAIN-2021-10
1614178347321.0
[]
docs.tungsten.io
Algebra. Numbers Complex numbers To define and operate with complex numbers, you need to enter the imaginary unit using the icon in the section Symbols of the Menu or using the keyboard shortcut Ctrl+J. Note the imaginary number appears in green, otherwise CalcMe is interpreting it as a variable. From now on, you can naturally write complex numbers in binomial form and find their real and imaginary part, their norm, their argument, their polar representation, their conjugate and their inverse using the different available commands. INTERACTIVE DEMO In addition, you can also operate with these numbers, starting with perhaps the simplest operations: addition and subtraction. Note adding or subtracting complex numbers ends up being the same as using plain vectors. INTERACTIVE DEMO However, when multiplying two complex numbers , you have to imagine you are multiplying two polynomials and applying the distributive property. As , the real part of the product will be the product of real parts minus the product of imaginary parts , meanwhile, the imaginary part will be the sum of the cross products . INTERACTIVE DEMO Lastly, when you divide complex numbers, you have to eliminate the complex part of the divisor (the denominator) by multiplying the numerator and denominator by the denominator's conjugate. That reduces the problem to dividing a complex number (i.e., the numerator) by a real value (i.e., the denominator). INTERACTIVE DEMO Elements of linear algebra Vectors and matrices To get started with elements of linear algebra, you first need to see how vectors and arrays are defined in CalcMe. You can define vectors in three different ways: by using the vector command, using the icon you can find in the Linear Algebra section of the Menu, or by typing it manually. The last two are the most common. You can also define vectors as variables, perform basic operations between them, and access specific elements. INTERACTIVE DEMO If you want to see how to create vectors using the command, see its dedicated page. On the other hand, if you want to define a matrix, you have only two options, but they are more than enough. You can use the icon next to the vector in Linear Algebra or enter it manually, as a vector of vectors. Additionally, once the matrix is created, you can resize it by inserting or removing rows and columns with more icons you can find in the Menu. Finally, as with vectors, you can also perform basic operations between matrices and access their elements. INTERACTIVE DEMO Operations with matrices Apart from the basic operations already seen as the sum of matrices or the product of a matrix and a scalar, CalcMe allows you to perform a wide range of actions given a set of matrices. First, the product between matrices. INTERACTIVE DEMO Continuing with classical operations with matrices, you can also find commands in the Linear Algebra section of the Menu that allows you to calculate the determinant, inverse, and transpose of a given matrix. In the same way, you can also easily create an identity matrix of the dimension you want. Finally, you can find the range, the kernel, and the image of a matrix by using the corresponding commands. INTERACTIVE DEMO Change of base Given two bases of a vector space , in our case and , the matrix whose columns correspond to the coordinates of the vectors of in the base , is called the matrix of the change of base from to . CalcMe allows you to find this matrix and, consequently, the vector coordinates of in the base . INTERACTIVE DEMO Equation of a 2-D and 3-D straight line To create a straight line, you need to specify a point on the line and the direction it will follow (i.e., its slope). With these ingredients (though they aren't the only possible ones) and the line command, you can easily create and represent a straight line in the plane. INTERACTIVE DEMO With a similar syntax, you can extend these actions into space and create and render a 3-D straight line. Notice that the line appears as an intersection of two planes. INTERACTIVE DEMO If you want to see the different parameters you can use to create a line, see its dedicated page. Equation of a plane On the other hand, to create a plane, you will need a point and two director vectors. With these ingredients (though they aren't the only possible ones) and the plane command, you can create and render a plane in the space. INTERACTIVE DEMO If you want to see the different parameters you can use to create a plane, see its dedicated page. Systems of linear equations Solution, solution with degrees of freedom To solve linear systems of equations, there are essentially two methodologies: entering the equations manually by separating them using the New Line action (Shift+Enter) or by using the solve command. If you want to assign these solutions to a variable, you need to consider the singular notation to use. In addition, in the case of an indeterminate compatible system, you will be able to see what value the solution takes depending on the dependent variable. INTERACTIVE DEMO Resolving systems like the ones seen above gives you a wide range of options when you have to solve problems with several variables. Given the following diagram showing the data flow (in MB per hour) between six routers () on a network You can find the data flow between each pair of directly linked routers () if you consider the data flow that goes through each of them is the same that comes out of it. You also know the total inbound data flow is 1100 MB (for and ) and equal to the outgoing flow (for and ). INTERACTIVE DEMO Adding a couple of conditions, such as that the flow from to is 200 MB per hour and that the flow from to is 500 MB hour, you can find a single solution for the system of linear equations. INTERACTIVE DEMO Intersection of planes All the linear systems seen above can be interpreted geometrically as a set of planes in the space that will intersect at a point (determinate compatible system), in a straight line (indeterminate compatible system with a degree of freedom), in a plane (indeterminate compatible system with two degrees of freedom) or nowhere (incompatible system). When the rank of the plane coefficients' matrix matches the range of the extended matrix, the planes intersect at one or more points (the corresponding linear system solution set). Also, if this range is the same as the number of unknown variables, that intersection will be a single point. INTERACTIVE DEMO On the other hand, if these ranges coincide but are smaller than the number of unknown variables, the planes will intersect at an infinite number of points. If the system has a degree of freedom, they intersect in a straight line; if there are two, in a plane. INTERACTIVE DEMO This situation may also occur when considering more than two planes. In fact, there are infinities that go through a given line. INTERACTIVE DEMO Lastly, if the planes are parallel (same direction vectors but different points), there will be no intersection point corresponding to the incompatible system. INTERACTIVE DEMO Similarly, given two parallel planes, any other plane we add to the situation may intersect (or not) with the initial planes but never get the system to have a solution. INTERACTIVE DEMO Linear maps Given an endomorphism and its associated matrix , you can use the commands image and kernel to find the set of vectors that are the image of any of the initial vectors () and the set of vectors whose image for is 0 (). INTERACTIVE DEMO Alternatively, you can also find the eigenvalues and the corresponding eigenvectors of the endomorphism using the commands eigenvalues and eigenvectors: As you know, is verified. INTERACTIVE DEMO In the same way, you can calculate these values and eigenvectors from the characteristic polynomial. Once calculated, you can use it to find the diagonal matrix mapping on the base of your own vectors. INTERACTIVE DEMO One of the applications of this decomposition is to find high-grade powers of the initial matrix. As , if you want to calculate you just have to raise the diagonal matrix (i.e., the eigenvalues) to the nth power and multiply by the matrix and the matrix . INTERACTIVE DEMO Geometric transformations Translation Translating is a transformation that moves objects without causing them to deform since each point of the object is moved in the same direction and at the same distance. To define it, set the translation distance (both on the axis and the axis). INTERACTIVE DEMO In fact, you can apply this transformation to more complex objects than points. INTERACTIVE DEMO Rotation If you apply a rotation to a point , its position changes following a circular trajectory in the plane. In order to define it, you must set the rotation angle and the pivot or rotation point. If you don't indicate it, CalcMe will interpret the pivot as the origin. INTERACTIVE DEMO As you have seen before, you can apply this transformation to more complex objects than points. INTERACTIVE DEMO Or specifying another rotation point different from the origin. INTERACTIVE DEMO Scaling Scaling a point using a fixed point , implies multiplying by some factors the horizontal and vertical distances between and . If you don't indicate it, CalcMe will interpret that the point is the origin. INTERACTIVE DEMO As you have seen before, you can apply this transformation to a more complex object. INTERACTIVE DEMO Or by specifying another fixed point other than the origin. In this case, you will need to start by applying a translation so that the fixed point matches the coordinate origin. Once you've made the move, you have to apply the scaling to finally undo the translation. INTERACTIVE DEMO Previous: Basic mathematics instructionsNext: Instructions for mathematical analysis Table of Contents Numbers Elements of linear algebra Systems of linear equations Linear maps Geometric transformations
https://docs.wiris.com/en/calc/basic_guide_uoc/algebra
2021-02-24T21:04:41
CC-MAIN-2021-10
1614178347321.0
[]
docs.wiris.com
Integrating Aristotle-MDR with a Django project¶ Note: this guide relies on some experience with Python and Django. For new users looking at getting a site up and running look at the Easy installer documentation. The first step is starting a project as described in the Django tutorial. Once this is done, follow the steps below to setup Aristotle-MDR. Add “aristotle_mdr” to your INSTALLED_APPS setting like this: INSTALLED_APPS = ( ... 'haystack', 'aristotle_mdr', ... ) To ensure that search indexing works properly haystack must be installed before aristotle_mdr. If you want to take advantage of Aristotle’s WCAG-2.0 access-key shortcut improvements for the admin interface, make sure it is installed before the django admin app. Include the Aristotle-MDR URLconf in your project urls.py. Because Aristotle will form the majority of the interactions with the site, as well as including a number of URLconfs for supporting apps its recommended to included it at the server root, like this: url(r'^/', include('aristotle_mdr.urls')), Create the database for the metadata registry using the Django migrate command: python manage.py migrate Start the development server with python manage.py runserverand visit to see the home page. For a complete example of how to successfully include Aristotle, see the example_mdr directory.
http://aristotle-metadata-registry.readthedocs.io/en/latest/installing/integrate_with_django_project.html
2018-07-15T22:44:15
CC-MAIN-2018-30
1531676589022.38
[]
aristotle-metadata-registry.readthedocs.io
ListQueueTags List all cost allocation tags added, file a technical support request. For a full list of tag restrictions, see Limits Related to Queues in the Amazon Simple Queue Service Developer Guide. Note Cross-account permissions don't apply to this action. For more information, see see Grant Cross-Account Permissions to a Role and a User Name in the Amazon Simple Queue Service Developer Guide. Request Parameters For information about the parameters that are common to all actions, see Common Parameters. - QueueUrl The URL of the queue. Type: String Required: Yes Response Elements The following element is returned by the service. - Tag - Tag.N.Key (key) - Tag.N.Value (value) The list of all tags added to the specified queue. Type: String to string map Errors For information about the errors that are common to all actions, see Common Errors. Example Sample Request ?Action=ListQueueTags &Expires=2020-10-18T22%3A52%3A43PST &Version=2012-11-05 &AUTHPARAMS Sample Response <ListQueueTagsResponse> <ListQueueTagsResult> <Tag> <Key>QueueType</Key> <Value>Production</Value> </Tag> <Tag> <Key>Owner</Key> <Value>Developer123</Value> </Tag> </ListQueueTagsResult> <ResponseMetadata> <RequestId>a1b2c3d4-e567-8901-23f4-g5678901hi23</RequestId> </ResponseMetadata> </ListQueueTagsResponse> See Also For more information about using this API in one of the language-specific AWS SDKs, see the following:
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_ListQueueTags.html
2018-07-15T23:19:26
CC-MAIN-2018-30
1531676589022.38
[]
docs.aws.amazon.com
Renaming the Drawings Although renaming your drawings is not mandatory, it can prove useful in maintaining a clear Network structure for your project. If you leave your drawing as is and do not rename it, your deformation subgroups will be automatically named according to the drawing numbering. If you plan to have several drawings using the same rig within an element, for instance, drawing substitution, then you should rename these extra drawings before starting your rig, or you will find that extra unused subgroups will be created automatically.
https://docs.toonboom.com/help/harmony-11/workflow-standalone/Content/_CORE/_Workflow/023_Deformation/059e_H2_Renaming_the_Drawings.html
2018-07-15T23:17:58
CC-MAIN-2018-30
1531676589022.38
[array(['../../../Resources/Images/_ICONS/Home_Icon.png', None], dtype=object) array(['../../../Resources/Images/HAR/_Skins/stage.png', None], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/_Skins/draw.png', 'Toon Boom Harmony 11 Draw Online Documentation'], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/_Skins/sketch.png', None], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/_Skins/controlcenter.png', 'Installation and Control Center Online Documentation Installation and Control Center Online Documentation'], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/_Skins/scan.png', None], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/_Skins/stagePaint.png', None], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/_Skins/stagePlay.png', None], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/_Skins/stageXsheet.png', None], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/Stage/Deformation/Step/HAR9_04_Mouth_003.png', None], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/Stage/Deformation/Step/HAR11_rabbit_feet.png', None], dtype=object) array(['../../../Resources/Images/HAR/Stage/Deformation/Step/HAR_04_Renaming_foot.png', None], dtype=object) array(['../../../Resources/Images/HAR/Stage/Deformation/Step/HAR9_04_Mouth_001.png', None], dtype=object) array(['../../../Resources/Images/HAR/Stage/Deformation/Step/HAR9_04_Mouth_001b.png', None], dtype=object) ]
docs.toonboom.com
UPC-E1 Short Description UPC-E is a kind of UPC-A, which allows a more compact bar code by eliminating "extra" zeros. Since the resulting UPC-E bar code is about half the size of the UPC-A bar code, UPC-E is generally used on products with a very small packaging where a full UPC-A bar code does not fit. The UPC-E1 is a variation of UPC-E code with the number system set to "1". In the human readable string of the bar code the first digit signifies the number system (always 1 for this code type), the last digit is the check digit of the original UPC-A code. In the example below, the original UPC-A code is "14210000526". We should remove the leading "1" when assigning the string to the control's property, since the code format itself implies its presence. The checksum digit (1) is calculated automatically, and the symbology algorithm transforms the rest of the numeral string. The result is 425261, and it is encoded along with the number system prefix and the check digit into the scanner-readable form. Not every UPC-A code can be transformed into the UPC-E1. It must meet special requirements, and you may refer to UPC-E Symbology page for more information. Since the number system "1" is not used in regular UPC-A codes, the UPC-E1 symbology use is uncommon. Bar Code Properties The type of a bar code control's Symbology property is UPCE1Generator. The are no properties specific to the UPC-E1 bar code type. Examples The following code creates a UPC-E1 bar code and specifies its main properties. Imports System Imports System.Collections.Generic Imports System.Drawing.Printing Imports System.Windows.Forms Imports DevExpress.XtraPrinting.BarCode Imports DevExpress.XtraReports.UI ' ... Public Function CreateUPCE1BarCode(ByVal BarCodeText As String) As XRBarCode ' Create a bar code control. Dim barCode As New XRBarCode() ' Set the bar code's type to UPC-E1. barCode.Symbology = New UPCE1Generator() ' Adjust the bar code's main properties. barCode.Text = BarCodeText barCode.Width = 400 barCode.Height = 100 Return barCode End Function To add the XRBarCode to a report band, handle the report's XRControl.BeforePrint event.
https://docs.devexpress.com/XtraReports/11936/detailed-guide-to-devexpress-reporting/using-report-controls/using-bar-codes/upc-e1
2018-07-15T23:04:10
CC-MAIN-2018-30
1531676589022.38
[array(['/XtraReports/images/barcode-upc-e15694.png', 'Barcode - UPC-E1'], dtype=object) ]
docs.devexpress.com