Dataset columns: content (string, 0 to 557k chars), url (string, 16 to 1.78k chars), timestamp (timestamp[ms]), dump (string, 9 to 15 chars), segment (string, 13 to 17 chars), image_urls (string, 2 to 55.5k chars), netloc (string, 7 to 77 chars).
Setting up your device Below you'll find four different ways to flash your device with Windows 10 IoT Core. Based on the chart included in the list of suggested boards for prototyping, follow the appropriate directions. Use the right column to navigate between these different methods of flashing. Important Do not use maker images for commercialization. If you are commercializing a device, you must use a custom FFU for optimal security. Learn more here. Important When the "format this disk" pop-up appears, do not format the disk. We are working on a fix for this issue. Using the IoT Dashboard (Raspberry Pi, MinnowBoard, NXP) Important The latest 64-bit firmware for MinnowBoard Turbot can be found on the MinnowBoard website (skip step 4 of the MinnowBoard site's instructions). Important NXP only supports custom images. If you're looking to flash a custom image, select "Custom" from the OS Build dropdown, follow the instructions here to create a basic image, and follow the rest of the instructions below to finish. Note Dashboard cannot be used to set up the Raspberry Pi 3B+. If you have a 3B+ device, you must use the 3B+ technical preview. Please view the known limitations of the technical preview to determine if this is suitable for your development. Tip We recommend using a high-performance SD card, such as a SanDisk SD card, for increased stability, as well as plugging your device into an external display to see the default application booting up. - Download the Windows 10 IoT Core Dashboard here. - Once downloaded, open the Dashboard, click on set up a new device, and insert an SD card into your computer. - Fill out all of the fields as indicated. - Accept the software license terms and click Download and install. You'll see that Windows 10 IoT Core is now flashing your SD card. Using the IoT Dashboard (DragonBoard 410c) Tip We recommend plugging your device into an external display to see the default application booting up. Important If you're looking to flash a custom image, select "Custom" from the OS Build dropdown, follow the instructions here to create a basic image, and follow the rest of the instructions below to finish. Note If you're running into any audio-related issues with your DragonBoard, we advise that you read through Qualcomm's manual here. - Download the Windows 10 IoT Core Dashboard here. - Once downloaded, open the Dashboard and select "Qualcomm DragonBoard 410c". Then sign in as a Windows Insider. You need to be signed in as an Insider in order to flash the DragonBoard 410c. - Connect the Qualcomm board to the developer machine using a microUSB cable. Flashing with eMMC (for DragonBoard 410c, other Qualcomm devices) - Download and install the DragonBoard Update Tool for your x86 or x64 machine. - Download the Windows 10 IoT Core DragonBoard FFU. - Double-click on the downloaded ISO file and locate the mounted virtual CD drive. This drive will contain an installer file (.msi); double-click on it. This creates a new directory on your PC under C:\Program Files (x86)\Microsoft IoT\FFU\ in which you should see an image file, "flash.ffu". - Ensure your DragonBoard is in download mode by setting the first boot switch on the board to USB Boot, as shown below. Then connect the DragonBoard to the host PC via a microUSB cable, and plug the DragonBoard into a 12V (> 1A) power supply. - Start the DragonBoard Update Tool, which should detect that the DragonBoard is connected to your PC, indicated by a green circle.
"Browse" to the DragonBoard's FFU that you downloaded, then click the Program button. - Click "Browse" again and select "rawprogram0.xml" that was generated in step 5. Then click the "Program" button. - Once the download is complete, disconnect the power supply and microUSB cable from the board and toggle the USB Boot switch back to OFF. Connect a HDMI display, a mouse, and a keyboard to the DragonBoard and rec-onnect the power supply. After a few minutes, you should see the Windows 10 IoT Core default application. Note Make sure the device is now booting from the eMMC memory by entering the BIOS setup again and switching the Boot Drive order to load from the Hard Drive instead of from the USB Drive. Flashing with eMMC (for Up Squared, other Intel devices) - Download and install the Windows Assessment and Deployment kit with the correlating version of Windows 10 you're running. - Insert a USB drive into your machine. - Create a USB-bootable WinPE image: - Start the Deployment and Imaging Tools Environment (C:\Program Files (x86)\Windows Kits\10\Assessment and Deployment Kit\Deployment Tools)as an administrator. - Create a working copy of the Windows PE files. Specify either x86, amd64 or ARM: Copype amd64 C:\WINPE_amd64 - Install Windows PE to the USB flash drive, specifying the WinPE drive letter below. More information can be found here. MMakeWinPEMedia /UFD C:\WinPE_amd64 P: - Download the Windows 10 IoT Core image by double-clicking on the downloaded ISO file and locating the mounted Virtual CD-drive. - This drive will contain an install file (.msi); double click it. This will create a new directory on your PC under C:\Program Files (x86)\Microsoft IoT\FFU\ in which you shoul d see an image file, "flash.ffu". - Download, unzip and copy the eMMC Installer script to the USB device's root directory, along with the device's FFU. - Connect the USB drive, mouse, and keyboard to the USB hub. Attach the HDMI display to your device, the device to the USB hub, and the power cord to the device. - Go to the BIOS setup of the device. Select Windows as the Operating system and set the device to boot from your uSB drive. When the system reboots, you will see the WinPE command prompt. Switch the WinPE prompt to the USB Drive. This is usually C: or D: but you may need to try other driver letters. - Run the eMMC Installer script, which will install the Windows 10 IoT Core image to the device's eMMC memory. When it completes, press any key and run wpeutil reboot. The system should boot into Windows 10 IoT Core, start the configuration process, and load the default application. Note Make sure the device is now booting from the eMMC memory by entering the BIOS setup again and switching the Boot Drive order to load from the Hard Drive instead of from the USB Drive. Connecting. Connecting to Windows Device Portal Use the Windows Device Portal to connect your device through a web browser. The device portal makes valuable configuration and device management capabilities available. Feedback We’d love to hear your thoughts. Choose the type you’d like to provide: Our feedback system is built on GitHub Issues. Read more on our blog.
https://docs.microsoft.com/en-gb/windows/iot-core/tutorials/quickstarter/devicesetup
2019-02-16T05:38:13
CC-MAIN-2019-09
1550247479885.8
[array(['../../media/devicesetup/dashboard-screenshot.jpg', 'Dashboard screenshot'], dtype=object) array(['../../media/devicesetup/db4.png', 'DragonBoard in flash mode'], dtype=object) array(['../../media/devicesetup/db1.png', 'DragonBoard in download mode'], dtype=object) ]
docs.microsoft.com
Copy data from MySQL using Azure Data Factory This article outlines how to use the Copy Activity in Azure Data Factory to copy data from a MySQL database. It builds on the copy activity overview article, which presents a general overview of the copy activity. Supported capabilities You can copy data from a MySQL database to any supported sink data store. For a list of data stores that are supported as sources/sinks by the copy activity, see the Supported data stores table. Specifically, this MySQL connector supports MySQL versions 5.6 and 5.7. Prerequisites If your MySQL database is not publicly accessible, you need to set up a Self-hosted Integration Runtime. To learn about Self-hosted Integration Runtimes, see the Self-hosted Integration Runtime article. The Integration Runtime provides a built-in MySQL driver starting from version 3.7, so you don't need to manually install any driver. For Self-hosted IR versions lower than 3.7, you need to install MySQL Connector/Net for Microsoft Windows (a version between 6.6.5 and 6.10.7) on the Integration Runtime machine. This 32-bit driver is compatible with the 64-bit IR. The following sections provide details about the properties used to define entities specific to the MySQL connector. Linked service properties The following properties are supported for the MySQL linked service. A typical connection string is Server=<server>;Port=<port>;Database=<database>;UID=<username>;PWD=<password>. More properties can be set per your case. Example:
{
    "name": "MySQLLinkedService",
    "properties": {
        "type": "MySql",
        "typeProperties": {
            "connectionString": {
                "type": "SecureString",
                "value": "Server=<server>;Port=<port>;Database=<database>;UID=<username>;PWD=<password>"
            }
        },
        "connectVia": {
            "referenceName": "<name of Integration Runtime>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
Example: store password in Azure Key Vault
{
    "name": "MySQLLinkedService",
    "properties": {
        "type": "MySql",
        "typeProperties": {
            "connectionString": {
                "type": "SecureString",
                "value": "Server=<server>;Port=<port>;Database=<database>;UID=<username>;"
            },
            "password": {
                "type": "AzureKeyVaultSecret",
                "store": {
                    "referenceName": "<Azure Key Vault linked service name>",
                    "type": "LinkedServiceReference"
                },
                "secretName": "<secretName>"
            }
        },
        "connectVia": {
            "referenceName": "<name of Integration Runtime>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
If you were using a MySQL linked service with the following payload, it is still supported as-is, but you are encouraged to use the new format going forward. Previous payload:
{
    "name": "MySQLLinkedService",
    "properties": {
        "type": "MySql",
        ...
MySQL dataset. To copy data from MySQL, set the type property of the dataset to RelationalTable. The following properties are supported: Example
{
    "name": "MySQLDataset",
    "properties": {
        "type": "RelationalTable",
        "linkedServiceName": {
            "referenceName": "<MySQL linked service name>",
            "type": "LinkedServiceReference"
        },
        "typeProperties": {}
    }
}
Copy activity properties For a full list of sections and properties available for defining activities, see the Pipelines article. This section provides a list of properties supported by the MySQL source. MySQL as source To copy data from MySQL, set the source type in the copy activity to RelationalSource.
The following properties are supported in the copy activity source section: Example:
"activities":[
    {
        "name": "CopyFromMySQL",
        "type": "Copy",
        "inputs": [
            {
                "referenceName": "<MySQL input dataset name>",
                "type": "DatasetReference"
            }
        ],
        "outputs": [
            {
                "referenceName": "<output dataset name>",
                "type": "DatasetReference"
            }
        ],
        "typeProperties": {
            "source": {
                "type": "RelationalSource",
                "query": "SELECT * FROM MyTable"
            },
            "sink": {
                "type": "<sink type>"
            }
        }
    }
]
Data type mapping for MySQL When copying data from MySQL, the following mappings are used from MySQL data types to Azure Data Factory interim data types. See Schema and data type mappings to learn about how copy activity maps the source schema and data type to the sink.
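As an illustration of the query property shown above (the column names and date filter here are illustrative placeholders, not part of the official sample), the source section could narrow the copy to a subset of rows:
"source": {
    "type": "RelationalSource",
    "query": "SELECT id, name, last_modified FROM MyTable WHERE last_modified >= '2019-01-01'"
}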
https://docs.microsoft.com/en-us/azure/data-factory/connector-mysql
2019-02-16T05:41:34
CC-MAIN-2019-09
1550247479885.8
[]
docs.microsoft.com
apiVersion: "v1"
kind: "BuildConfig"
metadata:
  name: "sample-build"
spec:
  resources:
    limits:
      cpu: "100m" (1)
      memory: "256Mi" (2)
A resources section set with an explicit requests:
resources:
  requests: (1)
    cpu: "100m"
    memory: "256Mi"
A limit range defined in your project, where the defaults from the LimitRange object apply to pods created during the build process. Otherwise, build pod creation will fail, citing a failure to satisfy quota. The following example shows the part of a BuildConfig specifying the completionDeadlineSeconds field for 30 minutes:
spec:
  completionDeadlineSeconds: 1800
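To see these settings in action (assuming the oc client is logged in to the project containing this BuildConfig), you can kick off a build of the sample-build configuration and follow its logs:
oc start-build sample-build --follow
If the requested resources cannot be satisfied by the project's quota or limit range, build pod creation fails as described above.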
https://docs.okd.io/3.6/dev_guide/builds/advanced_build_operations.html
2019-02-16T05:40:25
CC-MAIN-2019-09
1550247479885.8
[]
docs.okd.io
Configure ability to copy a change request You can configure the ability to copy a change request record and its details using system properties. Before you begin: Role required: admin. About this task: You can configure the following functionality. Disable the ability to copy a change request. Disable the ability to copy attachments. Determine the components of the source change request that are copied. Procedure Navigate to Change > Administration > Change Properties. Some properties are found by entering sys_properties.list in the application navigator, as noted. Set the following properties as desired. Disable the ability to copy a change request: Set Enable Copy Change feature (com.snc.change_request.enable_copy) to false. Disable the ability to copy an attachment: Set Copy attachments from originating change (com.snc.change_request.attach.enable_copy) to false. Disable the ability to copy the attachments from the change task: This system property is located in the [sys_properties] table. Set the Enable copying of attachments from the originating change's related change tasks (com.snc.change_request.rl.change_task.attach.enable_copy) system property to false. Note: If the ability to copy attachments is enabled, the attachment appears on the copy of the change request only after it is saved. Configure attributes to be copied: Edit the list of values in List of attributes (comma-separated) that will be copied from the originating change (com.snc.change_request.copy.attributes) to remove or add attributes. For example, to prevent the Assigned to attribute from being copied, remove the assigned_to value from the list of attributes in the property text box (a worked example follows this procedure). Configure related lists to be copied: This system property is located in the [sys_properties] table. The following related lists are copied by default: Affected CIs, Impacted Services/CIs, Change Tasks. Edit the list of values in Related lists (comma-separated) that will be copied from the originating change (com.snc.change_request.copy.related_lists). For example, to stop copying the Change Tasks related list, remove the change_task value from the list of related lists in the property text box. Note: You can configure this property to control the copy functionality of the Affected CIs, Impacted Services/CIs, and Change Tasks related lists. You cannot add any other related list to this property. Configure attributes of the default related lists to be copied: These system properties are located in the [sys_properties] table. Navigate to the appropriate system property for one of the default related lists to configure the attributes to copy. Table 1. System properties for related list attributes: Change Tasks - com.snc.change_request.copy.rl.change_task.attributes; Affected CIs - com.snc.change_request.copy.rl.task_ci.attributes; Impacted Services/CIs - com.snc.change_request.copy.rl.task_cmdb_ci_service.attributes
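As a concrete illustration of editing com.snc.change_request.copy.attributes (the attribute names listed here are only an example, not the property's actual default value): if the property contained cmdb_ci,short_description,assignment_group,assigned_to and you delete assigned_to from the comma-separated list, copies of a change request will still carry over the configuration item, short description, and assignment group, but will start with an empty Assigned to field.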
https://docs.servicenow.com/bundle/istanbul-it-service-management/page/product/change-management/task/configure-copy-change-request.html
2019-02-16T05:55:47
CC-MAIN-2019-09
1550247479885.8
[]
docs.servicenow.com
QUILC Man Page¶ DESCRIPTION¶ The Rigetti Quil compiler, quilc, is an optimizing compiler for Quil. It takes a general Quil program along with a qubit architecture, called an ISA, and produces another Quil program that is executable on that architecture. The compiler will also attempt to optimize the program by producing fewer gates and shorter gate depths. The compiler may either be run as a server which takes requests (as is used with PyQuil), or it may be run as a batch program reading from standard input. Server Mode runs the compiler as an HTTP server, taking simple POST requests with JSON payloads which are known to the companion library pyQuil. OPTIONS¶ -S, --server - (Server Mode) Run the compiler in Server Mode. This starts an HTTP server. -?, -h, --help - Show the help message. -v, --version - Show the version. --verbose - Print what the compiler is thinking. (Warning: It thinks a lot.) --isa <string> - Compile for the qubit architecture defined by <string>, which can be either 8Q, 20Q, 16QMUX, or a path to a QPU description file. -p, --protoquil - Prescribe that the input and output must be ProtoQuil, which is Quil that is comprised of gates and measurements, with no control flow. --port <port> - (Server Mode) Run quilc in server mode on port <port>. -d, --compute-gate-depth - Print a calculated gate depth for the provided circuit as an appended Quil comment. (Requires -p.) -2, --compute-2Q-gate-depth - Print a calculated multiqubit gate depth for the provided circuit as an appended Quil comment. (Requires -p. Ignores the blacklist and whitelist.) --compute-gate-volume - Print a calculated gate volume for the provided circuit as an appended Quil comment. (Requires -p.) -r, --compute-runtime - Print a calculated estimated runtime for the provided circuit as an appended Quil comment. (Requires -p.) -f, --compute-fidelity - Print a calculated estimated compiled circuit fidelity for the provided circuit as an appended Quil comment. (Requires -p.) -u, --compute-unused-qubits - Print a list of unused qubits as an appended Quil comment. (Requires -p.) -t, --show-topological-overhead - Print the number of SWAP gates incurred for topological reasons for the provided circuit as an appended Quil comment. (Requires -p.) --gate-blacklist <gate-list> - When calculating statistics, ignore the gates present in the comma-separated list of names of <gate-list>. --gate-whitelist <gate-list> - When calculating statistics, consider only the gates present in the comma-separated list of names of <gate-list>. --time-limit <limit-ms> - (Server Mode) Limit the amount of time for a single request to approximately <limit-ms> milliseconds. By default, this value is 0, which indicates an unlimited amount of time is allowed. --without-pretty-printing - Disable pretty printing of numerical quantities (e.g., multiples of pi) in compiled output. --prefer-gate-ladders - Use gate ladders, instead of the SWAP gate, to implement long-ranged gates, when possible. -j, --json-serialize - Serialize the output of compilation as a JSON object. -s, --print-logical-schedule - Include the logically parallelized schedule in JSON output. (Requires -p.) -m, --compute-matrix-reps - Print the matrix representation of a compiled ProtoQuil program. Additionally, verify that this matrix matches the matrix representation of the input program. (Requires -p. Note that this is a very expensive operation.) --enable-state-prep-reductions - Perform program optimizations by assuming that the quantum state starts in the zero state. 
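For instance, the options above can be combined as follows (the port number is arbitrary): quilc -S --port 6000 runs the compiler in Server Mode listening on port 6000, and quilc --isa "8Q" --protoquil --compute-gate-depth < file.quil compiles a Quil file for the eight-qubit ring as ProtoQuil and appends the calculated gate depth as a Quil comment.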
EXAMPLES¶ quilc --isa "8Q" < file.quil - Compile a Quil file (printing the result to stdout) for an eight qubit ring. SUPPORT¶ Contact <[email protected]>.
https://pyquil.readthedocs.io/en/v2.2.1/quilc-man.html
2019-02-16T05:41:39
CC-MAIN-2019-09
1550247479885.8
[]
pyquil.readthedocs.io
Making an Amazon EBS Volume Available for Use After you attach an Amazon EBS volume to your instance, it is exposed as a block device. Note that you can take snapshots of your EBS volume for backup purposes or to use as a baseline when you create another volume. For more information, see Amazon EBS Snapshots. Making the Volume Available on Linux Use the following procedure to make the volume available. Note that you can get directions for volumes on a Windows instance from Making the Volume Available on Windows in the Amazon EC2 User Guide for Windows Instances. To make an EBS volume available for use on Linux Connect to your instance using SSH. For more information, see Step 2: Connect to Your Instance. Depending on the block device driver of the kernel, the device might be attached with a different name than what you specify. For example, if you specify a device name of /dev/sdh, your device might be renamed /dev/xvdh or /dev/hdh by the kernel; in most cases, the trailing letter remains the same. In some versions of Red Hat Enterprise Linux (and its variants, such as CentOS), even the trailing letter might change (where /dev/sda could become /dev/xvde). In these cases, each device name trailing letter is incremented the same number of times. For example, /dev/sdb would become /dev/xvdf and /dev/sdc would become /dev/xvdg. Amazon Linux AMIs create a symbolic link with the name you specify at launch that points to the renamed device path, but other AMIs might behave differently. Use the lsblk command to view your available disk devices and their mount points (if applicable) to help you determine the correct device name to use. [ec2-user ~]$ lsblk In the example output, one device is mounted as the root device (note that its MOUNTPOINT is listed as /, the root of the Linux file system hierarchy), and /dev/xvdf is attached but not yet mounted. Use the sudo file -s device command to list special information, such as the file system type.
[ec2-user ~]$ sudo file -s /dev/xvdf
/dev/xvdf: data
If the output of the previous command shows simply data for the device, then there is no file system on the device and you need to create one. You can go on to Step 4. If you run this command on a device that contains a file system, then your output will be different.
[ec2-user ~]$ sudo file -s /dev/xvda1
/dev/xvda1: Linux rev 1.0 ext4 filesystem data, UUID=1701d228-e1bd-4094-a14c-8c64d6819362 (needs journal recovery) (extents) (large files) (huge files)
In the previous example, the device contains Linux rev 1.0 ext4 filesystem data, so this volume does not need a file system created (you can skip Step 4 if your output shows file system data). (Conditional) Use the following command to create an ext4 file system on the volume. Substitute the device name (such as /dev/xvdf) for device_name. Depending on the requirements of your application or the limitations of your operating system, you can choose a different file system type, such as ext3 or XFS. Caution This step assumes that you're mounting an empty volume. If you're mounting a volume that already has data on it (for example, a volume that was restored from a snapshot), don't use mkfs before mounting the volume (skip to the next step instead). Otherwise, you'll format the volume and delete the existing data. [ec2-user ~]$ sudo mkfs -t ext4 device_name Use the following command to create a mount point directory for the volume. The mount point is where the volume is located in the file system tree and where you read and write files to after you mount the volume. Substitute a location for mount_point, such as /data.
[ec2-user ~]$ sudo mkdir mount_point Use the following command to mount the volume at the location you just created. [ec2-user ~]$ sudo mount device_name mount_point (Optional) To mount this EBS volume on every system reboot, add an entry for the device to the /etc/fstab file. Create a backup of your /etc/fstab file that you can use if you accidentally destroy or delete this file while you are editing it. [ec2-user ~]$ sudo cp /etc/fstab /etc/fstab.orig Open the /etc/fstab file using any text editor, such as nano or vim. Note You need to open the file as root or by using the sudo command. Add a new line to the end of the file for your volume, using the following entries. By using the UUID, you reduce the chances of the block-device mapping in /etc/fstab leaving the system unbootable after a hardware reconfiguration. To find the UUID of a device, first display the available devices: [ec2-user ~]$ df This yields a list such as the following:
Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/xvda1       8123812 1876888   6146676  24% /
devtmpfs          500712      56    500656   1% /dev
tmpfs             509724       0    509724   0% /dev/shm
Next, continuing this example, examine the output of either of two commands to find the UUID of /dev/xvda1: file -s /dev/xvda1 or ls -al /dev/disk/by-uuid/ Assuming that you find /dev/xvda1 to have UUID de9a1ccd-a2dd-44f1-8be8-2d4275cb85a3, you would add the following entry to /etc/fstab to mount an ext4 file system at mount point /data: UUID=de9a1ccd-a2dd-44f1-8be8-2d4275cb85a3 /data ext4 defaults,nofail 0 2 Note If you ever intend to boot your instance without this volume attached (for example, so this volume could move back and forth between different instances), you should add the nofail mount option that allows the instance to boot even if there are errors in mounting the volume. Debian derivatives, such as Ubuntu, must also add the nobootwait mount option. After you've added the new entry to /etc/fstab, you need to check that your entry works. Run the sudo mount -a command to mount all file systems in /etc/fstab. [ec2-user ~]$ sudo mount -a If the previous command does not produce an error, then your /etc/fstab file is OK and your file system will mount automatically at the next boot. If the command does produce any errors, examine the errors and try to correct your /etc/fstab. Warning Errors in the /etc/fstab file can render a system unbootable. Do not shut down a system that has errors in the /etc/fstab file. (Optional) If you are unsure how to correct /etc/fstab errors, you can always restore your backup /etc/fstab file with the following command. [ec2-user ~]$ sudo mv /etc/fstab.orig /etc/fstab Review the file permissions of your new volume mount to make sure that your users and applications can write to the volume. For more information about file permissions, see File security at The Linux Documentation Project.
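As a final sanity check (reusing the mount_point placeholder from the steps above), you can confirm that the volume is mounted and see its size and free space: [ec2-user ~]$ df -h mount_point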
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-using-volumes.html
2016-12-02T19:50:15
CC-MAIN-2016-50
1480698540563.83
[]
docs.aws.amazon.com
. On “>”. argument list racket, racket/class, and drracket “!”..
http://docs.racket-lang.org/drracket/Keyboard_Shortcuts.html?q=keybinding
2014-03-07T09:00:24
CC-MAIN-2014-10
1393999639602
[]
docs.racket-lang.org
What? Well, Aether is the answer. It's a standalone library to resolve, install and deploy artifacts the Maven way. Getting Aether If you would like to take Aether for a test drive, these are the XML bits for your POM to get it onto your class path:
<project>
  ...
  <properties>
    <aetherVersion>1.12</aetherVersion>
    <mavenVersion>3.0.3</mavenVersion>
    <wagonVersion>1.0-beta-7</wagonVersion>
  </properties>
  ...
  <dependencies>
    <dependency>
      <groupId>org.sonatype.aether</groupId>
      <artifactId>aether-api</artifactId>
      <version>${aetherVersion}</version>
    </dependency>
    <dependency>
      <groupId>org.sonatype.aether</groupId>
      <artifactId>aether-util</artifactId>
      <version>${aetherVersion}</version>
    </dependency>
    <dependency>
      <groupId>org.sonatype.aether</groupId>
      <artifactId>aether-impl</artifactId>
      <version>${aetherVersion}</version>
    </dependency>
    <dependency>
      <groupId>org.sonatype.aether</groupId>
      <artifactId>aether-connector-file</artifactId>
      <version>${aetherVersion}</version>
    </dependency>
    <dependency>
      <groupId>org.sonatype.aether</groupId>
      <artifactId>aether-connector-asynchttpclient</artifactId>
      <version>${aetherVersion}</version>
    </dependency>
    <dependency>
      <groupId>org.sonatype.aether</groupId>
      <artifactId>aether-connector-wagon</artifactId>
      <version>${aetherVersion}</version>
    </dependency>
    <dependency>
      <groupId>org.apache.maven</groupId>
      <artifactId>maven-aether-provider</artifactId>
      <version>${mavenVersion}</version>
    </dependency>
    <dependency>
      <groupId>org.apache.maven.wagon</groupId>
      <artifactId>wagon-ssh</artifactId>
      <version>${wagonVersion}</version>
    </dependency>
  </dependencies>
  ...
</project>
Let's have a closer look at these dependencies and what they are used for: - aether-api This JAR contains the application programming interfaces that clients of Aether use. The entry point to the system is org.sonatype.aether.RepositorySystem. - aether-util This JAR collects various utility classes and ready-made components of the repository system. - aether-impl This archive hosts many actual implementation classes of the repository system. Unless one intends to customize internals of the system or needs to manually wire it together, clients should not access any classes from this JAR directly. - aether-connector-file The transport layer that handles the uploads/downloads of artifacts is realized by so-called repository connectors. This particular connector adds support for transfers to and from file: URLs. - aether-connector-asynchttpclient This connector enables access to http: and https: based repositories. - aether-connector-wagon This connector is based on Maven Wagon and can employ any existing Wagon providers. - maven-aether-provider This dependency provides the pieces to employ Maven POMs as artifact descriptors and extract dependency information from them. Furthermore, it provides the handling of the other metadata files used in a Maven repository. - wagon-ssh This dependency adds support for transfers using the scp: and sftp: schemes. Its inclusion in the above POM snippet is merely a suggestion; use whatever Wagon providers fit your needs. Note: Aether targets Java 1.5+ so be sure to set your compiler settings accordingly. Setting Aether Up Aether's implementation consists of a bunch of components that need to be wired together to get a complete repository system. To do so, one can still use a Plexus container (or any other IoC container that supports Plexus components like Sisu).
For instance, assuming you add a dependency on org.codehaus.plexus:plexus-container-default:1.5.5 to your POM, the following code fragment discovers the various Aether components from the thread context class loader and wires the system together:
import org.codehaus.plexus.DefaultPlexusContainer;
import org.sonatype.aether.RepositorySystem;
...
private static RepositorySystem newRepositorySystem() throws Exception {
    return new DefaultPlexusContainer().lookup( RepositorySystem.class );
}
...
To get an instance of the repository system without an IoC container, the classes from the default implementation of Aether provide no-arg constructors and setters to manually wire the components together. Since this isn't really fun to code, the repository system additionally supports wiring via a lightweight service locator pattern. This service locator infrastructure consists of merely two small interfaces, namely org.sonatype.aether.spi.locator.Service and org.sonatype.aether.spi.locator.ServiceLocator. The components themselves implement the Service interface, and the client of the repository system provides to them an implementation of the ServiceLocator to query other components from. The module maven-aether-provider provides a simple service locator whose usage looks like shown below:
import org.apache.maven.repository.internal.DefaultServiceLocator;
import org.sonatype.aether.connector.wagon.WagonProvider;
import org.sonatype.aether.connector.wagon.WagonRepositoryConnectorFactory;
...
private static RepositorySystem newRepositorySystem() {
    DefaultServiceLocator locator = new DefaultServiceLocator();
    ...
}
For the future, it's intended to provide an additional service locator implementation that is capable of discovering components automatically from the class path. Creating a Repository System Session Aether and its components are designed to be stateless, and as such all configuration is supplied when creating the repository system session. If you seek close cooperation with stock Maven and want to read configuration from the user's settings.xml, you should have a look at the library org.apache.maven:maven-settings-builder which provides the necessary bits. Resolving Dependencies Extending the previous code snippets, the snippet below demonstrates how to actually resolve the transitive dependencies of org.apache.maven:maven-profile:2.2.1 in this example. The module aether-demo from the Aether source repository provides a more extensive demonstration of Aether and its use, so feel free to play with that. To learn about how you can control the calculation of transitive dependencies, please see the article about the Introduction. Aether inside Maven Aether is not just another way to do Maven-like dependency resolution, it is the library that Maven 3 actually integrates for all the dependency-related work. So if you are curious how to use Aether from your Maven plugin, check out Using Aether in Maven Plugins. Sep 16, 2010 Anonymous says:Hi, The sniplet to initialize aether without plexus, seems to be wrong. I can't... Hi, The sniplet to initialize aether without plexus, seems to be wrong. I can't locate ManualWagonProvider class (not found in aether-impl-1.4, aether-connector-wagon-1.4, maven-aether-provider). Sep 20, 2010 Luke Patterson says:the snippets are great it would be awesome though to have a sample project that...
the snippets are great it would be awesome though to have a sample project that I can just check out and start messing around with, preferably one that uses SPI component lookup Sep 27, 2010 Baptiste MATHUS says:Hi, ManualWagonProvider:... Hi, ManualWagonProvider: Demo project: Cheers Dec 10, 2010 Anonymous says:The snippets don't compile. DefaultServiceLocator#setServices() expects a... The snippets don't compile. DefaultServiceLocator#setServices() expects an Object array as second argument. When fixing that one, an exception is thrown when running the example:) at be.argenta.AehterMain.newRepositorySystem(AehterMain.java:54) at be.argenta.AehterMain.main(AehterMain.java:20) Exception in thread "main" java.lang.NullPointerException at be.argenta.AehterMain.newSession(AehterMain.java:44) at be.argenta.AehterMain.main(AehterMain.java:22)) Dec 10, 2010 Benjamin Bentmann says:DefaultServiceLocator#setServices() expects an Object array as second argument. ... Actually, the second argument is a varargs so be sure to use Java 1.5+. Be sure to use the right service locator, i.e. org.apache.maven.repository.internal.DefaultServiceLocator. Dec 13, 2010 Anonymous says:Weird, I was using Java 6, got no other jdk installed. Ah yes, I was using ... Weird, I was using Java 6, got no other jdk installed. Ah yes, I was using org.sonatype.aether.impl.internal.DefaultServiceLocator, thanks Dec 12, 2010 Anonymous says:thank you for you valuable post. diagnostic scanner thank you for you valuable post. diagnostic scanner Dec 14, 2010 Anonymous says:Hey Benjamin, I am currently working on a project dealing with m2 repositories,... Hey Benjamin, I am currently working on a project dealing with m2 repositories, and I. Does it simply convert a group id to a file path and hope for the best? Or does maven actually have the mechanism to keep track of where an artifact is stored? I thought of building a dummy framework that looks for artifacts this way, but would hate to have to maintain it if file structure changes in future maven releases. Any help is appreciated. If there is a forum or something else that this question would better be facilitated please point me in the right direction. Thanks! Daniel Johnson Dec 14, 2010 Benjamin Bentmann says:My question is whether Aether (or maybe maven itself) has the framework for loca... That sounds pretty much like what the example snippet in the section "Resolving Dependencies" does. Repositories have a layout or more generally content type as Aether calls it. This is a simple string identifier which basically denotes the convention/structure by which artifacts are organized. Yes. The Aether user list, see the readme at. Dec 20, 2010 Anonymous says:Hi How about makeing your own repository layout? Is that as easy as it was in ... Hi How about makeing your own repository layout? Is that as easy as it was in Maven2? Thanks Lucas Dec 20, 2010 Benjamin Bentmann says:How about makeing your own repository layout? That is accomplished by providing ... That is accomplished by providing your own implementation of org.sonatype.aether.spi.connector.RepositoryConnectorFactory (see aether-spi module). Jan 04, 2011 Anonymous says:When will the Aether Ant Tasks be released? I am currently making extensiv... When will the Aether Ant Tasks be released? I am currently making extensive use of the Maven Ant Tasks which do not support Maven 3. Mar 14, 2011 Yegor Bugayenko says:Would be nice to know how to integrate Aether with slf4j or log4j (or any other ... 
Would be nice to know how to integrate Aether with slf4j or log4j (or any other logging facility). Apr 14, 2011 Anonymous says:I believe there is a typo in the example. The dependency shows <artifa... I believe there is a typo in the example. The dependency shows It should be Jun 01, 2011 Anonymous says:I am currently working on a project dealing with m2 repositories, and Iipad2 Cab... I am currently working on a project dealing with m2 repositories, and Iipad2 Cables. Jun 24, 2011 Anonymous says:Hi, I am using aether to query the maven repository. I need to get the age of an... Hi, I am using aether to query the maven repository. I need to get the age of an artifact. The repository is artifactory and the age indicates time since that artifact was deployed on the repository. I did not see any API in aether to get this...is there anyway I can get this information using the aether APIs? Jun 24, 2011 Benjamin Bentmann says:No, Aether and the Maven repository format in general do not track the upload ti... No, Aether and the Maven repository format in general do not track the upload time of an artifact. Jul 13, 2011 Anonymous says:Hi! I'm working on a tool at work that is using Aether to resolve arti... Hi! I'm working on a tool at work that is using Aether to resolve artifacts from our remote repo, so that we can launch them. We're giving the users a combo box with a list of available versions of software to launch, and we're populating it by using a VersionRangeRequest. Problem is, it seems to be favoring the local metadata cache instead of the remote metadata, and we're not seeing new versions as they are deployed into the remote repo. The user has to blow away their local repo to see new versions. Any gotchas I should know about here? Jul 13, 2011 Benjamin Bentmann says:The local repo is a cache, so what you describe is expected behavior. To enforce... The local repo is a cache, so what you describe is expected behavior. To enforce refetching from the remote repo at the time the VersionRangeRequest is made, you can set the update policy on the RemoteRepository or the repo session to "always". Feb 13, 2013 Anonymous says:I'm having issues with programmatic deployment of artifact(s) to the Nexus repo.... I'm having issues with programmatic deployment of artifact(s) to the Nexus repo. I'm using code that is provided here and in. Exception that I am getting is java.lang.RuntimeException: org.sonatype.aether.deployment.DeploymentException: Failed to deploy artifacts/metadata: No connector available to access repository repoId (_) of type using the available factories WagonRepositoryConnectorFactory, FileRepositoryConnectorFactory... Protocol used to connect with repo is file:// and as per maven documentation it is supported by default, without the need for Wagon connector usage. Please advise what to do. Feb 13, 2013 Benjamin Bentmann says:No connector available to access repository repoId (_) of t... Your RemoteRepository appears to have an empty content type, you want to set "default" for a Maven-2 layout. Feb 13, 2013 Anonymous says:Thanks Benjamin! Issue is solved now. Just one more question if you don't m... Thanks Benjamin! Issue is solved now. Just one more question if you don't mind. Is there a way to mimic "generatePom=true" feature of maven's deploy:deploy-file plugin? Thanks Feb 13, 2013 Benjamin Bentmann says:Is there a way to mimic "generatePom=true" feature of maven's deploy:deploy-file... 
No, but you can easily implement this yourself, the output of "generatePom=true" is not more than a string template with some coordinates inserted. If your artifacts have dependencies on their own, you might however want to consider crafting a proper POM for them, artifacts without proper dependency declarations tend to cause frustration down the road. For any further questions, please use the support infrastructure at Eclipse as indicated at the top of this page. Add Comment
https://docs.sonatype.org/display/AETHER/Home?focusedCommentId=8519778
2014-04-16T10:19:40
CC-MAIN-2014-15
1397609523265.25
[]
docs.sonatype.org
For more information on UMLGraph, see the UMLGraph site and this page. Known Plugins That Provide Reports
- Project-Info-Reports - Standard Reports
- Javadoc - Javadoc
- JXR - Source Cross-Reference
- Surefire-Report - Surefire Test Report
- Changes - Changes & JIRA Report
- Taglist - Taglist Report (For TODO, @deprecated, etc.)
- PMD - PMD Report for Source-code analysis
http://docs.codehaus.org/display/MAVENUSER/Reporting+Plugins
2014-04-16T10:50:56
CC-MAIN-2014-15
1397609523265.25
[]
docs.codehaus.org
Hello Veera You changed the headlines in Customising the Beez template. Are you sure, that the modules headlines are h1? In my standard beez template they're h3 Same for the sub headline in the poll module Joomla! is used for?. It's in my beez template h4 -- Bembelimen 13:26, 18 December 2008 (UTC)
http://docs.joomla.org/index.php?title=User_talk:Veera&oldid=12253
2014-04-16T10:12:45
CC-MAIN-2014-15
1397609523265.25
[]
docs.joomla.org
CloudBees Jenkins Platform 2.190.2.2 RELEASED: Public: 2019-10-29 Based on Jenkins LTS 2.190.2-cb-5 Rolling release New features Added CloudBees Analytics Plugin version 1.2 Added Kubernetes Client API Plugin version 4.6.0-1 Added Trilead API Plugin version 1.0.4 Resolved issues Operations Center Security realm issue (CTR-600) When the authorization strategy on Operations Center was not RBAC (Role Based Access Control), Operations Center's SSO (single sign-on) was not functioning properly, even when the user was granted access to the master. Instead, after creating a team, users were redirected to the Team Master login page. With this fix, Operations Center correctly propagates the security realm to the master even when RBAC is not the authorization strategy.
https://docs.cloudbees.com/docs/release-notes/latest/cloudbees-jenkins-platform/2.190.2.2
2020-10-19T22:07:18
CC-MAIN-2020-45
1603107866404.1
[]
docs.cloudbees.com
Importing a new Spreadsheet Index After exporting data from PRISMS using the "CoE and student data export", you can load the updates and changes into eBECAS. Initially we suggest exporting all data, then loading updates each week. You should load the data in order, oldest period first. You do not need to repeat the data load for past date periods. To load the exported data into eBECAS, go to eBECAS – Main – Utilities – PRISMS Import and select New from the Options to begin a new import. Import File The PRISMS file should be in Excel format, or CSV (text). Ensure the Header on Line and Excel Sheet Name (for Excel imports only) are specified correctly. Generally, the default values should work OK. Click Import Excel File and browse for the file to import. Link Fields Ensure the columns to be imported match a column from the import in the drop-down. For a default import, these should match by default. Click Confirm Links to continue. Process Import Review the Import Notes, showing Filename, Records Loaded, and details of the PRISMS export. Click Process File to import into eBECAS. Once processed, confirm the import. eBECAS will return to the main PRISMS Import form and load the import, ready to update CoE and Visa details. Update loaded temporary file with references and update CoE and Visa details Select an import from the drop-down and click Load Import in the Options to view the file details and update eBECAS. This loads the temporary file with references; details of the import are shown in the grid, along with checkboxes to indicate whether a matching record was found in eBECAS for the Student, Course, Enrolment, CoE and Visa. The data has not yet been updated in the eBECAS database! To update and create CoE and Visa records based on the import, select records to update by checking the Update checkbox (or Select All). Updating the CoE and Visa records involves two separate actions. Click Update CoE records; when this is finished processing, select All again, then select Update Visa.
https://docs.ebecas.com.au/prisms-import/
2020-10-19T20:42:36
CC-MAIN-2020-45
1603107866404.1
[]
docs.ebecas.com.au
Changelog
0.4.0 (2017-06-26) - Added new chaos functionality. When used, blockade will randomly select containers to impair with partitions, slow network, etc. Contributed by John Bresnahan (@buzztroll) of Stardog Union. - Added an event trail that logs all blockade events that are run against a blockade over its lifetime. This can be helpful in correlating blockade events to application errors. Contributed by John Bresnahan (@buzztroll) of Stardog Union. - Substantially improved overall performance by using a cached container for all host commands. - #62: Fixed bug with using blockade commands against a restarted container. Contributed by John Bresnahan (@buzztroll) of Stardog Union. - Updated Docker SDK and API version. Contributed by Vladimir Borodin (dev1ant).
0.3.1 (2016-12-09) - #43: Restore support for loading from blockade.yml config file. - #26: Improved error messages when running blockade without access to the Docker API. - #25: Improved error messages when determining container host network device fails. - #40: Fixed kill command (broken in 0.3.0). - #1: Fixed support for configuring Docker API via DOCKER_HOST env. - #36: Truncate long blockade IDs to avoid iptables limits. - Switched to directly inspecting /sys for container network devices instead of via ip. This means containers no longer need to have ip installed. - Improved Blockade Python API by returning names of the containers a command has operated on. Contributed by Gregor Uhlenheuer (@kongo2002). - Fixed Vagrantfile to also work on Windows. Contributed by Oresztesz Margaritisz (@gitaroktato). - Documentation fix contributed by Konrad Klocek (@kklocek). - Added new version command that prints Blockade version and exits. - Added cap_add container config option, for specifying additional root capabilities. Contributed by Maciej Zimnoch (@Zimnx).
0.3.0 (2016-10-29) - … command. - … flag for many commands to allow easier randomized chaos testing. Contributed by Gregor Uhlenheuer (@kongo2002). - Introduces a new kill command for killing containers in a blockade. - Fixed links to Docker documentation. Contributed by @joepadmiraal. - Fixed links of named containers. Contributed by Gregor Uhlenheuer (@kongo2002).
0.2.0 (2015-12-23) - … command, which causes some packets to a container to be duplicated. Contributed by Gregor Uhlenheuer. - Introduces new start, stop, and restart commands, which manage specified containers via Docker. Contributed by Gregor Uhlenheuer. - Introduces new random partition behavior: blockade partition --random will create zero or more random partitions. Contributed by Gregor Uhlenheuer. - Reworked the blockade ID generation to be more like docker-compose, instead of using randomly-generated IDs. If --name … - #6: Change ports config keyword to match docker usage. It now publishes a container port to the host. The expose config keyword now offers the previous behavior of ports: it makes a port available from the container, for linking to other containers. Thanks to Simon Bahuchet for the contribution. - #9: Fix logs command for Python 3. - Updated dependencies.
https://blockade.readthedocs.io/en/latest/changes.html
2020-10-19T20:47:24
CC-MAIN-2020-45
1603107866404.1
[]
blockade.readthedocs.io
In the event of a failure, SPS has two methods of recovery: local recovery and inter-server recovery. If local recovery fails, a "failover" is implemented. A failover is defined as automatic switching to a backup server upon the failure or abnormal termination of the previously active application, server, system, hardware component or network. Failover and switchover are essentially the same operation, except that failover is automatic and usually operates without warning, while switchover requires human intervention. This automatic failover can occur for a number of reasons. Below is a list of the most common examples of an SPS-initiated failover. Server Level Causes Server Failure SPS has a built-in heartbeat signal that periodically notifies each server in the configuration that its paired server is operating. A failure is detected if a server fails to receive the heartbeat message. - Primary server loses power or is turned off. - CPU usage caused by excessive load — Under very heavy I/O loads, delays and low memory conditions can cause the system to become unresponsive, such that SPS may detect a server as down and initiate a failover. - Quorum/Witness – As part of the I/O fencing mechanism of quorum/witness, when a primary server loses quorum, a "fastboot", "fastkill" or "osu" is performed (based on settings) and a failover is initiated. When determining whether to fail over, the witness server is consulted. Communication Failures/Network Failures SPS sends the heartbeat between servers every five seconds. If a communication problem causes the heartbeat to skip two beats but it resumes on the third heartbeat, SPS takes no action. However, if the communication path remains dead for three beats, SPS will label that communication path as dead but will initiate a failover only if the redundant communication path is also dead. - Network connection to the primary server is lost. - Network latency. - Heavy network traffic on a TCP comm path can result in unexpected behavior, including false failovers and LifeKeeper initialization problems. - Using STONITH, when SPS detects a communication failure with a node, that node will be powered off and a failover will occur. - Failed NIC. - Failed network switch. - Manually pulling/removing network connectivity. Split-Brain If a single comm path is used and the comm path fails, then SPS hierarchies may try to come into service on multiple systems simultaneously. This is known as a false failover or a "split-brain" scenario. In the "split-brain" scenario, each server believes it is in control of the application and thus may try to access and write data to the shared storage device. To resolve the split-brain scenario, SPS may cause servers to be powered off or rebooted, or leave hierarchies out-of-service to assure data integrity on all shared data. Additionally, heavy network traffic on a TCP comm path can result in unexpected behavior, including false failovers and the failure of LifeKeeper to initialize properly. The following are scenarios that can cause split-brain: - Any of the comm failures listed above - Improper shutdown of LifeKeeper - Server resource starvation - Losing all network paths - DNS or other network glitch - System lockup/thaw Resource Level Causes SPS is designed to monitor individual applications and groups of related applications, periodically performing local recoveries or notifications when protected applications fail.
Related applications, for example, are hierarchies where the primary application depends on lower-level storage or network resources. SPS monitors the status and health of these protected resources. If the resource is determined to be in a failed state, an attempt will be made to restore the resource or application on the current system (in-service node) without external intervention. If this local recovery fails, a resource failover will be initiated. Application Failure - An application failure is detected, but the local recovery process fails. - Remove Failure – During the resource failover process, certain resources need to be removed from service on the primary server and then brought into service on the selected backup server to provide full functionality of the critical applications. If this remove process fails, a reboot of the primary server will be performed, resulting in a complete server failover. Examples of remove failures: - Unable to unmount file system - Unable to shut down protected application (Oracle, MySQL, Postgres, etc.) File System - Disk Full — SPS's File System Health Monitoring can detect disk-full file system conditions, which may result in failover of the file system resource. - Unmounted or Improperly Mounted File System — User manually unmounts or changes options on an in-service, LK-protected file system. - Remount Failure — The following is a list of common causes for remount failure which would lead to a failover: - corrupted file system (fsck failure) - failure to create mount point directory - mount point is busy - mount failure - SPS internal error IP Address Failure When a failure of an IP address is detected by the IP Recovery Kit, the resulting failure triggers the execution of the IP local recovery script. SPS first attempts to bring the IP address back in service on the current network interface. If the local recovery attempt fails, SPS will perform a failover of the IP address and all dependent resources to a backup server. During failover, the remove process will un-configure the IP address on the current server so that it can be configured on the backup server. Failure of this remove process will cause the system to reboot. - IP conflict - IP collision - DNS resolution failure - NIC or switch failures Reservation Conflict - A reservation to a protected device is lost or stolen - Unable to regain reservation or control of a protected resource device (caused by manual user intervention, HBA or switch failure) SCSI Device - Protected SCSI device could not be opened. The device may be failing or may have been removed from the system.
http://docs.us.sios.com/spslinux/9.4.0/en/topic/common-causes-of-an-sps-initiated-failover
2020-10-19T21:17:08
CC-MAIN-2020-45
1603107866404.1
[]
docs.us.sios.com
Budibase Core is a library used by Budibase Apps, Budibase Server and Budibase Builder. It defines and implements all Core APIs used in Budibase. Its main purpose is to construct and to understand the "Budibase App Definition". Here is a quick overview of the usage of Budibase Core: Budibase Builder is used to create an "App Definition" - a set of JSON files that defines everything about an app built with Budibase. The builder is really just a user interface that wraps Budibase Core. Budibase Core defines the APIs used to construct an App Definition. Budibase Web provides an HTTP API for your web app to use. All HTTP endpoints and behaviours are defined by the App Definition file. Budibase Core implements the behaviour of these endpoints (except for any custom endpoints). Budibase Web is really just an HTTP wrapper around Budibase Core. Your Budibase App's front end presents your app's data, and makes HTTP requests to Budibase Web. Budibase Core is used in your app to understand the App Definition. It allows record/field binding, running validation rules, and knowledge of security levels. A record is the fundamental unit of data in Budibase. Records are stored as JSON. A record has a schema, which is defined in the app definition. The record's schema will define:
Fields. Every field must be declared, and will consist of:
- Name. e.g. "last_name". This is the name of the member, as stored in JSON.
- Label. e.g. "Last Name". What the field will be labelled as in the user interface.
- Type. e.g. "string". Types are listed below.
- TypeOptions. See types below.
- InitialValue. e.g. "(unknown)". The initial value on a new record.
- DefaultValue. e.g. "(not set)". The value taken if the loaded JSON object is missing this field.
Validation Rules. A set of rules, written in JavaScript, which are run when a record is created or updated.
Collection Name. This defines the "key" (and therefore URL) of the record. e.g. "customers" may cause a record to have a key of "/customers/0-6shd8uu".
Record Node Id. Each record type will have its own unique integer. This is used to form the record's key, e.g. "/customers/0-6shd8uu".
Collection Sharding. This defines how a collection of records of this type is stored. At the storage level, all records belong to a "folder". Sharding enables records to be stored across multiple folders, to aid scalability.
Children. A record may have child records. E.g. "Invoice" may belong to "Customer". See "Hierarchy" below.
Indexes. Used to keep a retrievable list of the record's descendants (children, grandchildren, etc.). See "Indexes" below.
A Budibase record will always have a "key" member. The key is used to determine the record's type, and thus its schema. Hierarchy Your database schemas are organised in a tree structure. Each node, except for the root, is a record node. Below shows an example hierarchy: In this example, the "customer invoice" node is a child of "customer". Thus, records of this type will always have a key in the format: /customers/{parent customer id}/invoices/{invoice id} An index is the only way to retrieve collections of records, i.e. a JSON array of records. When a record is created, updated or deleted, an index is updated. Indexes may belong to the root node, or to any record node. An index is defined with the following properties (a sketch of an index definition appears later in this section):
name. This will define the key of the index, i.e. the path used to access it. For example, an "all_invoices" index on the customer node will have a key in the format "/customers/{customer-id}/all_invoices". Required.
map. As in "map-reduce".
Javascript code, used to determine which fields are selected from the record, into the index. Not required - defaults to all fields filter. A Javascript function, which should return a bool. Determines whether a record should b included in the index. Not required allowedRecordNodeIds. An array of record node IDs to include in the index. Required An index may be either be of type "hierarchal" or "reference". Hierarchal Index. When a record is changed, Budibase Core will search for all hierarchal indexes (type='hierarchy') that belong to an ancestor of the record. For each found it then applies the following rules: Check if record's node ID is included in the index's allowedRecordNodeIds Run the filter method Determine if index should be updated: If creating and filter returns true: add to index if deleting and filter returns true: remove from index if updating, filter returns true, filter previously returned true & mapped record has changed: update in index if updating, filter returns false & filter previously returned true: remove from index if updating, filter returns true & filter previously returned false: add to index Reference Index A reference index is used when a record has a field of type reference, i.e. when one record references another record. If record A references record B, then record B will hold an index, which includes record A, and any other records that are referencing it. The datastore is a Budibase abstraction, describing a set of methods used to read and write data to the end storage mechanism. The actual implementation of the datastore is passed to the Budibase Core, from whatever is using the library. In general, the Budibase datastore is a key-value store. Any store of data that allows for a "key" (id) and a "value" (json data) can have a Budibase datastore written. In Budibase Core the "key" is referred to as "key", and the "value" is referred to as either: "File": When data (e.g. a record or index) is being stored "Folder": Used to keep a list of files in this namespace (i.e. just like a folder or directory in a file system) Some examples of potential datastore implementations: In Memory. The Budibase Core test suite uses a a JSON object as it's datastore. Each member of the JSON object represents the key and value File system. File/Folder path = key, File/Folder contents = value Redis. Is a key value store MongoDb. Keys and Documents Dropbox. Same as filesystem Behaviours are how we write custom backend code for Budibase. A behaviour Is a javascript function, that takes one argument Lives inside a javascript module. We refer to this module as a "Behaviour Source" Behaviour sources are passed into Budibase Core when the library is initialised Your behaviour source will be imported into your Budibase backend. Actions are how you integrate custom backend code into your Budibase application. Actions are used to run behaviours. Actions have the following properties: Name. Should be unique. It is how your action is identified and called. In Budibase Web, your action will be callable via a url in the format Behaviour Source. The name of the javascript module that the behaviour resides in Behaviour Name. The name of the behaviour to run Initial Options. A default argument for the behaviour. The caller of the action can supply partially completed arguments, which will be completed by these Initial Options Triggers are used to automatically run actions, after certain application events. 
A full list of applications events can be found in Triggers have the following properties: Action Name. The action to run Event Name. The event that will trigger the action Condition. A javascript expression, used to determine whether the action should be run or not Options Creator. A javascript expression used to create the behaviour "options" i.e. the argument to the behaviour On the web server: Validates incoming web requests, using schema and rules defined in the application definition Authenticates and authorizes incoming requests, using access levels defined in the app definition Stores records Handles record retrieval from storage Manages indexing of records, on create, update and delete. Indexes are defined in the app definition Manages users and their levels of access. Users are stored in storage. Allowed access levels are defined in the app definition. In the application frontend (browser): Automatically binds UI controls to records, using schema defined in the app definition Disables/Enables features and actions in the UI, based on a user's access levels Knows which fields are available on indexes, for searching and displaying of collections of data records When a record is created/updated/deleted, knows which indexes should change, and can update views accordingly without having to refetch an index. Budibase Core is divided into five APIs. Record API. For all CRUD operations on records Collection API. For iterating through all records in a collection (mainly used for rebuilding indexes) Template API. For constucting the Budibase App Definition
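To make these concepts concrete, here is a brief, illustrative JavaScript sketch of an index definition, a behaviour and an action/trigger pairing. The property names follow the descriptions above, but the exact object shapes, the event name and the file layout are assumptions made for illustration; they are not taken from the actual Budibase Core API:

// Hypothetical hierarchical index on the "customer" record node.
// Its key would take the form /customers/{customer-id}/all_invoices.
const allInvoicesIndex = {
  name: 'all_invoices',
  type: 'hierarchy',
  map: 'return { totalIncVat: record.totalIncVat, status: record.status };', // which fields go into the index
  filter: 'record.isVoided === false',                                       // should evaluate to a bool
  allowedRecordNodeIds: [4]                                                  // only the "invoice" record node
};

// A behaviour: a JavaScript function taking one argument, living in a module
// that is registered as a "Behaviour Source" (e.g. behaviours/emails.js).
module.exports.sendWelcomeEmail = ({ customer, smtpServer }) => {
  console.log(`sending welcome email to ${customer.email} via ${smtpServer}`);
};

// An action that runs the behaviour, and a trigger that fires it automatically.
const welcomeEmailAction = {
  name: 'send_welcome_email',                         // unique; how the action is identified and called
  behaviourSource: 'emails',                          // the module the behaviour lives in
  behaviourName: 'sendWelcomeEmail',                  // the behaviour to run
  initialOptions: { smtpServer: 'smtp.example.com' }  // completes the caller-supplied argument
};

const welcomeEmailTrigger = {
  actionName: 'send_welcome_email',
  eventName: 'recordApi:save:onRecordCreated',        // assumed event name, for illustration only
  condition: "record.type === 'customer'",            // JavaScript expression
  optionsCreator: 'return { customer: record };'      // builds the behaviour's argument
};

In practice such definitions would be produced through the Builder and stored in the App Definition JSON rather than written by hand.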
https://docs.budibase.com/technical/budibase-core
2020-10-19T21:57:45
CC-MAIN-2020-45
1603107866404.1
[]
docs.budibase.com
To sync data from multiple fields with SendReach, follow the steps given below. I. In SendReach i. Log in to your SendReach account and navigate to Mailing Lists > Lists. ii. Select the List you wish to add a custom field to. iii. Click on Custom Fields (Manage). iv. Manage the fields and their details. II. In ConvertPlug i. Create and design a Module, open it in the Editor, and click on Form Designer. ii. Add a new field. Make sure that the name of the field is the same as the tag you used for the field in SendReach. iii. Save and Publish. For more information about Custom Fields in SendReach, click here.
https://docs.brainstormforce.com/how-to-sync-multiple-fields-data-with-sendreach/
2020-10-19T22:03:24
CC-MAIN-2020-45
1603107866404.1
[array(['https://docs.brainstormforce.com/wp-content/uploads/2016/02/mailing-lists.jpg', 'Open your Lists in SendReach'], dtype=object) array(['https://docs.brainstormforce.com/wp-content/uploads/2016/02/monthly-offer.jpg', 'Select the List you wish to add a Custom Field to'], dtype=object) array(['https://docs.brainstormforce.com/wp-content/uploads/2016/02/custom-fields-2.jpg', 'Select Custom Fields in SendReach'], dtype=object) array(['https://docs.brainstormforce.com/wp-content/uploads/2016/02/Tag-for-Custom-Field-in-SendReach.jpg', 'Tag for Custom Field in SendReach'], dtype=object) array(['https://docs.brainstormforce.com/wp-content/uploads/2016/02/form-designer-8.jpg', 'Select Form Designer in ConvertPlug Editor'], dtype=object) array(['https://docs.brainstormforce.com/wp-content/uploads/2016/02/field-name-6.jpg', 'Add a new field through ConvertPlug Editor'], dtype=object) ]
docs.brainstormforce.com
Overview of Packet Coalescing Certain IP version 4 (IPv4) and IP version 6 (IPv6) network protocols involve the transmission of packets to broadcast or multicast addresses. These packets are received by multiple hosts in the IPv4/IPv6 subnet. In most cases, the host that receives these packets does not do anything with them. Therefore, the reception of these unwanted multicast or broadcast packets causes unnecessary processing and power consumption within the receiving host. For example, host A sends a multicast Link-local Multicast Name Resolution (LLMNR) request on an IPv6 subnet to resolve host B's name. This LLMNR request is received by all hosts on the subnet other than host A. On every host other than host B, the TCP/IP protocol stack inspects the packet and determines that the packet is not intended for that host. Therefore, the protocol stack rejects the packet and calls NdisReturnNetBufferLists to return the packet to the miniport driver. Starting with NDIS 6.30, network adapters can support NDIS packet coalescing. By reducing the number of receive interrupts through the coalescing of random broadcast or multicast packets, the processing overhead and power consumption on the system are significantly reduced. Packet coalescing involves the following steps: Overlying drivers, such as the TCP/IP protocol stack, define NDIS receive filters that are used to screen broadcast and multicast packets. The overlying drivers download these filters to the underlying miniport driver that supports packet coalescing. Once downloaded, the miniport driver configures the network adapter with the packet coalescing receive filters. For more information about these filters, see Packet Coalescing Receive Filters. Received packets that match receive filters are cached, or coalesced, on the network adapter. The adapter does not generate a receive interrupt for coalesced packets. Instead, the adapter interrupts the host when another hardware event occurs. When this interrupt is generated, the adapter must indicate a receive event with the interrupt. This allows the host to process the coalesced packets that were received by the network adapter. For example, a network adapter that supports packet coalescing can generate a receive interrupt when one of the following events occurs: The expiration of a hardware timer whose expiration time is set to the maximum coalescing delay value of the matching receive filter. The available space within the hardware coalescing buffer reaches an adapter-specified low-water mark. A packet is received that does not match a coalescing filter. Another interrupt event, such as a send completion event, has occurred. For more information about this process, see Handling Packet Coalescing Receive Filters. The following points apply to the support of packet coalescing by NDIS: NDIS supports packet coalescing for packets received on the default NDIS port (port 0) assigned to the physical network adapter. NDIS does not support packet coalescing on NDIS ports that are assigned to virtual network adapters. For more information, see NDIS Ports. NDIS supports packet coalescing for packets received on the default receive queue of the network adapter. This receive queue has an identifier of NDIS_DEFAULT_RECEIVE_QUEUE_ID.
https://docs.microsoft.com/en-us/windows-hardware/drivers/network/overview-of-packet-coalescing
2020-10-19T23:00:35
CC-MAIN-2020-45
1603107866404.1
[]
docs.microsoft.com
How to fix Revive Old Post not posting. If Revive Old Posts (ROP) is not posting, there are two methods you can use to fix it. Method 1. Run Revive Old Post using Cronjob.org: Step 1. Turn off the default WP Cron by editing the wp-config.php file (located in the root of the folder where WordPress is installed). Open the wp-config.php file, add a new line after define('WP_DEBUG', false); then add the following code on the new line: define('DISABLE_WP_CRON', true); Save your changes after you've added the new line. Alternatively, you can use a File Manager plugin to bring up the files: right-click on wp-config.php and select Editor to start editing the file. Add the following line: define('DISABLE_WP_CRON', true); inside the file after define('WP_DEBUG', false); so it looks like below. (Be sure to save your changes when you're done editing the file, and then delete the File Manager plugin if it is no longer needed.) Once you have successfully turned off your default WP Cron, Revive Old Posts will show a warning notice that your WordPress Cron has been disabled: Step 2. Create an account on Cronjob.org, then log into your account on the website. Step 3. Click the Create Cron Job button to create a new cron job. Step 4. Fill out the details of the Cron job. For Title enter anything you wish; for Address enter your website domain suffixed with /wp-cron.php?doing_wp_cron Example: - Change yourwebsitedomain.com to your actual website domain. - If your website does not have SSL, then change https to http Step 5. Set the Cron schedule; we recommend every 10 minutes. Step 6. Check the option to save responses so that you can see if the cron job is executed successfully, then click the Create Cronjob button. That's it! Your cron job should now be created, and if you did Steps 4 and 5 correctly you should see success messages in the History section of Cronjob.org letting you know that the cron job ran successfully. To dismiss the Revive Old Post notice regarding your WP Cron Job being disabled, simply click dismiss. Method 2. The other method of fixing the issue is creating a true cron for your website. This requires you to have access to your cPanel account to make the following changes. HostGator has created a pretty detailed document on how to create a true cron for WordPress, so we will use it. Please go here to view the document: How to create a true cron for WordPress. If you are not comfortable with performing either of these methods, then we highly recommend you contact your web host for assistance. They will be able to perform either method for you in no time. To dismiss the Revive Old Post notice regarding your WP Cron Job being disabled, simply click dismiss. Still having issues? Contact us here: Submit Ticket
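For reference, here is a minimal sketch of how the relevant section of wp-config.php might look after the Step 1 edit (the surrounding lines are placeholders for your existing settings):

<?php
// ... existing wp-config.php settings ...

define('WP_DEBUG', false);

// New line added directly after WP_DEBUG: disables the built-in WP Cron
// so that the external Cronjob.org job triggers wp-cron.php instead.
define('DISABLE_WP_CRON', true);

// ... rest of wp-config.php ...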
https://docs.revive.social/article/686-fix-revive-old-post-not-posting
2020-10-19T21:51:41
CC-MAIN-2020-45
1603107866404.1
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55192029e4b0221aadf23f55/images/5df79d9104286364bc92e6bc/file-VX3w8IGs6v.png', None], dtype=object) ]
docs.revive.social
MXNet Elastic Inference with Java Starting from Apache MXNet version 1.4, the Java API can now integrate with Amazon Elastic Inference. You can use Elastic Inference with the following MXNet Java API operations: MXNet Java Infer API Topics Install Amazon EI Enabled Apache MXNet Amazon Elastic Inference enabled Apache MXNet is available in the AWS Deep Learning AMI. A maven repository is also available on Amazon S3 For Maven projects, Elastic Inference Java can be included by adding the following to your project's pom.xml: <repositories> <repository> <id>Amazon Elastic Inference</id> <url></url> </repository> </repositories> In addition, add the Elastic Inference flavor of MXNet as a dependency using: <dependency> <groupId>com.amazonaws.ml.mxnet</groupId> <artifactId>mxnet-full_2.11-linux-x86_64-eia</artifactId> <version>[1.4.0,)</version> </dependency> Check MXNet for Java Version You can use the commit hash number to determine which release of the Java-specific version of MXNet is installed using the following code: // Imports import org.apache.mxnet.javaapi.*; // Lines to run Version$ version$ = Version$.MODULE$; System.out.println(version$.getCommitHash()); You can then compare the commit hash with the Release Notes Use Amazon Elastic Inference with the MXNet Java Infer API To use Amazon Elastic Inference with the MXNet Java Infer API, pass Context.eia() as the context when creating the Infer Predictor object. See the MXNet Infer Reference package mxnet; import java.awt.image.BufferedImage; import java.io.File; import java.io.IOException; import java.net.URL; import java.util.Arrays; import java.util.Comparator; import java.util.List; import java.util.stream.IntStream; import org.apache.commons.io.FileUtils; import org.apache.mxnet.infer.javaapi.ObjectDetector; import org.apache.mxnet.infer.javaapi.Predictor; import org.apache.mxnet.javaapi.*; public class Example { public static void main(String[] args) throws IOException { String urlPath = ""; String filePath = System.getProperty("java.io.tmpdir"); // Download Model and Image FileUtils.copyURLToFile(new URL(urlPath + "/resnet/152-layers/resnet-152-0000.params"), new File(filePath + "resnet-152/resnet-152-0000.params")); FileUtils.copyURLToFile(new URL(urlPath + "/resnet/152-layers/resnet-152-symbol.json"), new File(filePath + "resnet-152/resnet-152-symbol.json")); FileUtils.copyURLToFile(new URL(urlPath + "/synset.txt"), new File(filePath + "resnet-152/synset.txt")); FileUtils.copyURLToFile(new URL(""), new File(filePath + "cat.jpg")); List<Context> contexts = Arrays.asList(Context.eia()); Shape inputShape = new Shape(new int[]{1, 3, 224, 224}); List<DataDesc> inputDesc = Arrays.asList(new DataDesc("data", inputShape, DType.Float32(), "NCHW")); Predictor predictor = new Predictor(filePath + "resnet-152/resnet-152", inputDesc, contexts, 0); BufferedImage originalImg = ObjectDetector.loadImageFromFile(filePath + "cat.jpg"); BufferedImage resizedImg = ObjectDetector.reshapeImage(originalImg, 224, 224); NDArray img = ObjectDetector.bufferedImageToPixels(resizedImg, new Shape(new int[]{1, 3, 224, 224})); List<NDArray> predictResults = predictor.predictWithNDArray(Arrays.asList(img)); float[] results = predictResults.get(0).toArray(); List<String> synsetLines = FileUtils.readLines(new File(filePath + "resnet-152/synset.txt")); int[] best = IntStream.range(0, results.length) .boxed().sorted(Comparator.comparing(i -> -results[i])) .mapToInt(ele -> ele).toArray(); for (int i = 0; i < 5; i++) { int ind = best[i]; System.out.println(i + ": " + 
synsetLines.get(ind) + " - " + best[ind]); } } } More Models and Resources For more tutorials and examples, see: the framework's official Java documentation - Apache MXNet website Troubleshooting MXNet EI is built with MKL-DNN. All operations using Context.cpu() are supported and will run with the same performance as the standard release. MXNet EI does not support Context.gpu(). All operations using that context will throw an error. You cannot allocate memory for NDArray on the remote accelerator by writing something like this: x = NDArray.array(Array(1,2,3), ctx=Context.eia()) This throws an error. Instead you should use Context.cpu(). Look at the previous bind() example to see how MXNet automatically transfers your data to the accelerator as necessary. Sample error message: Elastic Inference is only for production inference use cases and does not support any model training. When you use either the Symbol API or the Module API, do not call the backward() method or call bind() with forTraining=True. This throws an error. Because the default value of forTraining is True, make sure you set for_training=False manually in cases such as the example in Use Elastic Inference with the MXNet Module API. Sample error using test.py: Because training is not allowed, there is no point in initializing an optimizer for inference. A model trained on an earlier version of MXNet will work on a later version of MXNet EI because it is backwards compatible. For example, you can train a model on MXNet 1.3 and run it on MXNet EI 1.4. However, you may run into undefined behavior if you train on a later version of MXNet. For example, training a model on MXNet Master and running on MXNet EI 1.4. Different sizes of EI accelerators have different amounts of GPU memory. If your model requires more GPU memory than is available in your accelerator, you get a message that looks like the log below. If you run into this message, you should use a larger accelerator size with more memory. Stop and restart your instance with a larger accelerator. Calling reshape explicitly by using either the Module or the Symbol API can lead to OOM errors. Implicitly using different shapes for input NDArrays in different forward passes can also lead to OOM errors. Before being reshaped, the model is not cleaned up on the accelerator until the session is destroyed.
https://docs.aws.amazon.com/elastic-inference/latest/developerguide/ei-java.html
2020-10-19T22:31:27
CC-MAIN-2020-45
1603107866404.1
[]
docs.aws.amazon.com
Tutorial: Installing CUDA 8 on Ubuntu 16
Introduction
CUDA is a parallel computing platform and programming model that makes using a GPU for general purpose computing simple and elegant. The developer still programs in the familiar C, C++, Fortran, or an ever-expanding list of supported languages, and incorporates extensions of these languages in the form of a few basic keywords. These keywords let the developer express massive amounts of parallelism and direct the compiler to the portion of the application that maps to the GPU. Below are the steps to install CUDA 8 and cuDNN 7.1.4 on Ubuntu 16 on a new instance, after removing the latest NVIDIA packages.
Installation Steps
Install the NVIDIA driver which supports Tesla V100:
wget
chmod +x NVIDIA-Linux-x86_64-440.33.01.run
./NVIDIA-Linux-x86_64-440.33.01.run
Install the CUDA 8 Debian packages:
wget
dpkg -i cuda-repo-ubuntu1604-8-0-local-ga2_8.0.61-1_amd64-deb
apt-get update
apt-get install cuda-8.0
Install the patch update for CUDA 8:
wget
dpkg -i cuda-repo-ubuntu1604-8-0-local-cublas-performance-update_8.0.61-1_amd64-deb
To install cuDNN version 7.1.4 for CUDA: download the three packages for Ubuntu 16 from the cuDNN archive directory. Installation and verification steps are available here.
https://docs.e2enetworks.com/gpu/cuda.html
2020-10-19T22:03:27
CC-MAIN-2020-45
1603107866404.1
[]
docs.e2enetworks.com
Create and Publish your Mobile App - Updated On 25 May 2020 - 2 Minutes To Read In this article, we will learn how to create and publish the Mobile App (available only for the Android platform) for your training portal. To set up the mobile app for your training portal, you will need to request a mobile APK from Microsoft and set up your Google Play Store account. You'll also be required to share release manager access with Microsoft to enable automatic updates to the mobile app. Steps to create the Mobile App for the training portal Visit the Microsoft Community Training Helpdesk. Click on Sign in on the top-left corner of the homepage. Use your Azure AD or Social accounts to register and sign in. Click on Create Support Ticket and enter the following values: Provide the following information in the Description section of the Support Ticket - - Application Color Code (HEX format) - This color will be used in the mobile app as shown in the image at the start of the article. e.g., Orange - Portal URL - This is the instance of the platform for which the mobile app will be generated - Application Name - This is the name of the mobile app when published on the Play Store - Supported Languages - This is the list of languages supported in the mobile app - Application Color Code - #FFA500 - Portal URL - - Application Name - Contoso Learning Center - Language - English, Spanish, Telugu Create a zip file with the following assets and attach it to the form. - App icon with the following dimensions (in pixels): 24x24, 36x36, 48x48, 72x72, 96x96, 144x144. The icon on the phone screen used to launch the app is the app icon. The app icon must have a transparent background. - Splash screen logo with the following dimensions (in pixels): 150x150, 225x225, 300x300, 450x450. The screen that appears when the app opens is the splash screen, shown below. - Here is a sample zip file for reference. That's all! You'll receive a link at your contact email address to download the mobile APK from our support team (in 2-3 business days). Steps to publish your mobile app to the Play Store Before you begin Follow the steps above for creating your mobile app and ensure you have received the download link to your Mobile APK from Microsoft. Sign up on the Google Play Console in order to publish your app on the Google Play Store. Steps to publish your mobile app Download the mobile APK to your computer from the email you received from our support team after creating your Mobile App. Follow the instructions given here to upload and publish your APK on the Google Play Store. Once you have uploaded and published your APK, navigate to Settings -> Users & Permissions. Click on "Invite New User". Enter the email address as [email protected]. Leave Access Expiry date as Never. Select the role as Release Manager. Choose your mobile app from the Choose an App dropdown. Click on Send Invitation.
https://docs.microsoftcommunitytraining.com/docs/create-publish-mobile-app
2020-10-19T21:02:03
CC-MAIN-2020-45
1603107866404.1
[array(['https://cdn.document360.io/3c07b191-b754-4fbb-8a40-c278daa61bb4/Images/Documentation/image%2877%29.png', 'image.png'], dtype=object) ]
docs.microsoftcommunitytraining.com
ConvertPlug allows you to integrate with external email marketing software that helps you store and manage leads obtained through the opt-in forms created using the plugin. Among all the possible integrations, Connects, the inbuilt tool, allows you to integrate with MyEmma too. In order to integrate ConvertPlug with MyEmma, you will have to follow the steps mentioned below. 1. Install the Connects MyEmma Addon Install the Addon. In order to learn how to use the Addon Installer in ConvertPlug, you can refer to the article here. 2. The Addon is now installed 3. Open Connects You need to open the Connects page seen under the Resources section of ConvertPlug. 4. Create a New Campaign You will then find a "Create New Campaign" button that allows you to create a New Campaign. Click on it. 5. Enter a Campaign Name and Select the Third Party Software A Campaign name should be valid, descriptive and understandable, so that you know what kind of leads are stored in it. Select MyEmma from the drop-down below. 6. Authenticate your Account Each email marketing software might have a different attribute that may be needed to authenticate your account. For the integration with MyEmma, you need a Public Key, a Private Key and the Account ID. Where will I find the Public Key? i. Go to the Settings & Billing page ii. On the settings page, scroll down and click on the API Key tab Your Public Key, Private Key and Account ID will be seen as above. Note: Private keys are seen when they are first generated. iv. Copy the information 7. Paste the Public Key, Private Key & Account ID
https://docs.brainstormforce.com/how-to-integrate-convertplug-with-myemma/
2020-10-19T21:43:14
CC-MAIN-2020-45
1603107866404.1
[array(['https://docs.brainstormforce.com/wp-content/uploads/2016/11/Select-MyEmma-as-the-third-party-mailer.jpg', 'select-myemma-as-the-third-party-mailer'], dtype=object) array(['https://docs.brainstormforce.com/wp-content/uploads/2016/11/MyEmma-Settings-and-Billing-page-1024x515.jpg', 'myemma-settings-and-billing-page'], dtype=object) array(['https://docs.brainstormforce.com/wp-content/uploads/2016/11/API-Key-tab-in-MyEmma.jpg', 'api-key-tab-in-myemma'], dtype=object) array(['https://docs.brainstormforce.com/wp-content/uploads/2016/11/Authenticate-MyEmma-in-ConvertPlug.jpg', 'authenticate-myemma-in-convertplug'], dtype=object) array(['https://docs.brainstormforce.com/wp-content/uploads/2016/11/Select-MyEmma-list-in-ConvertPlug.jpg', 'select-myemma-list-in-convertplug'], dtype=object) array(['https://docs.brainstormforce.com/wp-content/uploads/2016/11/Submissions-tab-in-ConvertPlug-editor.jpg', 'submissions-tab-in-convertplug-editor'], dtype=object) array(['https://docs.brainstormforce.com/wp-content/uploads/2016/11/Select-appropriate-campaign-in-the-editor.jpg', 'select-appropriate-campaign-in-the-editor'], dtype=object) ]
docs.brainstormforce.com
NetFlow Logic provides a full range of technical documentation on our current products and solutions. This documentation includes installation and configuration guides, technical manuals, release notes, and solution guides. NetFlow Optimizer (NFO) is a software-only processing engine for network flow data: NetFlow, IPFIX, sFlow, J-Flow, etc. External Data Feeder for NFO (EDFN) is a remote component which serves as a knowledge base of information outside of the NetFlow domain.
https://docs.netflowlogic.com/
2020-10-19T21:43:57
CC-MAIN-2020-45
1603107866404.1
[]
docs.netflowlogic.com
The Finance Close Automation application has been deprecated and is no longer maintained. Finance Close Automation overview The Finance Close Automation (FCA) application simplifies the accounting and finance close process for your organization. You can post journal entries, manage timelines for close tasks, and perform end-to-end accounting procedures in a centralized workspace. You can use FCA to: Plan monthly finance close for an accounting period. Create close tasks for accountants. Prepare and post journal entries in your Enterprise Resource Planning (ERP) system. Monitor the close progress in real-time. Enable external auditing of the work done by accountants. Integration with ERPs ServiceNow FCA supports integration with your ERP systems. FCA supports multi-ERP integrations. The integration enables accountants and managers to perform the following activities in FCA: Post journal entries Reverse journal entries Auto-reverse accruals Integration with the following ERPs is available: Integration with SAP FCA integration with SAP automates the journal entry postings through a close task into the SAP system. Integration with Oracle EBS FCA integration with Oracle EBS automates the journal entry postings through a close task into the Oracle system. Integration with Oracle NetSuite FCA integration with Oracle NetSuite automates the journal entry postings through a close task into the Oracle NetSuite system. For information on steps that are required to integrate ERP systems with FCA, see Finance Close Automation integration with ERPs. Integration with GRC ServiceNow FCA supports integration with Governance, Risk, and Compliance (GRC) application. The integration enables the finance users to manage policies, controls, and attestations from within the FCA. When you choose certain criteria in a close task, the corresponding GRC control is auto-populated based on predetermined mapping. For more information, see Finance Close Automation integration with GRC. Guided setup to implement FCA FCA Guided Setup provides a sequence of tasks that help you configure FCA on your ServiceNow instance. To open FCA guided setup, navigate to Finance Close Automation > Guided Setup. For more information about using the guided setup interface, see Using guided setup.
https://docs.servicenow.com/bundle/newyork-finance-operations-management/page/product/financial-close-management/concept/get-started-with-financial-close-mgmt.html
2020-10-19T21:52:13
CC-MAIN-2020-45
1603107866404.1
[]
docs.servicenow.com
Running Keystone in HTTPD
mod_proxy_uwsgi
The recommended keystone deployment is to have a real web server such as Apache HTTPD or nginx handle the HTTP connections and proxy requests to an independent keystone server (or servers) running under a wsgi container such as uwsgi or gunicorn. The typical deployment will have several applications proxied by the web server (for example horizon on /dashboard and keystone on /identity, /identity_admin, port :5000, and :35357). Proxying allows the applications to be shut down and restarted independently, and a problem in one application isn't going to affect the web server or other applications. The servers can easily be run in their own virtualenvs. The httpd/ directory contains sample files for configuring HTTPD to proxy requests to keystone servers running under uwsgi. Copy the httpd/uwsgi-keystone.conf sample configuration file to the appropriate location for your Apache server, on Debian/Ubuntu systems it is: /etc/apache2/sites-available/uwsgi-keystone.conf On Red Hat based systems it is: /etc/httpd/conf.d/uwsgi-keystone.conf Update the file to match your system configuration. Enable TLS by supplying the correct certificates. Enable mod_proxy_uwsgi. - On Ubuntu the required package is libapache2-mod-proxy-uwsgi; enable using sudo a2enmod proxy, sudo a2enmod proxy_uwsgi. - On Fedora the required package is mod_proxy_uwsgi; enable by creating a file /etc/httpd/conf.modules.d/11-proxy_uwsgi.conf containing LoadModule proxy_uwsgi_module modules/mod_proxy_uwsgi.so Enable the site by creating a symlink from the file in sites-available to sites-enabled, for example, on Debian/Ubuntu systems (not required on Red Hat based systems): ln -s /etc/apache2/sites-available/uwsgi-keystone.conf /etc/apache2/sites-enabled/ Start or restart HTTPD to pick up the new configuration. Now configure and start the uwsgi services. Copy the httpd/keystone-uwsgi-admin.ini and httpd/keystone-uwsgi-public.ini files to /etc/keystone. Update the files to match your system configuration (for example, you'll want to set the number of processes and threads for the public and admin servers). Start up the keystone servers using uwsgi: $ sudo pip install uwsgi $ uwsgi /etc/keystone/keystone-uwsgi-admin.ini $ uwsgi /etc/keystone/keystone-uwsgi-public.ini
mod_wsgi
Warning Running Keystone under HTTPD in this configuration does not support the use of Transfer-Encoding: chunked. This is due to a limitation with the WSGI spec and the implementation used by mod_wsgi. It is recommended that all clients assume Keystone will not support Transfer-Encoding: chunked. Copy the httpd/wsgi-keystone.conf sample configuration file to the appropriate location for your Apache server, on Debian/Ubuntu systems it is: /etc/apache2/sites-available/wsgi-keystone.conf On Red Hat based systems it is: /etc/httpd/conf.d/wsgi-keystone.conf Update the file to match your system configuration. Note the following: - Make sure the correct log directory is used. Some distributions put httpd server logs in the apache2 directory and some in the httpd directory. - Enable TLS by supplying the correct certificates. Keystone's primary configuration file (etc/keystone.conf) and the PasteDeploy configuration file (etc/keystone-paste.ini) must be readable to HTTPD in one of the default locations described in Configuring Keystone. Configuration file location can be customized using the OS_KEYSTONE_CONFIG_DIR environment variable: if this is set, the keystone.conf file will be searched inside this directory. Arbitrary configuration file locations can be specified using the OS_KEYSTONE_CONFIG_FILES variable as semicolon separated entries, representing either configuration directory based relative paths or absolute paths. Enable the site by creating a symlink from the file in sites-available to sites-enabled, for example, on Debian/Ubuntu systems (not required on Red Hat based systems): ln -s /etc/apache2/sites-available/wsgi-keystone.conf /etc/apache2/sites-enabled/ Restart Apache to have it start serving keystone.
Access Control
If you are running with Linux kernel security module enabled (for example SELinux or AppArmor) make sure that the file has the appropriate context to access the linked file.
Keystone Configuration
Make sure that when using a token format that requires persistence, you use a token persistence driver that can be shared between processes. The SQL and memcached token persistence drivers provided with keystone can be shared between processes. Warning The KVS (kvs) token persistence driver cannot be shared between processes so must not be used when running keystone under HTTPD (the tokens will not be shared between the processes of the server and validation will fail). For SQL, in /etc/keystone/keystone.conf set: [token] driver = sql For memcached, in /etc/keystone/keystone.conf set: [token] driver = memcache All servers that are storing tokens need a shared backend. This means that either all servers use the same database server or use a common memcached pool.
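For illustration only, the following sketch shows one way the two environment variables described above might be set; the directory and file names are placeholders, and how you export the variables into the HTTPD/WSGI process environment depends on your deployment:

# Search for keystone.conf in a non-default configuration directory
export OS_KEYSTONE_CONFIG_DIR=/etc/keystone/custom

# Or list specific files; relative entries are resolved against the
# configuration directory, while absolute paths are used as given
export OS_KEYSTONE_CONFIG_FILES="keystone.conf;/etc/keystone/production-overrides.conf"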
https://docs.openstack.org/keystone/ocata/apache-httpd.html
2018-08-14T13:20:12
CC-MAIN-2018-34
1534221209040.29
[]
docs.openstack.org
Configure the AWS Admin account for your Amazon Web Service Create a separate account in AWS for use by the MID Server. Before you begin Role required: Admin Procedure Navigate to AWS Discovery > Accounts. Click New. Use the following information to fill out the AWS Account form: Name: Descriptive name for this account. Account ID: The Account ID that you receive from Amazon. The Account ID can be found on your Billing Management Console in AWS, under the Account field. Click Submit.
https://docs.servicenow.com/bundle/geneva-it-operations-management/page/product/discovery/task/t_ConfigureAdminAccountForAWS.html
2018-08-14T13:55:49
CC-MAIN-2018-34
1534221209040.29
[]
docs.servicenow.com
Discover SAN storage on vCenter By default, Discovery prevents the vCenter probe from returning SAN storage data from a vCenter server, starting with Geneva Patch 6. Before you begin Role required: discovery_admin, admin About this task The VMware - vCenter probe contains a probe parameter called vCenter.getSANStorageInfo that controls Discovery of Storage Area Network (SAN) storage systems used by vCenter. This parameter is set to false by default to block Discovery from returning SAN data for vCenter. This is done to reduce the size of the payload returned by the VMware - vCenter probe. Note: With this parameter set to true for vCenters that use SAN storage, payload size might exceed 5MB. To configure Discovery to return SAN data for vCenter: Procedure Navigate to Discovery Definition > Probes. Open the VMware - vCenter probe record. In the Probe Parameters related list, open the record for the vCenter.getSANStorageInfo parameter. Set the Value field to true. Figure 1. Probe parameter setting Click Update.
https://docs.servicenow.com/bundle/geneva-it-operations-management/page/product/discovery/task/t_DiscoverSANStorageOnVCenter.html
2018-08-14T13:55:48
CC-MAIN-2018-34
1534221209040.29
[]
docs.servicenow.com
Alarms Use the Alarms screen to display detailed information about alarms generated by Viptela devices. Screen Elements - Title bar. - Filter bar—Includes the Filter drop-down and time periods. Click the Filter icon to display a drop-down menu to add filters for searching alarms. Click a predefined or custom time period for which to display data. - Alarms Histogram—Displays a graphical representation of all alarms. To hide the alarms histogram, click the Alarms Histogram title or the down angle bracket to the right of it. - Alarms legend—Select a severity level to display alarms generated by Viptela devices in that classification. To return to the full legend, select a time period in the Filter bar. - Filter criteria—Sort options drop-down and Search box, for a Contains or Match string. - Alarms table. Set Alarm Filters To set filters for searching alarms generated by one or more Viptela devices: - Click the Filter drop-down menu. - In the Severity drop-down, select the alarm severity level. You can specify more than one severity level. To ease troubleshooting, alarms generated by Viptela devices are collected by vManage NMS and classified as: - Critical—indicates that action needs to be taken immediately. - Major—indicates that the problem needs to be looked into but is not critical enough to bring down the network. - Medium—indicates the time when a major alarm is cleared. - Minor—is informational only. - In the Active drop-down, select active or cleared. Active alarms are alarms that are currently on the device but have not been acknowledged. - In the Alarm Name drop-down, either search or select the alarm name for which to view generated alarms. You can select more than one alarm name. - Click Search Alarms to search for alarms that match the filter. vManage NMS displays the alarms both in table and graphical format. View Alarm Details To view detailed information about any alarm: - Select the alarm row from the table. - Click the More Actions icon to the right of the row, and click Details. The Alarm Details window opens, displaying possible cause of the alarm, impacted entities, and other details.
https://sdwan-docs.cisco.com/Product_Documentation/vManage_Help/Release_16.2/Monitor/Alarms
2018-08-14T13:32:53
CC-MAIN-2018-34
1534221209040.29
[array(['https://sdwan-docs.cisco.com/@api/deki/files/3495/G00331.png?revision=1', 'G00331.png'], dtype=object) ]
sdwan-docs.cisco.com
Recent Email Count This Condition, located on the Ministry category tab in Search Builder, allows you to find people based on how many emails they have received through TouchPoint over a specified number of days. Enter a whole number as the Value for the number of emails and for the number of days. See also
http://docs.touchpointsoftware.com/SearchBuilder/QB-RecentEmailCount.html
2018-08-14T14:09:18
CC-MAIN-2018-34
1534221209040.29
[]
docs.touchpointsoftware.com
Modifiers for ps_2_0 and Above Instruction modifiers affect the result of the instruction before it is written into the destination register. This section contains reference information for the instruction modifiers implemented by pixel shader version 2_0 and above. Centroid The centroid modifier is an optional modifier that clamps attribute interpolation within the range of the primitive when a multisample pixel center is not covered by the primitive. This can be seen in Centroid Sampling. You can apply the centroid modifier to an assembly instruction as shown here: dcl_texcoord0_centroid v0 You can also apply the centroid modifier to a semantic as shown here: float4 TexturePointCentroidPS( float4 TexCoord : TEXCOORD0_centroid ) : COLOR0 { return tex2D( PointSampler, TexCoord ); } In addition, any Input Color Register (v#) declared with a color semantic will automatically have centroid applied. Gradients computed from attributes that are centroid sampled are not guaranteed to be accurate. Partial Precision The partial precision instruction modifier (_pp) indicates areas where partial precision is acceptable, provided that the underlying implementation supports it. Implementations are always free to ignore the modifier and perform the affected operations in full precision. The _pp modifier can occur in two contexts: - On a texture coordinate declaration to enable passing texture coordinates. - On any instruction including texture load instructions. This indicates that the implementation is allowed to execute the instruction with partial precision and store a partial precision result. In the absence of an explicit modifier, the instruction must be performed at full precision (regardless of the input precision). Examples: dcl_texcoord0_pp t1 cmp_pp r0, r1, r2, r3 Saturate The saturate instruction modifier (_sat) saturates (or clamps) the instruction result to the range [0, 1] before writing to the destination register. The _sat instruction modifier can be used with any instruction except frc - ps, sincos - ps, and any tex* instructions. For ps_2_0, ps_2_x, and ps_2_sw, the _sat instruction modifier cannot be used with instructions writing to any output registers (Output Color Register or Output Depth Register). This restriction does not apply to ps_3_0 and above. Example: dp3_sat r0, v0, c1 Related topics
https://docs.microsoft.com/en-us/windows/desktop/direct3dhlsl/dx9-graphics-reference-asm-ps-instructions-modifiers-ps-2-0
2018-08-14T14:13:27
CC-MAIN-2018-34
1534221209040.29
[]
docs.microsoft.com
MyBB Documentation › Review Process The Extend platform provides an official, central directory of plugins, themes and other resources for MyBB. The Team assesses submissions in order to prevent issues related to: - licensing, - low quality, - security, - privacy, - unclear code, - incompatibility, - undesirable effects on the Community or the Web. Major problems related to new submissions, when spotted, can prevent them from appearing on the platform. Extension maintainers can additionally request technical reviews of specific builds (versions) to make it possible for other users to easily find safe extensions. Project Approval Submissions of new projects, along with the initial build, must be reviewed by the MyBB staff in order to appear on the platform. Projects submitted by Approved Developers are added immediately. The Project Approval process involves basic checks performed to reduce duplicate submissions and prevent potential abuse of the Extend platform. Build Review Selected builds can be submitted for review by project authors. Build-level reviews are more intensive and can be interpreted as limited security audits (especially for plugin submissions). If the review passes successfully, the build is visually marked as reviewed and safe for use. While the licensing and overall quality are checked for submissions of all types, the principles of technical assessment differ and are explained below. Plugins Plugin packages’ structure and types of files and operations can vary depending on intended purpose; however, the package and its code should: - not overwrite MyBB’s core files — hooks should be used instead, - not cause negative, unexpected or undocumented effects, - be free of vulnerabilities and security issues, - be easy to read and understand, which includes proper indenting, self-descriptive naming of variables, functions, classes, namespaces and files, consistency across the codebase and following a common style guide (like PSR-2), - be optimized to reduce unnecessary resource usage, - be compatible with the latest versions of the web stack (PHP, MyBB-supported databases and HTTP servers). Themes Theme packages should be limited to: - XML files containing the theme data intended to be processed by MyBB, - the default package’s index.html files, - front-end design & UI functionality files (like JavaScript files, images in common graphic extensions), - metadata files (like licenses or README files). Loading of resources from external locations, excluding common CDN sites, is not allowed. Translations Translation packages should be limited to: - PHP files that begin with the <?php tag and contain either: - string values assigned to the $langinfo variable in language manifest files and the $l variable in standard language files, which should not contain or cause output of any additional executable code (CSS, HTML, PHP, etc.) when compared with the default language package, - PHP comments, - empty lines. - the default package’s index.html files, - images with texts in the translated language, - metadata files. Graphics Graphics-related submissions should only contain files in common raster or vector graphic formats. Optional metadata files are also allowed. Hosting code in git repositories We encourage authors to host and maintain their extensions with the git version control system, which is also used for MyBB core development. Platforms like GitHub, GitLab or Bitbucket provide free hosting of git repositories which allow easier contribution, management and code inspection.
Using code repositories can often speed up the approval and review process of Extend submissions.
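As a small illustration of the translation rules above, a standard language file consists only of the opening <?php tag, comments, empty lines and string assignments to $l; the file name and keys below are made-up examples, not actual MyBB language strings:

<?php
// example_plugin.lang.php (hypothetical file name)
// Only comments, empty lines and $l string assignments are allowed here.

$l['example_plugin_title'] = "Example Plugin";
$l['example_plugin_saved'] = "Your settings have been saved.";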
https://docs.mybb.com/extend/review-process/
2018-08-14T13:52:59
CC-MAIN-2018-34
1534221209040.29
[]
docs.mybb.com
The Identification and Reduction of Risk in Project Management
Thesis abstract
The main goal of this report is to discuss the process of identification and risk reduction in project management. The report will target two vital tools and techniques which are crucial for risk management and its significance. The level of failures among large, medium and even small projects is such that the topic of risk and management analysis gives room for detailed research to provide answers to a number of questions. Among the many favoured queries, one of the key questions asked is whether companies are genuinely aware of the different methods and techniques available for risk and management analysis in today's world. Or, to articulate it the other way, whether too many projects exist which lack a well designed and thought-out risk assessment process. The reasons could be numerous. However, the most common reason could be the lack of qualified project managers. Another question could be the need for in-depth research in order to complement what currently exists. This article does not focus on answering all the questions directly. The aim, however, is to review and explain the context, the benefits and the importance of risk management. Many projects fail due to unforeseen or unavoidable reasons (as per Chapman and Ward). This does not mean that many projects cannot also falter due to avoidable reasons.
Thesis contents
- Literature review - Introduction - Chasing the risk - Professional bodies' contribution - Conclusion - Success and Failure in Project Management - Introduction - Measuring the project's outcome - Conclusion - The Identification and Reduction of the Risk - Introduction - Risk management - Riskman methodology - A case of risk management
Excerpts from the thesis
[...] The definition baseline: concerns the particular route settled upon to achieve the project requirements. It is the target in terms of high-level product design, function and performance, the costs involved, the contractual aspects and the plan committed (CARTER, B. HANCOCK, T. MORIN, JM. ROBINS, N). The technical baseline: is the response to the definition baseline requirements, which embodies finalising the design of the product architecture and its implementation, and the means of ensuring that it satisfies the definition requirements (CARTER, B. [...]
[...] [Figure 3 - Work breakdown structure of the "Cross-country Petroleum Pipeline" project: a diagram showing the project level, the work-package level (pipes and SCADA system construction, station construction) and the activity level of each work package (laying pipes in normal terrain, across river crossings, across various terrain, in slushy terrain and in offshore locations; pump, delivery and scraper stations; offshore terminal; survey, land acquisition, statutory clearance, power supply, design and detailed engineering, material procurement, works contract, implementation).] Appendix 6: Table 8 - Probability and severity of risk factors; Table 9 - The cost data (in millions) for each package against various responses. Appendix 7: Figure 4 - Decision tree for the pipeline laying work package; Figure 5 - Decision tree for the river crossing work package; Figure 6 - Decision tree for the station construction work package; Figure 7 - Decision tree for the telecommunication and cathodic protection work package. Appendix 8: Table 10 - The EMV for the pipeline laying project; Table 11 - The EMV for pipeline laying across rivers; Table 12 - The EMV for station construction; Table 13 - The EMV for the telecommunication and SCADA system; Table 14 - The decisions emerging from the decision tree approach to risk management for each work package. Bibliography: Baccarini, D., "Risk management Australian Style Theory vs. Practice", Proceedings of the Project Management Institute Annual Seminars & Symposium, Nov. Carter, B. [...]
[...] The decision options are located at the lowest level. Then comes the prioritisation procedure; the decision-maker aims at the determination of the importance of the elements at each level of the hierarchy. Then, the project risk manager compares pairwise the elements of each level, taking into consideration their importance to make the decision that is being considered. This enables the creation of comparison matrices (Table, appendix 4). The next phase, in which relative weights are derived for the elements of each level, is the computing phase. [...]
[...] Max Wideman (1990, 2001) suggests the "dimensions of the project environment". Today, project managers need to be attuned to the cultural, organisational and social environments of the project[xxvii]. He emphasises the need to identify the project stakeholders and specifically their ability to affect its successful outcome. Max Wideman (1990, 2001) defines the dimensions of the project environment as follows: the project time environment, the internal project culture, the original corporate culture, and the external social surroundings. He concluded by arguing that the satisfaction of all the participants may reflect the degree of success of a project. [...]
[...] Of these reported details on a failed IT project. Key Findings Over of the projects that were analyzed were deemed to have failed by the respondents. More than three quarters blew their schedules by 30% or more; more than half exceeded their budgets by a substantial margin. Considering that an estimated $25 billion is spent on IT application development in Canada annually, the survey data clearly indicate that unbudgeted IT project expenditures must run into the billions of dollars. The main causes of project failure that were identified were: 1. [...]
About the author: Company director - Customer Support Manager, Management organisation - Level - Expert - Course of study - MBA in...
Thesis details - Date of publication - 2007-01-12 - Date of last update - 2007-01-12 - Language - English - Format - Word - Type - thesis - Number of pages - 56 pages - Level - expert - Downloaded - 44 times - Validated by - the review committee
https://docs.school/business-comptabilite-gestion-management/management-et-organisation/memoire/identification-reduction-risque-management-projet-22142.html
2018-08-14T14:25:46
CC-MAIN-2018-34
1534221209040.29
[array(['https://ncd1.docsnocookie.school/vapplication/image/icons/cat-BC.png', None], dtype=object) array(['https://ncd1.docsnocookie.school/vapplication/image/icons/cat-BC.png', None], dtype=object) ]
docs.school
ParquetSerDe A serializer to use for converting data to the Parquet format before storing it in Amazon S3. For more information, see Apache Parquet. Contents - BlockSizeBytes The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Kinesis Data Firehose uses this value for padding calculations. Type: Integer Valid Range: Minimum value of 67108864. Required: No - Compression The compression code to use over data blocks. The possible values are UNCOMPRESSED, SNAPPY, and GZIP, with the default being SNAPPY. Use SNAPPY for higher decompression speed. Use GZIP if the compression ratio is more important than speed. Type: String Valid Values: UNCOMPRESSED | GZIP | SNAPPY Required: No - EnableDictionaryCompression Indicates whether to enable dictionary compression. Type: Boolean Required: No - MaxPaddingBytes The maximum amount of padding to apply. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 0. Type: Integer Valid Range: Minimum value of 0. Required: No - PageSizeBytes The Parquet page size. Column chunks are divided into pages. A page is conceptually an indivisible unit (in terms of compression and encoding). The minimum value is 64 KiB and the default is 1 MiB. Type: Integer Valid Range: Minimum value of 65536. Required: No - WriterVersion Indicates the version of row format to output. The possible values are V1 and V2. The default is V1. Type: String Valid Values: V1 | V2 Required: No See Also For more information about using this API in one of the language-specific AWS SDKs, see the following:
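As a sketch of where these settings fit, the following JSON shows a ParquetSerDe block as it might appear inside a delivery stream's output format configuration when record format conversion to Parquet is enabled; the surrounding structure is abbreviated, and the values shown are simply the documented defaults and valid values:

"OutputFormatConfiguration": {
  "Serializer": {
    "ParquetSerDe": {
      "BlockSizeBytes": 268435456,
      "Compression": "SNAPPY",
      "EnableDictionaryCompression": true,
      "MaxPaddingBytes": 0,
      "PageSizeBytes": 1048576,
      "WriterVersion": "V1"
    }
  }
}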
https://docs.aws.amazon.com/firehose/latest/APIReference/API_ParquetSerDe.html
2018-09-18T21:35:29
CC-MAIN-2018-39
1537267155702.33
[]
docs.aws.amazon.com
API Builder Standalone Create a custom flow-node This document describes how to create a custom flow-node. Introduction As an example of how to write a flow-node, we will examine creating and customizing a sample encodeURI flow-node that URI encodes a string. Creating and customizing the sample flow-node includes the following steps: Create the project Customize the flow-node definition Customize the flow-node method implementation Prerequisites Install API Builder per the getting started instructions in the API Builder Getting Started Guide. Install the Axway Flow SDK per the install instructions in the Axway Flow SDK. Create the sample encodeURI flow-node This tutorial will demonstrate how to create a custom flow-node for use in the API Builder Flow editor UI. For this example, we will create a flow node that will encode a URI given a string as an input using the encodeURI function. Step 1: Create the project Create and build the sample encodeURI flow-node. Create a new flow-node plugin axway-flow -n encodeuri -d "URI encoder." cd api-builder-plugin-fn-encodeuri npm install npm run build Step 2: Customize the flow-node definition Customize the encodeURI flow-node definition in the index.js file. const sdk = require('axway-flow-sdk'); const action = require('./action'); function getFlowNodes() { const flownodes = sdk.init(module); flownodes .add('encodeuri', { name: 'Encode URI', icon: 'icon.svg', description: 'URI encoder.', category: 'utils' }) .method('encode', { name: 'Encode URI', description: 'Encodes a URI by replacing each instance of certain characters with UTF-8 encodings.' }) .parameter('uri', { description: 'The URI to encode.', type: 'string' }) .output('next', { name: 'Next', description: 'The URI was encoded successfully.', context: '$.encodedURI', schema: { type: 'string' } }) .action(action); return Promise.resolve(flownodes); } exports = module.exports = getFlowNodes; To explain what occurs in the index.js file, we will break the file down piece by piece. index.js is exporting a function that returns a promise that will resolve with the flow-node specifications. This approach allows for the asynchronous creation of the flow-node specifications which can be used, for example, to load resources from the network or perform asynchronous parsing. In the sample flow-node plugin, the actual specification is defined in getFlowNodes(). Describe the flow-node, name, description, category, and icon: .add('encodeuri', { name: 'Encode URI', icon: 'icon.svg', description: 'URI encoder.', category: 'utils' }) The name is the text that is displayed in the Flow Editor. The default icon is a placeholder (a star) that should be replaced with a graphic that represents the action of the flow-node. The icon is displayed at 28 pixels x 28 pixels. The category is the section in the Flow Editor tool panel where the flow-node is contained. Add a method to the flow-node and describe its parameters: .method('encode', { name: 'Encode URI', description: 'Encodes a URI by replacing each instance of certain characters with UTF-8 encodings.' }) .parameter('uri', { description: 'The URI to encode.', type: 'string' }) A method called encode, that is displayed in the Flow Editor as Encode URI, was added. The encode method has a single parameter.
If there was more than one parameter, we would repeat the .parameter(name, schema) block. The second value in the parameter method is a JSON Schema that describes the parameter type. Describe the possible outputs from the method: .output('next', { name: 'Next', description: 'The URI was encoded successfully.', context: '$.encodedURI', schema: { type: 'string' } }) The outputs section defines the possible outcomes of the flow-node. In this simple case there is just one output; however, flow-nodes can have multiple outputs with different return types. For example, this flow-node could have added an error output to indicate that encoding failed. Define the implementation: .action(action); The action() expects a function that will be passed the request details parameter and a callback object parameter. Step 3: Customize the flow-node method implementation To simplify management of the code, the starter project puts the implementation of the methods in the action.js file. There is not a requirement to follow this pattern, you can structure your project how best suits your needs. exports = module.exports = function (req, cb) { const uri = req.params.uri; if (!uri) { return cb('invalid argument'); } cb.next(null, encodeURI(uri)); }; This is a simple scenario, but it highlights the main features. The parameters for the flow-node method are accessed under the req.params parameter. In this example, the parameter for the encode method is defined as uri: .parameter('uri', { description: 'The URI to encode.', type: 'string' }) The logic checks that the parameter is set. If uri is not set, it fires a generic error callback. return cb('invalid argument'); These errors are not handled and will abort the flow execution. In general, avoid doing this for any expected error scenarios. If there are known error situations, it is better to define an output for those scenarios and allow the flow designer the flexibility to specify what to do when an error occurs. If uri is set, the fallback for the next output is fired. The name of this callback will match the name of the output defined in the method. For example, if you defined an output encoderError, then there would be a callback cb.encoderError(). The encoded string is passed to the callback as the methods output value. Related Links
https://docs.axway.com/bundle/API_Builder_4x_allOS_en/page/create_a_custom_flow-node.html
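To round out the encodeURI example above: the page notes that expected failures are better routed through a dedicated output than through the generic error callback. The sketch below shows what action.js might look like with such an output; the cb.encoderError() callback name follows the naming convention described above, but the matching .output('encoderError', ...) declaration in index.js is an assumption, not part of the generated starter project.

exports = module.exports = function (req, cb) {
    const uri = req.params.uri;
    if (!uri) {
        // Generic error callback: aborts the flow, so reserve it for programming errors.
        return cb('invalid argument');
    }
    try {
        // encodeURI can throw a URIError, e.g. for lone surrogate characters.
        cb.next(null, encodeURI(uri));
    } catch (err) {
        // Route the expected failure to a dedicated output so the flow designer
        // can decide what happens next (requires a matching output definition in index.js).
        cb.encoderError(null, err.message);
    }
};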
Using Transformers Transformers convert message payloads to formats expected by their destinations. Mule provides many standard transformers, which you configure using predefined elements and attributes in your Mule XML configuration file. You can also configure custom transformers using the <custom-transformer> element, in which you specify the fully qualified class name of the custom transformer class. For more information on creating and configuring a custom transformer, see Creating Custom Transformers. Standard transformers are easier to use than custom transformers. You don’t need to know the Java name of the transformer, and all properties are explicitly declared in a Mule configuration schema. Following is an example of declaring the standard Append String transformer, which appends string text to the original message payload: <append-string-transformer If the original message payload was the string "foo", the transformer above would convert the string to "foo … that’s good to know!". The Available Transformers section of this page describes all the standard transformers provided with Mule. Additionally, many transports and modules have their own transformers, such as the ObjectToJMSMessage transformer for the JMS transport. Configuring Transformers You can configure a transformer locally or globally. You configure a local transformer right on the endpoint or in a flow or where you want to apply it, whereas you configure a global transformer before any <model> or <flow> elements in your Mule configuration file and then reference it. For example, the following code defines two global transformers, which are referenced from two different places: Chaining Transformers You can chain transformers together so that the output from one transformer becomes the input for the next. To chain transformers, you create a space-separated list of transformers in the transformer-refs or responseTransformer-refs attributes or by creating multiple <transformer> elements as shown above. For example, this chain ultimately converts from a ByteArray to InputStream: transformer-refs="ByteArrayToString StringToObject ObjectToInputStream" You could also configure this as follows: Note that if you specify transformer chains, any default transformers or discoverable transformers are not applied. If you want to use those transformers, you must specify them explicitly with the other chained transformers. Transformation Best Practices Mule has an efficient transformation mechanism. Transformers are applied to inbound or outbound endpoints, and the data is transformed just before it is sent or received from an endpoint. Transformers can be concatenated, so it is simple to perform multiple transformations on data in transit. There is no one standard approach for how and where transformations should occur. Some maintain that because transformation should always be applied on inbound/outbound data, transformations should be available as part of the enterprise service bus instead of inside the components. This approach matches the concepts of Aspect Oriented Programming (AOP). Others conclude that it is far more efficient to encode the transformation logic into the components themselves. In the second case, however, there is no distinction between code that is related to a business process and code that is generic enough to be reused, which contradicts the philosophy of an enterprise service bus. 
While there is no industry best practice, MuleSoft recommends that developers examine their transformation logic to see if it will always be used (AOP) or if it is specific to a business process. In general, if it’s always used, you should use a transformer, and if it is specific to a single business process, it should be part of the component. Note the following cases where you should not configure a transformer: Default transformers: some transports have default transformers that are called by default, but only if you don’t configure explicit transformations. Discoverable transformers: some transformers can be discovered and used automatically based on the type of message. You do not configure these transformers explicitly. These include custom transformers that have been defined as discoverable. For more information, see Creating Custom Transformers. Available Transformers Following are the transformers available with Mule. Some transformers are specific to a transport or module. For more information, see Transports Reference and Modules Reference. For a complete reference to the elements and attributes for the standard Mule transformers, see Transformers Configuration Reference. Basic The basic transformers are in the org.mule.transformer.simple package. They do not require any special configuration. For details on these transformers, see Transformers Configuration Reference. XML The XML transformers are in the org.mule.module.xml.transformer package. They provide the ability to transform between different XML formats, use XSLT, and convert to POJOs from XML. For information, see XML Module Reference. JSON The JSON transformers are in the org.mule.module.json.transformers package. They provide the ability to work with JSON documents and bind them automatically to Java objects. For information, see Native Support for JSON. Scripting The Scripting transformer transforms objects using scripting, such as JavaScript or Groovy scripts. This transformer is in the org.mule.module.scripting.transformer package. Encryption The encryption transformers are in the org.mule.transformer.encryption package. Compression The compression transformers are in the org.mule.transformer.compression package. They do not require any special configuration. Encoding The encoding transformers are in the org.mule.transformer.codec package. They do not require any special configuration. The Email transport provides several transformers for converting from email to string, object to MIME, and more. For details, see Email Transport Reference. File The File transport provides transformers for converting from a file to a byte array (byte[]) or a string. For details, see File Transport Reference. HTTP The HTTP connector provides several transformers for converting an HTTP response to a Mule message, map or string, and for converting a message to an HTTP request or response. For details, see HTTP Connector. JDBC *Enterprise* The Mule Enterprise version of the JDBC transport provides transformers for moving CSV and XML data from files to databases and back. For details, see JDBC Transport Reference. JMS The JMS Transport Reference and Mule WMQ Transport Reference (enterprise only) both provide transformers for converting between JMS messages and several different data types. Strings and Byte Arrays The Multicast Transport Reference and TCP Transport Reference both provide transformers that convert between byte arrays and strings. XMPP The XMPP transport provides transformers for converting between XMPP packets and strings. 
For details, see XMPP Transport Reference. Custom Mule supports the ability to build custom transformers to meet specific data conversion needs in your application. Common Attributes Following are the attributes that are common to all transformers. ignoreBadInput If set to true, the transformer ignores any data that it does not know how to transform, and any transformers following it in the current chain are still called. If set to false, the transformer also ignores any data that it does not know how to transform, but no further transformations take place.
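As a rough illustration of the pieces described on this page, the hedged snippet below declares one standard and one custom transformer globally and then chains them on an endpoint; the class com.example.MyPayloadTransformer, the endpoint path, and the transformer names are placeholders rather than anything shipped with Mule.

<!-- Global declarations (place before any <model> or <flow> elements) -->
<append-string-transformer name="AppendSuffix" message=" ... that's good to know!"/>
<custom-transformer name="MyPayloadTransformer" class="com.example.MyPayloadTransformer"/>

<!-- Chaining: the output of AppendSuffix becomes the input of MyPayloadTransformer -->
<vm:inbound-endpoint path="in" transformer-refs="AppendSuffix MyPayloadTransformer"/>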
https://docs.mulesoft.com/mule-user-guide/v/3.8/using-transformers
PDD 3: Calling Conventions Abstract Parrot's inter-routine calling conventions. Synopsis Not applicable. Description. Common Features of Argument/Return Opcodes.) Flag Words; Common Flag Word Bits. Passing Arguments, Returning Values. Flag Word Bits For 'Setting' These bits of each flag word have these meanings specific to set_args and set_returns: - 4The value is a literal constant, not a register. (Don't set this bit yourself; the assembler will do it.) CONSTANT - 5If this bit is set on a PMC value, then the PMC must be an aggregate. The contents of the aggregate, rather than the aggregate itself, will be passed.If the FLAT(P only) - 6 (unused) - 7 (unused) - 8 (unused) - 9When the FLAT bit is also set, behavior is as described above in the "FLAT" section. Otherwise, this bit may only be set on a unique string constant specifying the name of the next argument (or returned value). NAMED( FLATor string constant only) NAMEDbit is also set, the aggregate will be used as a hash; its contents, as key/value pairs, will be passed as named arguments. The PMC must implement the full hash interface. {{ GH #252: Limit the required interface. }}If the NAMEDbit is not set, the aggregate will be used as an array; its contents will be passed as positional arguments.The meaning of this bit is undefined when applied to integer, number, and string values. Accepting Parameters, Accepting Return Values.) Flag Word Bits For 'Getting' These bits of each flag word have these meanings specific to get_params and get_results: - 4 (unused) - 5If SLURPY(P only) - 6 (unused) - 7If this bit is set on a register for which no value has been passed, no exception will be raised; rather, the register will be set to a default value: a Null PMC for P, an empty string for S, or zero for N or I. OPTIONAL - 8An I register with this bit set is set to one if the immediately preceding OPTIONAL register received a value; otherwise, it is set to zero. If the preceding register was not marked OPTIONAL, the behavior is undefined; but we promise you won't like it. OPT_FLAG(I only) - 8XXX - PROPOSED ONLY - XXXIf. READONLY(P only) - 8 (unused for S and N) - 9When the SLURPY bit is also set, behavior is as described above in the "SLURPY" section. Otherwise, this bit may only be set on a unique string constant specifying the name of the next parameter (or returned value). NAMED( SLURPYor string constant only) NAMEDbit is also set, the aggregate will be the HLL-specific hash type and the contents will be all unassigned _named_ arguments.If the NAMEDbit is not set, the aggregate will be the HLL-specific array type and the contents will be all unassigned positional arguments. Overflow and underflow. Ordering of named values (outgoing) Named values (arguments, or values to return) must be listed textually after all the positional values. FLAT and non- FLAT values may be mixed in any order. Ordering of named targets (incoming) Named targets (parameters, or returned values) must appear after all the positional targets. A SLURPY positional target, if present, must be the last positional target; a SLURPY named target, if present, must be the last named target. So the acceptable ordering of targets is: - positional non-SLURPY (any number) - positional SLURPY array (optional) - NAMED non-SLURPY (any number) - NAMED SLURPY hash (optional) Mixing named and positional values Positional targets can only be filled with positional values. Named targets can be filled with either positional or named values. 
However, if a named target was already filled by a positional value, and then a named value is also given, this is an overflow error. Type Conversions Unlike the set_* opcodes, the get_* opcodes must perform conversion from one register type to another. Here are the conversion rules: - When the target is an I, N, or S register, storage will behave like an assign(standard conversion). - When the target and source are both P registers, storage will behave like a set(pass by reference). - When the target is a P register and the source is an integer, the P will be set to a new Integer[1] which has been assigned the given integer. - When the target is a P register and the source is a number, the P will be set to a new Float[1] which has been assigned the given number. - When the target is a P register and the source is a string, the P will be set to a new String[1] which has been assigned the given string. [1] or some other type specified by the current HLL type map, which may substitute an alternative type for each default low-level Parrot type (array, hash, string, number, etc.). Implementation Not applicable. Bugs Required features are missing: - Specific exceptions to throw for specific errors. PIR Syntax Examples Function Calls .local pmc foo, i, ar, y, p, value, kw, a, b, c, z # ... foo(1, i) # 2 positional arguments foo(x, ar :flat, y) # flattening array foo(p, 'key' => value) # named argument foo(p, value :named('key')) # the same foo(kw :named :flat) # a flattening hash # all together now: three positional (one flat) with two named (one flat) foo(a, b, c :flat, 'x' => 3, 'y' => 4, z :flat :named('z')) Parameters .sub foo .param int i # positional parameter .param pmc argv :slurpy # slurpy array .param pmc value :named('key') # named parameter .param int x :optional # optional parameter .param int has_x :opt_flag # flag 0/1 x was passed .param pmc kw :slurpy :named # slurpy hash # ... .end Return Values .sub foo .local pmc i, ar, value .return (i, ar: flat, value :named('key') ) .end Call Results .local pmc x, foo, i, j, ar, value x = foo() # single result (i, j :optional, ar :slurpy, value :named('key') ) = foo() References pdd23_exceptions.pod
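To make the type-conversion rules above concrete, here is a minimal PIR sketch written in the style of the examples in this document (it assumes the standard say opcode and the default HLL type map); passing the integer constant 42 to a pmc parameter boxes it into a new Integer PMC, per the rules listed under "Type Conversions".

.sub 'main' :main
    .local pmc boxed
    boxed = box_it(42)       # I source into P target: a new Integer PMC is created
    say boxed                # prints 42
.end

.sub 'box_it'
    .param pmc value
    .return (value)
.end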
http://docs.parrot.org/parrot/latest/html/docs/pdds/pdd03_calling_conventions.pod.html
.. _static_directive:

``static``
----------

Use of the ``static`` ZCML directive allows you to serve static resources (such as JavaScript and CSS files) within a Pyramid application. This mechanism makes static files available at a name relative to the application root URL.

Attributes
~~~~~~~~~~

``name``
  The (application-root-relative) URL prefix of the static directory. For example, to serve static files from ``/static`` in most applications, you would provide a ``name`` of ``static``.

``cache_max_age``
  The number of seconds of cache lifetime advertised via the ``Expires`` and/or ``Cache-Control`` headers when any static file is served from this directive. This defaults to 3600 (one hour). Optional.

``permission``
  Used to specify the :term:`permission` required by a user to execute this static view. This value defaults to the string ``__no_permission_required__``. The ``__no_permission_required__`` string is a special sentinel which indicates that, even if an :term:`authorization policy` is in effect, viewing these static resources does not require a particular permission.

Examples
~~~~~~~~

.. topic:: Serving Static Files from an Absolute Path

   .. code-block:: xml
      :linenos:
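
      <!-- Hedged reconstruction: the original example markup was lost in extraction.
           "path" points at the directory of files to serve; /var/www/static is an
           arbitrary absolute path, and cache_max_age is optional. -->
      <static
         name="static"
         path="/var/www/static"
         cache_max_age="3600"
         />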
http://docs.pylonsproject.org/projects/pyramid_zcml/en/latest/_sources/zcml/static.txt
Hurricane of 1938 The words and pictures in this manuscript are the property of the author, Richard V. Simpson. Readers who take excerpts of the narrative or download images for use in their own documents should reference the source of the words or pictures. Submissions from 2012 The Great Hurricane and Tidal Wave of 1938: Scenes of the Disaster in Rhode Island’s East Bay, Richard V. Simpson
http://docs.rwu.edu/hurricane_1938/
The secret key must be available to every part of the SonarQube™ installation (SonarQube™ server, code analyzers, etc.). The algorithm is AES with a 128-bit key. Note that a 256-bit cipher is not used because it is not supported by default on all Java Virtual Machines (see this article).

How to generate the secret key

A unique secret key is shared between all the parts of the SonarQube™ installation. Once the key is generated, point to its file with the sonar.secretKeyPath property in conf/sonar.properties and restart the server. If you want to encrypt properties that are used by code analyzers, copy the key file to all the required machines and use the same sonar.secretKeyPath property to change the default location. When this is done, you can start encrypting settings.

How to encrypt settings

The administration console used to generate the secret key also allows you to encrypt text values. Simply copy the encrypted text into the appropriate locations.
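A minimal sketch of the corresponding configuration, assuming a Unix-style location for the key file (the path is up to you):

# conf/sonar.properties
# Absolute path to the shared secret key used to encrypt and decrypt settings (AES 128)
sonar.secretKeyPath=/opt/sonar/sonar-secret.txt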
http://docs.codehaus.org/pages/viewpage.action?pageId=231081730
This package provides the following threading classes:

- Mutex - multi-thread variable protection.
- Threads - raii style one-shot thread class.
- Threadable - an inheritable thread interface.

Include the following at the top of any translation unit that uses these container classes.

The @ref ecl::Thread "Thread" class is a raii style object which initialises and automatically starts a thread when constructed and manages the thread cleanly when the thread object goes out of scope.

<b>Construction:</b>

- inherit the Threadable class.
- implement the runnable() method.
- call the start() method to begin running in a separate thread.

Note that it will not spawn multiple threads - it has a check that ensures it will only execute one thread at any point in time. It is designed to be something more akin to a thread function object rather than a thread factory. A sketch of this pattern follows the changelog below.

Unit tests:

- src/test/mutex.cpp
- src/test/threads.cpp
- src/test/threadable.cpp

ChangeLog:

- <b>May 11</b> : Updated exception handling (now optional).
- <b>May 10</b> : Mutex win32 implementation.
- <b>May 10</b> : Cmake win32 framework.
- <b>Jan 10</b> : @ref ecl::Threadable "Threadable" implements the thread by inheritance concept.
- <b>Jan 10</b> : Adds a constructor for @ref ecl::Thread "threads" that allows configuration of the stack size allocation.
- <b>Jul 09</b> : Incorporates the use of function object loading (refer to ecl_utilities).
- <b>Jul 09</b> : Converted @ref ecl::Thread "threads" to raii style.
- <b>Jun 09</b> : A locking class, the @ref ecl::Mutex "mutex", for threads.
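The following is a hedged C++ sketch of the Threadable pattern summarised above (inherit, implement runnable(), call start()). The header name is an assumption based on the package layout, and real code would also need some way to wait for the worker before exiting; check the package headers and the unit tests listed above for the exact API.

#include <iostream>
#include <ecl/threads.hpp>   // assumed umbrella header for this package

// Inherit the Threadable interface and implement runnable().
class Counter : public ecl::Threadable {
public:
    void runnable() {
        for (int i = 0; i < 5; ++i) {
            std::cout << "count: " << i << std::endl;
        }
    }
};

int main() {
    Counter counter;
    counter.start();   // runs runnable() in a separate thread; it will not spawn a second thread
    // ... main thread continues; see src/test/threadable.cpp for complete usage
    return 0;
}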
http://docs.ros.org/en/melodic/api/ecl_threads/html/
February 2021: My ACP Decisions Release Notes Here are the latest enhancements we've made to improve your experience across the My ACP Decisions platforms. What's New in the My ACP Decisions Content Library We've added 2 videos and 12 documents in Spanish 🇪🇸 and Persian 🇮🇷. Advance Directives - Choosing a Health Care Agent Before Surgery (Handout) 🇪🇸 COVID-19 - COVID-19 mRNA Vaccines (Pfizer and Moderna) (Condensed) 🇪🇸 - COVID-19 mRNA Vaccines (Pfizer and Moderna) (Extended) 🇪🇸 - COVID-19 mRNA Vaccines (Pfizer and Moderna) (Handout) (Extended) 🇪🇸 - COVID-19 mRNA Vaccines (Pfizer and Moderna) (Handout) (Condensed) 🇪🇸 CPR - CPR: Advanced Cancer (Handout) 🇮🇷 - CPR: Advanced Disease (Handout) 🇮🇷 - CPR: Advanced Heart Failure (Handout) 🇮🇷 - CPR: General Overview for Hospitalized Patients with Serious Illness (Handout) 🇮🇷 Creating a Legacy - Creating a Legacy (Handout) 🇪🇸 Goals of Care - Goals of Care: Advanced Cancer (Handout) 🇮🇷 - Goals of Care: Advanced Disease (Handout) 🇮🇷 - Goals of Care: Advanced Heart Failure (Handout) 🇮🇷 - Goals of Care: General Overview (Handout) 🇮🇷 Catching Up On Our Most Recent Blog Posts Spotlight on Vulnerable Populations: Using Patient Decision Aids to Promote Healthcare Equity (3 min read) Patients should never feel they have received suboptimal care or were treated differently because of their race, ethnicity, or socioeconomic status. However, healthcare disparities are an ongoing reality for minority and underserved communities throughout the United States. Any Questions or Feedback? Let us know what you think. We appreciate your input!
https://docs.app.acpdecisions.org/article/759-february-2021-release-notes
Overview of summary-based search acceleration

Searches over large datasets can take a long time to complete. This isn't a problem if you run such searches on an infrequent basis. But if you are like many users of Splunk Enterprise, you do not have this luxury. Large dataset searches must be run on schedules, made the basis for panels in popular dashboards, or run ad hoc frequently by large numbers of users.

Splunk Enterprise offers several approaches to speeding up searches of large datasets. One of these approaches is summary-based search acceleration, where you create a data summary that is populated by background runs of a slow-completing search. The summary is a smaller dataset that contains only data that is relevant to your search. When you run the search against the summary, the search should complete much faster.

There are three methods of summary-based search acceleration:

- Report acceleration - Uses automatically-created summaries to speed up completion times for certain kinds of event searches.
- Data model acceleration - Uses automatically-created event summaries to speed up completion times for data-model-based searches.
- Summary indexing - Populates a summary index using a scheduled search that you define. You can create summary indexes of event data, or you can convert your event data into metrics and summarize it in metrics summary indexes.

Report and data model acceleration work only with event data. You can create summary indexes for either event data or metric data. For a comparison of these methods, see Comparing summary-based search acceleration methods. See also Configure batch mode search.
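As a rough sketch of how each method surfaces in configuration (the stanza names, searches, and summary index name below are illustrative, and the exact attribute set varies by Splunk version):

# savedsearches.conf - report acceleration for a qualifying event search
[Slow Sales Report]
search = index=sales sourcetype=transactions | stats sum(amount) by region
auto_summarize = 1

# datamodels.conf - data model acceleration
[Sales]
acceleration = 1
acceleration.earliest_time = -7d

# savedsearches.conf - summary indexing populated by a scheduled search
[Hourly Sales Rollup]
search = index=sales | sistats sum(amount) by region
action.summary_index = 1
action.summary_index._name = sales_summary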
https://docs.splunk.com/Documentation/Splunk/8.1.2/Knowledge/Aboutsummaryindexing
class ViewFormatter implements HttpFormatter

A formatter for turning caught exceptions into "pretty" HTML error pages.

For certain known error types, we display pages with dedicated information relevant to this class of error, e.g. a page with a search form for HTTP 404 "Not Found" errors.

We look for templates in the views/error directory. If no specific template exists, a generic "Something went wrong" page will be displayed, optionally enriched with a more specific error message if found in the translation files.

Methods

__construct(Factory $view, TranslatorInterface $translator, SettingsRepositoryInterface $settings)
    No description

ResponseInterface format(HandledError $error, ServerRequestInterface $request)
    Create an HTTP Response to represent the error we are handling. This method receives the error that was caught by Flarum's error handling stack, along with the current HTTP request instance. It should return an HTTP response that explains or represents what went wrong.
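For orientation, here is a hedged PHP sketch of an alternative formatter honoring the same contract; the Laminas\Diactoros response class and the HandledError accessor used below are assumptions, not something documented on this page.

<?php

use Flarum\Foundation\ErrorHandling\HandledError;
use Flarum\Foundation\ErrorHandling\HttpFormatter;
use Laminas\Diactoros\Response\HtmlResponse;
use Psr\Http\Message\ResponseInterface;
use Psr\Http\Message\ServerRequestInterface;

// Minimal alternative formatter: every handled error becomes a bare-bones HTML page.
class PlainErrorFormatter implements HttpFormatter
{
    public function format(HandledError $error, ServerRequestInterface $request): ResponseInterface
    {
        return new HtmlResponse(
            '<h1>Something went wrong</h1>',
            $error->getStatusCode()   // assumed accessor for the HTTP status code
        );
    }
}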
https://api.docs.flarum.org/php/master/flarum/foundation/errorhandling/viewformatter
Index configuration options¶ access_plain_attrs¶ Specifies how search daemon will access index’s plain attributes (bigint, bool, float, timestamp, uint). Optional, default_plain_attrs = mlock access_blob_attrs¶ This mode specifies how index’s blob attributes file is accessed. Optional, default value_blob_attrs = mmap access_doclists¶ This mode defines how index’s doclists file is accessed. Optional, default is file. Possible values are file and mmap. Refer to Accessing index files for detailed explanation of the values. Search daemon can read data from index’s_doclists = mmap access_hitlists¶ This mode defines how index’s hitlists file is accessed. Optional, default is file. Possible values are file and mmap. Refer to Accessing index files for detailed explanation of the values. Search daemon can read data from index’s_hitlists = file attr_update_reserve¶ Sets the space to be reserved for blob attribute updates. Optional, default value is 128k. When blob attributes (MVAs, strings, JSON), are updated, their length may change. If the updated string (or MVA, or JSON) is shorter than the old one, it overwrites the old one in the .SPB file. But if the updated string is longer, attr_update_reserve=1M bigram_freq_words¶ A list of keywords considered “frequent” when indexing bigrams. Optional, default is empty.. Example: bigram_freq_words = the, a, you, i bigram_index¶ Bigram indexing mode. Optional, default is none.. Example: bigram_index = both_freq blend_chars¶ Blended characters list. Optional, default is empty.. Blended characters can be remapped, so that multiple different blended characters could be normalized into just one base form. This is useful when indexing multiple alternative Unicode codepoints with equivalent glyphs. Example: blend_chars = +, &, U+23 blend_chars = +, &->+ blend_mode¶ Blended tokens indexing mode. Optional, default is trim_none. charset_table¶ Accepted characters table, with case folding rules. Optional, default value are latin and cyrillic characters. charset_table is the main workhorse of Manticore ‘A’ as allowed to occur within keywords and maps it to destination char ‘a’ (but does not declare ‘a’ 32 are always treated as separators. Characters with codes 33 to 127, ie. 7-bit ASCII characters, can be used in the mappings as is. To avoid configuration file encoding issues, 8-bit ASCII characters and Unicode characters must be specified in U+xxx form, where ‘xxx’ is hexadecimal codepoint number. This form can also be used for 7-bit ASCII characters to encode special ones: eg. use U+2E to encode dot, U+2C to encode comma. Aliases “english” and “russian” are allowed at control character mapping. Example: # default are English and Russian letters charset_table = 0..9, A..Z->a..z, _, a..z, \ U+410..U+42F->U+430..U+44F, U+430..U+44F, U+401->U+451, U+451 # english charset defined with alias charset_table = 0..9, english, _ So if you want your search to support different languages you will need to define sets of valid characters and folding rules for all of them what can be quite a laborious task. We have performed this task for you by preparing default charset tables, non-cjk and cjk, that comprise non-cjk and cjk-languages respectively. These charsets should be sufficient to use in most cases. 
The languages that are currently NOT supported: - Assamese - Bishnupriya - Buhid - Garo - Hmong - Ho - Komi - Large Flowery Miao - Maba - Maithili - Marathi - Mende - Mru - Myene - Ngambay - Odia - Santali - Sindhi - Sylheti All other languages listed in the following list are supported by default: Unicode languages list. To be able to work with both cjk and non-cjk languages you should set the options in your configuration file as shown below: charset_table = non_cjk ... ngram_len = 1 ngram_chars = cjk In case you don’t need support for cjk-languages you can just omit ngram_len and ngram_chars options. For more information on those see the appropriate documentation sections. dict¶ The keywords dictionary type. Known values are ‘crc’ and ‘keywords’. . Optional, default is ‘keywords’. Keywords dictionary mode (dict=keywords), (greatly) reduces indexing impact and enable substring searches on huge collections. That mode is supported both for disk and RT indexes. CRC dictionaries never store the original keyword text in the index. Instead, keywords are replaced with their control sum value (calculated using FNV64) both when searching and indexing, and that value is used internally in the index. That approach has two drawbacks. First, there is a chance of control sum collision between several pairs of different keywords, growing quadratically with the number of unique keywords in the index. However, it is not a big concern. Manticore alleviated that by pre-indexing all the possible substrings as separate keywords (see min_prefix_len, fixes both these drawbacks. It stores the keywords in the index and performs search-time wildcard expansion. For example, a search for a ‘test*‘prefix could internally expand to ‘test|tests|testing’ query based on the dictionary contents. That expansion is fully transparent to the application, except that the separate per-keyword statistics for all the actually matched keywords would now also be reported. For substring (infix) search extended wildcards may be used. Special symbols like ‘?’ and ‘%’ embedded_limit¶ Embedded exceptions, wordforms, or stopwords file size limit. Optional, default is 16K.. Example: embedded_limit = 32K exceptions¶ Tokenizing exceptions file. Optional, default is empty. Exceptions allow to map one or more tokens (including tokens with characters that would normally be excluded) to a single keyword. They are similar to wordforms in that they also perform mapping, but have a number of important differences. Small enough files are stored in the index header, see embedded_limit for details. ‘+’ as a valid character, but still want to be able search for some exceptions from this rule such as ‘C++’. expand_keywords¶ Expand keywords with exact forms and/or stars when possible. The value can additionally enumerate options such us exact and star. Optional, default is 0 (do not expand keywords). ) (as expand_keywords = 1 or expand_keywords = star,exact) or expansion limited by exact option even infixes enabled for index running -> ( running | =running ) (as expand_keywords = exact) global_idf¶ The path to a file with global (cluster-wide) keyword IDFs. Optional, default is empty (use local IDFs). is suggested (but not required) to use an .idf extension. 
When the IDF file is specified for a given index and OPTION global_idf is set to 1, the engine will use the keyword frequencies and collection documents counts dict.txt --stats switch first, then converting those to .idf format using --buildidf, then merging all .idf files across cluser using --mergeidf. Refer to indextool command reference for more information. Example: global_idf = /usr/local/sphinx/var/global.idf ha_strategy¶ Agent mirror selection strategy, for load balancing. Optional, default is random. The strategy used for mirror selection, or in other words, choosing a specific agent mirror in a distributed index. Essentially, this directive controls how exactly master does the load balancing between the configured mirror agent nodes. The following strategies are implemented: Simple random balancing¶ ha_strategy = random The default balancing mode. Simple linear random distribution among the mirrors. That is, equal selection probability are assigned to every mirror. Kind of similar to round-robin (RR), but unlike RR, does not impose a strict selection order. Adaptive randomized balancing¶: - latency stats are accumulated, in blocks of ha_period_karma seconds; - once per karma period, latency-weighted probabilities get recomputed; - once per request (including ping requests), “dead or alive” flag is adjusted. Currently,: - initial percentages: 0.25, 0.25, 0.25, 0.2%; - observed latencies: 10 ms, 5 ms, 30 ms, 3 ms; - inverse latencies: 0.1, 0.2, 0.0333, 0.333; - scaled percentages: 0.025, 0.05, 0.008333, 0.0833; - renormalized percentages: 0.15, 0.30, 0.05, 0.50.. Round-robin balancing¶. hitless_words¶ Hitless words list. Optional, allowed values are ‘all’, or a list file name. By default, Manticore html_index_attrs¶; html_remove_elements¶ html_strip¶, ‘te<B>st</B>’ text will be indexed as a single keyword ‘test’, however, ‘te<P>st</P>’ will be indexed as two keywords ‘te’ and ‘ ignore_chars¶ Ignored characters list. Optional, default is empty. Useful in the cases when some characters, such as soft hyphenation mark (U+00AD), should be not just treated as separators but rather fully ignored. For example, if ‘-’ is simply not in the charset_table, “abc-def” text will be indexed as “abc” and “def” keywords. On the contrary, if ‘-’ index_exact_words¶ Whether to index the original keywords along with the stemmed/remapped versions. Optional, default is 0 (do not index). index_field_lengths¶ Enables computing and storing of field lengths (both per-document and average per-index values) into the index. Optional, default is 0 (do not compute and store). When index_field_lengths is set to 1, indexer will 1) create a respective length attribute for every full-text field, sharing the same name but with __len_ suffix;, Manticore used a simplified, stripped-down variant of BM25 that, unlike the complete function, did not account for document length. (We later realized that it should have been called BM15 from the start.) Also we added support for both a complete variant of BM25, and its extension towards multiple fields, called BM25F. They require per-document length and per-field lengths, respectively. Hence the additional directive. Example: index_field_lengths = 1 index_sp¶ Whether to detect and index sentence and paragraph boundaries. Optional, default is 0 (do not detect and index). exclamationbruary”). - index_token_filter¶ Index-time token filter for index. Optional, default is empty. 
Index-time token filter gets created by indexer on indexing source data into index or by RT index on processing INSERT or REPLACE statements and let you implement a custom tokenizer that makes tokens according to custom rules. Plugins defined as library name:plugin name:optional string of settings. Example: index_token_filter = my_lib.so:custom_blend:chars=@#& index_zones¶ A list of in-field HTML/XML zones to index. Optional, default is empty (do not index zones). Extended query syntax. Example: index_zones = h*, th, title infix_fields¶ The list of full-text fields to limit infix indexing to. Applies to dict=crc only. Optional, default is empty (index all fields in infix mode). Similar to prefix_fields, but lets you limit infix-indexing to given fields. Example: infix_fields = url, domain inplace_enable¶ Whether to enable in-place index inversion. Optional, default is 0 (use separate temporary files).place_hit_gap¶ In-place inversion fine-tuning option. Controls preallocated hitlist gap size. Optional, default is 0. This directive does not affect searchd in any way, it only affects indexer. Example: inplace_hit_gap = 1M inplace_reloc_factor¶ inplace_reloc_factor fine-tuning option. Controls relocation buffer size within indexing memory arena. Optional, default is 0.1. This directive does not affect searchd in any way, it only affects indexer. Example: inplace_reloc_factor = 0.1 inplace_write_factor¶ inplace_write_factor fine-tuning option. Controls in-place write buffer size within indexing memory arena. Optional, default is 0.1. This directive does not affect searchd in any way, it only affects indexer. Example: inplace_write_factor = 0.1 killlist_target¶ Sets the index(es) that the kill-list will be applied to. Optional, default value is empty. When you use Plain indexes you often need to maintain not a single index, but a set of them to be able to add/update/delete new documents sooner (read Delta index updates). In order to suppress matches in the previous (main) index that were updated or deleted in the next (delta) index you need to: - Create a kill-list in the delta index using sql_query_killlist - Specify main index as killlist_targetin delta index settings: index delta { killlist_target = main:kl } When killlist_target is specified, kill-list is applied to all the indexes listed in it on searchd startup. If any of the indexes from killlist_target are rotated, kill-list is reapplied to these indexes. When kill-list is applied, indexes that were affected save these changes to disk. killlist_target has 3 modes of operation: killlist_target = main:kl. Document ids from the kill-list of the delta index are suppressed in the main index (see sql_query_killlist). killlist_target = main:id. All document ids from delta index are suppressed in the main index. Kill-list is ignored. killlist_target = main. Both document ids from delta index and its kill-list are suppressed in the main index. Multiple targets can be specified separated by comma like killlist_target = index_one:kl,index_two:kl. You can change killlist_target settings for an index without reindexing it by using ALTER: ALTER TABLE delta KILLLIST_TARGET='new_main_index:kl' But since the ‘old’ main index has already written the changes to disk, the documents that were deleted in it will remain deleted even if it is no longer in the killlist_target of the delta index. local¶. Before dist_threads, there also was a legacy solution to configure searchd to query itself instead of using local indexes (refer to agent for the details). 
However, that creates redundant CPU and network load, and dist_threads is now strongly suggested instead. Example: local = chunk1 local = chunk2 The same can be written in one line: local = chunk1,chunk2 (all ‘local’ records will be read left-to-right, top-to-bottom and all the indexes will be merged into one big list. So there is is no difference whether you list them in one ‘local’ line or distribute to several lines). max_substring_len¶: -. Example: max_substring_len = 12 min_infix_len¶ Minimum infix prefix length to index and search. Optional, default is 0 (do not index infixes), and minimum allowed non-zero value is 2. Infix length setting enables wildcard searches with term patterns like ‘start’, ‘end’, ‘middle’, and so on. It also lets you disable too short wildcards if those are too expensive to search for. Perfect word matches can be differentiated from infix infixes and full words, and thus perfect word matches can’t be ranked higher. However, query time might vary greatly, depending on how many keywords the substring will actually expand to. Short and frequent syllables like ‘in’ or ‘ti’ ‘a’ are not allowed for performance reasons. (While in theory it is possible to scan the entire dictionary, identify keywords matching on just a single character, expand ‘a’ to an OR operator over 100,000+ keywords, and evaluate that expanded query, in practice this will very definitely kill your server.) When mininum infix length is set to a positive number, mininum prefix length is considered 1. For dict=keywords word infixing and prefixing cannot be both enabled at the same. For dict=crc it is possible to specify only some fields to have infixes declared with infix_fields and other fields to have prefixes declared with prefix_fields, but it’s forbidden to declare same field in both lists. In case of dict=keywords, beside the wildcard * two other wildcard characters can be used: ?can match any(one) character: t?stwill match test, but not teast %can match zero or one character : tes%will match tesor test, but not testing Example: min_infix_len = 3 min_prefix_len¶ Minimum word prefix length to index. Optional, default is 0 (do not index prefixes). Prefix indexing allows to implement wildcard searching by ‘word. Perfect word matches can be differentiated from prefix prefixes and full words, and thus perfect word matches can’t be ranked higher. When mininum infix length is set to a positive number, mininum prefix length is considered 1. Example: min_prefix_len = 3 min_stemming_len¶ Minimum word length at which to enable stemming. Optional, default is 1 (stem everything). min_word_len¶ Minimum indexed word length. Optional, default is 1 (index everything). Only those words that are not shorter than this minimum will be indexed. For instance, if min_word_len is 4, then ‘the’ won’t be indexed, but ‘they’ will be. Example: min_word_len = 4 mirror_retry_count¶ Same as index_agent_retry_count. If both values provided, mirror_retry_count will be taken, and the warning about it will be fired. mlock¶ Manticore. On Linux platforms where Manticore service is managed by systemd, you can use LimitMEMLOCK=infinity in the unit file. Newer releases use a systemd generator instead of a simple systemd unit (to detect if jemalloc can be used instead of standard malloc). In these cases one should add LimitMEMLOCK to the generator file located usually at /lib/systemd/system-generators/manticore-generator and run systemctl daemon-reload to perform the unit file update. 
If mlock() fails, a warning is emitted, but index continues working. Example: mlock = 1 Warning The functionality of this directive is taken over by access_plain_attrs and access_blob_attrs directives as of 3.0.2. morphology¶ Manticore implements: lemmatizers, stemmers, and phonetic algorithms. - Lemmatizer reduces a keyword form to a so-called lemma, a proper normal form, or in other words, a valid natural language root word. For example, “running” could be reduced to “run”, the infinitive verb form, and “octopi” would be reduced to “octopus”, the singular noun form. Note that sometimes a word form can have multiple corresponding root words. For instance, by looking at “dove” it is not possible to tell whether this is a past tense of “dive” the verb as in “He dove into a pool.”, or “dove” the noun as in “White dove flew over the cuckoo’s nest.” In this case lemmatizer can generate all the possible root forms. - Stemmer reduces a keyword form to a so-called stem by removing and/or replacing certain well-known suffixes. The resulting stem is however not guaranteed to be a valid word on itself. For instance, with a Porter English stemmers “running” would still reduce to “run”, which is fine, but “business” would reduce to “busi”, which is not a word, and “octopi” would not reduce at all. Stemmers are essentially (much) simpler but still pretty good replacements of full-blown lemmatizers. - Phonetic algorithms replace the words with specially crafted phonetic codes that are equal even when the words original are different, but phonetically close. The morphology processors that come with our own built-in Manticore implementations are: - English, Russian, and German lemmatizers; - English, Russian, Arabic, and Czech stemmers; - SoundEx and MetaPhone phonetic algorithms. You can also link with libstemmer library for even more stemmers (see details below). With libstemmer, Manticore also supports morphological processing for more than 15 other languages. Binary packages should come prebuilt with libstemmer support, too. Lemmatizers require a dictionary that needs to be additionally downloaded from the Manticore is also available.. Manticore: - none - do not perform any morphology processing; - lemmatize_ru - apply Russian lemmatizer and pick a single root form; - lemmatize_en - apply English lemmatizer and pick a single root form; - lemmatize_de - apply German lemmatizer and pick a single root form; - lemmatize_ru_all - apply Russian lemmatizer and index all possible root forms; - lemmatize_en_all - apply English lemmatizer and index all possible root forms; - lemmatize_de_all - apply German lemmatizer and index all possible root forms; - stem_en - apply Porter’s English stemmer; - stem_ru - apply Porter’s Russian stemmer; - stem_enru - apply Porter’s English and Russian stemmers; - stem_cz - apply Czech stemmer; - stem_ar - apply Arabic stemmer; - soundex - replace keywords with their SOUNDEX code; - metaphone - replace keywords with their METAPHONE code. - rlp_chinese - apply Chinese text segmentation using Rosette Linguistics Platform - rlp_chinese_batched - apply Chinese text segmentation using Rosette Linguistics Platform with document batching Additional values provided by libstemmer are in ‘lib morphology_skip_fields¶ A list of fields there morphology preprocessors do not apply. Optional, default is empty (apply preprocessors to all fields). Used on indexing there only exact form of words got stored for defined fields. 
Example: morphology_skip_fields = tags, name ngram_chars¶-gram characters cannot appear in the charset_table. Example: ngram_chars = U+3000..U+2FA1F Also you can use an alias for our default N-gram table as in the example below. It should be sufficient in most cases. Example: ngram_chars = cjk ngram_len¶ Manticore and internally split into 1-grams too, resulting in “B C” “D E F” query, still with quotes that are the phrase matching operator. And it will match the text even though there were no separators in the text. Even if the search query is not segmented, Manticore should still produce good results, thanks to phrase based ranking: it will pull closer phrase matches (which in case of N-gram CJK words can mean closer multi-character word matches) to the top. Example: ngram_len = 1 ondisk_attrs¶ Allows for fine-grain control over how attributes are loaded into memory when using indexes with external storage. It is possible to keep attributes on disk. Although, the daemon does map them to memory and the OS loads small chunks of data on demand. This leaves plenty of free memory for cases when you have large collections of pooled attributes (string/JSON/MVA) or when you’re using many indexes per daemon that don’t consume memory. Note that this option also affects RT indexes. When it is enabled, all attribute updates will be disabled, and also all disk chunks of RT indexes will behave described above. However inserting and deleting of docs from RT indexes is still possible with enabled ondisk_attrs. Possible values: - 0 - disabled and default value, all attributes are loaded in memory - 1 - all attributes stay on disk. Daemon loads no files (.spa, .spb). This is the most memory conserving mode, however it is also the slowest as the whole doc-id-list and block index doesn’t load. - pool - only pooled attributes stay on disk. Pooled attributes are string, MVA, and JSON attributes (.spb file). Scalar attributes stored in docinfo (.spa file) load as usual. This option does not affect indexing in any way, it only requires daemon restart. Example: ondisk_attrs = pool #keep pooled attributes on disk Warning The functionality of this directive is taken over by access_plain_attrs and access_blob_attrs directives as of 3.0.2. The option is marked as deprecated and will be removed in future versions. The equivalent values are : * ondisk_attrs = 0 - access_plain_attrs=mmap_preread and access_blob_attrs=mmap_preread * ondisk_attrs = pool - access_plain_attrs=mmap_preread and access_blob_attrs=mmap * ondisk_attrs = 1 - access_plain_attrs=mmap and access_blob_attrs=mmap overshort_step¶ Position increment on overshort (less that min_word_len) keywords. Optional, allowed values are 0 and 1, default is 1. This directive does not affect searchd in any way, it only affects indexer. Example: overshort_step = 1 path¶ Index files path and file name (without extension). Mandatory. Path specifies both directory and file name, but without extension. indexer will append different extensions to this path when generating final names for both permanent and temporary index files. Permanent data files have several different extensions starting with ‘.sp’; temporary files’ extensions start with ‘.tmp’. It’s safe to remove .tmp* files is if indexer fails to remove them automatically. 
For reference, different index files store the following data: .spastores document attributes .spbstores blob attributes: strings, MVA, json .spdstores matching document ID lists for each word ID; .sphstores index header information; .sphistores histograms of attribute values; .spistores word lists (word IDs and pointers to .spdfile); .spkstores kill-lists; .spmstores a bitmap of killed documents; .sppstores hit (aka posting, aka word occurrence) lists for each word ID; .sptstores additional data structures to speed up lookups by document ids; .spestores skip-lists to speed up doc-list filtering Example: path = /var/data/test1 phrase_boundary¶_step¶ Phrase boundary word position increment. Optional, default is 0. On phrase boundary, current word position will be additionally incremented by this number. See phrase_boundary for details. Example: phrase_boundary_step = 100 prefix_fields¶ The list of full-text fields to limit prefix indexing to. Applies to dict=crc only. preopen¶ read_buffer_docs¶ Per-keyword read buffer size for document lists. Optional, default is 256K, minimal is 8K This is same as read_buffer_docs in searchd config section, but manages size on per-index basis, overriding any more general settings. The meaning is the same as one in corresponding searchd option. Example: read_buffer_docs = 128K read_buffer_hits¶ Per-keyword read buffer size for hit lists. Optional, default is 256K, minimal is 8K This is same as read_buffer_hits in searchd config section, but manages size on per-index basis, overriding any more general settings. The meaning is the same as one in corresponding searchd option. Example: read_buffer_hits = 32K regexp_filter¶ Regular expressions (regexps) to filter the fields and queries with. Optional, multi-value, default is an empty list of regexps. In certain applications (like product search) there can be many different ways to call a model, or a product, or a property, and so on. For instance, ‘iphone 3gs’ and ‘iphone 3 gs’ (or even ‘iphone3 gs’) are very likely to mean the same product. Or, for a more tricky example, ‘13-inch’, ‘13 inch’, ‘13“‘, and ‘13in’ in a laptop screen size descriptions do mean the same. Regexps provide you with a mechanism to specify a number of rules specific to your application to handle such cases. In the first ‘iphone 3gs’ example, you could possibly get away with a wordforms files tailored to handle a handful of iPhone models. However even in a comparatively simple second ‘13-inch’ example there is just way too many individual forms and you are better off specifying rules that would normalize both ‘13-inch’ and ‘13in’ to something identical. Regular expressions listed in regexp_filter are applied in the order they are listed. That happens at the earliest stage possible, before any other processing, even before tokenization. That is, regexps are applied to the raw source fields when indexing, and to the raw search query text when searching. We use the RE2 engine to implement regexps. So when building from the source, the library must be installed in the system and Manticore must be configured built with a --with-re2 switch. Binary packages should come with RE2 builtin. Example: # index '13"' as '13inch' regexp_filter = \b(\d+)\" => \1inch # index 'blue' or 'red' as 'color' regexp_filter = (blue|red) => color rlp_context¶ RLP context configuration file. Mandatory if RLP is used. Example: rlp_context = /home/myuser/RLP/rlp-context.xml rt_attr_bigint¶ BIGINT attribute declaration. 
Multi-value (an arbitrary number of attributes is allowed), optional. Declares a signed 64-bit attribute. Example: rt_attr_bigint = guid rt_attr_bool¶ Boolean attribute declaration. Multi-value (there might be multiple attributes declared), optional. Declares a 1-bit unsigned integer attribute. Example: rt_attr_bool = available rt_attr_float¶ Floating point attribute declaration. Multi-value (an arbitrary number of attributes is allowed), optional. Declares a single precision, 32-bit IEEE 754 format float attribute. Example: rt_attr_float = gpa rt_attr_json¶ JSON attribute declaration. Multi-value (ie. there may be more than one such attribute declared), optional. Refer to sql_attr_json for more details on the JSON attributes. Example: rt_attr_json = properties rt_attr_multi_64¶ Multi-valued attribute (MVA) declaration. Declares the BIGINT (signed 64-bit) MVA attribute. Multi-value (ie. there may be more than one such attribute declared), optional. Applies to RT indexes only. Example: rt_attr_multi_64 = my_wide_tags rt_attr_multi¶ Multi-valued attribute (MVA) declaration. Declares the UNSIGNED INTEGER (unsigned 32-bit) MVA attribute. Multi-value (ie. there may be more than one such attribute declared), optional. Applies to RT indexes only. Example: rt_attr_multi = my_tags rt_attr_string¶ String attribute declaration. Multi-value (an arbitrary number of attributes is allowed), optional. Example: rt_attr_string = author rt_attr_timestamp¶ Timestamp attribute declaration. Multi-value (an arbitrary number of attributes is allowed), optional. Example: rt_attr_timestamp = date_added rt_attr_uint¶ Unsigned integer attribute declaration. Multi-value (an arbitrary number of attributes is allowed), optional. Declares an unsigned 32-bit attribute. Example: rt_attr_uint = gid rt_field¶ Full-text field declaration. Multi-value, mandatory rt_mem_limit¶ RAM chunk size limit. Optional, default is 128M. source¶ stopwords¶. You can turn that off with stopwords_unstemmed. Small enough files are stored in the index header, see indexer command reference. Top keywords from that dictionary can usually be used as stopwords. Example: stopwords = /usr/local/sphinx/data/stopwords.txt stopwords = stopwords-ru.txt stopwords-en.txt Alternatively, just as in the case with charset_table and ngram_chars options, you can use one of our default stopwords files. Currently stopwords for 50 languages are available. Here is the full list of aliases for them: - af - Africaans - ar - Arabic - bg - Bulgarian - bn - Bengali - ca - Catalan - ckb- Curd E.g., to use stopwords for Italian language, just put the following line in your config file: stopwords = it If you need to use stopwords for multiple languages you should list all their aliases, separated with commas: stopwords = en, it, ru stopword_step¶ Position increment on stopwords. Optional, allowed values are 0 and 1, default is 1. This directive does not affect searchd in any way, it only affects indexer. Example: stopword_step = 1 stopwords_unstemmed¶ Whether to apply stopwords before or after stemming. Optional, default is 0 (apply stopword filter after stemming)., ‘Andes’ gets stemmed to ‘and’ by our current stemmer implementation, so when ‘and’ is a stopword, ‘Andes’ is also stopped. stopwords_unstemmed directive fixes that issue. When it’s enabled, stopwords are applied before stemming (and therefore to the original word forms), and the tokens are stopped when token == stopword. Example: stopwords_unstemmed = 1 type¶ Index type. 
Known values are plain, distributed, rt, template and percolate. Optional, default is ‘plain’ (plain local index). Manticore supports several different types of indexes. Plain local indexes are stored and processed on the local machine. Distributed indexes involve not only local searching but querying remote searchd instances over the network as well (see Distributed searching). Real-time indexes (or RT indexes for short) are also stored and processed locally, but additionally allow for on-the-fly updates of the full-text index (see Real-time indexes). Note that attributes can be updated on-the-fly using either plain local indexes or RT ones. Template indexes are actually a pseudo-indexes because they do not store any data. That means they do not create any files on your hard drive. But you can use them for keywords and snippets generation, which may be useful in some cases, and also as templates to inherit real indexes from them. Index type setting lets you choose the needed type. By default, plain local index type will be assumed. Example: type = distributed wordforms¶. Small enough files are stored in the index header, see embedded_limit for details. Dictionaries are used to normalize incoming words both during indexing and searching. Therefore, to pick up changes in wordforms file it’s required to rotate index. Word forms support in Mantic.... ~run > walk # Along with stem_en morphology enabled replaces 'run', 'running', 'runs' (and any other words that stem to just 'run') to 'walk' You can specify multiple destination tokens: s02e02 > season 2 episode 2 s3 e3 > season 3 episode 3 Example: wordforms = /usr/local/sphinx/data/wordforms.txt wordforms = /usr/local/sphinx/data/alternateforms.txt wordforms = /usr/local/sphinx/private/dict*.txt.
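Putting several of the options above together, a minimal real-time index definition might look like the sketch below. The index name, path, and the specific field and attribute names are placeholders chosen for illustration rather than values from this reference, so adapt them to your own schema.
index rt_example
{
    type              = rt
    path              = /var/data/rt_example
    rt_mem_limit      = 256M
    rt_field          = title
    rt_field          = content
    rt_attr_uint      = gid
    rt_attr_bigint    = guid
    rt_attr_string    = author
    rt_attr_timestamp = date_added
    stopwords         = en
}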
https://docs.manticoresearch.com/3.0.2/html/conf_options_reference/index_configuration_options.html
2021-04-10T14:38:12
CC-MAIN-2021-17
1618038057142.4
[]
docs.manticoresearch.com
Microsoft Store for Business and Microsoft Store for Education overview Applies to - Windows 10 - Windows 10 Mobile Important Starting on April 14th, 2021, only free apps will be available in Microsoft Store for Business and Education. For more information, see Microsoft Store for Business and Education. Designed for organizations, Microsoft Store for Business and Microsoft Store for Education give IT decision makers and administrators in businesses or schools a flexible way to find, acquire, manage, and distribute free and paid apps in select markets to Windows 10 devices in volume. IT administrators can manage Microsoft Store apps and private line-of-business apps in one inventory, plus assign and re-use licenses as needed. You can choose the best distribution method for your organization: directly assign apps to individuals and teams, publish apps to private pages in Microsoft Store, or connect with management solutions for more options. Important Customers who are in the Office 365 GCC environment or are eligible to buy with government pricing cannot use Microsoft Store for Business. Features Organizations or schools of any size can benefit from using Microsoft Store for Business or Microsoft Store for Education: - Scales to fit the size of your business - For smaller businesses, with Azure AD accounts or Office 365 accounts and Windows 10 devices, you can quickly have an end-to-end process for acquiring and distributing content using the Store for Business. For larger businesses, all the capabilities of the Store for Business are available to you, or you can integrate Microsoft Store for Business with management tools, for greater control over access to apps and app updates. You can use existing work or school accounts. - Bulk app acquisition - Acquire apps in volume from Microsoft Store for Business. - Centralized management – Microsoft Store provides centralized management for inventory, billing, permissions, and order history. You can use Microsoft Store to view, manage and distribute items purchased from: - Microsoft Store for Business – Apps acquired from Microsoft Store for Business - Microsoft Store for Education – Apps acquired from Microsoft Store for Education - Office 365 – Subscriptions - Volume licensing - Apps purchased with volume licensing - Private store - Create a private store for your business that’s easily available from any Windows 10 device. Your private store is available from Microsoft Store on Windows 10, or with a browser on the Web. People in your organization can download apps from your organization's private store on Windows 10 devices. - Flexible distribution options - Flexible options for distributing content and apps to your employee devices: - Distribute through Microsoft Store - Microsoft Store. - Office app launcher Office apps while working with Microsoft Store for Business. - Find a partner – Search and find a Microsoft Partner who can assist you with Microsoft solutions for your business. Prerequisites You'll need this software to work with Store for Business and Education. Required - Admins working with Store for Business and Education need a browser compatible with Microsoft Store running on a PC or mobile device. Supported browsers include: Internet Explorer 10 or later, or current versions of Microsoft Edge, Chrome or Firefox. JavaScript must be supported and enabled. - Employees using apps from Store for Business and Education need at least Windows 10, version 1511 running on a PC or mobile device. 
Microsoft Azure Active Directory (AD) accounts for your employees: - Admins need Azure AD accounts to sign up for Store for Business and Education, and then to sign in, get apps, distribute apps, and manage app licenses. You can sign up for Azure AD accounts as part of signing up for Store for Business and Education. - Employees need Azure AD account when they access Store for Business content from Windows devices. - If you use a management tool to distribute and manage online-licensed apps, all employees will need an Azure AD account - For offline-licensed apps, Azure AD accounts are not required for employees. - Admins can add or remove user accounts in the Microsoft 365 admin center, even if you don’t have an Office 365 subscription. You can access the Office 365 admin portal directly from the Store for Business and Education.. A couple of things to note about management tools: - Need to integrate with Windows 10 management framework and Azure AD. - Need to sync with the Store for Business inventory to distribute apps. How does the Store for Business and Education work? The first step for getting your organization started with Store for Business and Education is signing up. Sign up using an existing account (the same one you use for Office 365, Dynamics 365, Intune, Azure, etc.) or we’ll quickly create an account for you. You must be a Global Administrator for your organization. Set up After your admin signs up for the Store for Business and Education, they can assign roles to other employees in your company or school. The admin needs Azure AD User Admin permissions to assign Microsoft Store for Business and Education roles. These are the roles and their permissions. Note Currently, the Basic purchaser role is only available for schools using Microsoft Store for Education. For more information, see Microsoft Store for Education permissions. In some cases, admins will need to add Azure Active Directory (AD) accounts for their employees. For more information, see Manage user accounts and groups. Also, if your organization plans to use a management tool, you’ll need to configure your management tool to sync with Store for Business and Education. Get apps and content Once signed in to the Microsoft Store, you can browse and search for all products in the Store for Business and Education catalog. Some apps are free,and some apps charge a price. We're continuing to add more paid apps to the Store for Business and Education. Check back if you don't see the app that you're looking for. Currently, you can pay for apps with a credit card, and some items can be paid for with an invoice. We'll be adding more payment options over time. App types - These app types are supported in the Store for Business and Education: - Universal Windows Platform apps - Universal Windows apps, by device: Phone, Surface Hub, IOT devices, HoloLens Apps purchased from the Store for Business and Education only work on Windows 10 devices. Line-of-business (LOB) apps are also supported through Microsoft Store. You can invite IT developers or ISVs to be LOB publishers for your organization. This allows them to submit apps via the developer center that are only available to your organization through Store for Business and Education. These apps can be distributed using the distribution methods discussed in this topic. For more information, see Working with Line-of-Business apps. App licensing model Store for Business and Education supports two license options for apps: online and offline. 
Online licensing is the default licensing model and is similar to the licensing model for Microsoft Store. Online licensed apps require users and devices to connect to Microsoft Store services to acquire an app and its license. Offline licensing is a new licensing option for Windows 10. With offline licenses, organizations can cache apps and their licenses to deploy within their network. ISVs or devs can opt in their apps for offline licensing when they submit them to the developer center. For more information, see Apps in Microsoft Store for Business. Distribute apps and content App distribution is handled through two channels, either through the Microsoft Store for Business, or using a management tool. You can use either, or both distribution methods in your organization. Distribute with Store for Business and Education: - Email link – After purchasing an app, Admins can send employees a link in an email message. Employees can click the link to install the app. - Curate private store for all employees – A private store can include content you’ve purchased from Microsoft Store for Business, and your line-of-business apps that you’ve submitted to Microsoft Store for Business. Apps in your private store are available to all of your employees. They can browse the private store and install apps when needed. - To use the options above users must be signed in with an Azure AD account on a Windows 10 device. Licenses are assigned as individuals install apps. Using a management tool – For larger organizations that want a greater level of control over how apps are distributed and managed, a management tools provides other distribution options: - Scoped content distribution – Ability to scope content distribution to specific groups of employees. - Install apps for employees – Employees are not responsible for installing apps. Management tool installs apps for employees. Management tools can synchronize content that has been acquired in the Store for Business. If an offline application has been purchased this will also include the app package, license and metadata for the app (like, icons, count, or localized product descriptions). Using the metadata, management tools can enable portals or apps as a destination for employees to acquire apps. For more information, see Distribute apps to your employees from Microsoft Store for Business. Manage Microsoft Store for Business settings and content Once you are signed up with the Business store and have purchased apps, Admins can manage Store for Business settings and inventory. Manage Microsoft Store for Business settings - Assign and change roles for employees or groups - Device Guard signing - Register a management server to deploy and install content - Manage relationships with LOB publishers - Manage offline licenses - Update the name of your private store Manage inventory - Assign app licenses to employees - Reclaim and reassign app licenses - Manage app updates for all apps, or customize updates for each app. Online apps will automatically update from the Store. Offline apps can be updated using a management server. - Download apps for offline installs For more information, see Manage settings in the Store for Business and Manage apps. Supported markets Store for Business and Education is currently available in these markets. 
Support for free and paid products Support for free apps Customers in these markets can use Microsoft Store for Business and Education to acquire free apps: - Russia Support for free apps and Minecraft: Education Edition Customers in these markets can use Microsoft Store for Business and Education to acquire free apps and Minecraft: Education Edition: - Albania - Aremenia - Azerbaijan - Belarus - Bosnia - Brazil - Georgia - India - Isle of Man - Kazakhstan - Korea - Monaco - Republic of Moldova - Taiwan - Tajikistan - Ukraine Support to only manage products Customers in these markets can use Microsoft Store for Business and Education only to manage products that they've purchased from other channels. For example, they might have purchased products through Volume Licensing Service Center. However, they can't purchase apps directly from Microsoft Store for Business and Education. - Puerto Rico This table summarize what customers can purchase, depending on which Microsoft Store they are using. Note Microsoft Store for Education customers with support for free apps and Minecraft: Education Edition - Admins can acquire free apps from Microsoft Store for Education. - Admins need to use an invoice to purchase Minecraft: Education Edition. For more information, see Invoice payment option. - Teachers, or people with the Basic Purchaser role, can acquire free apps, but not Minecraft: Education Edition. Privacy notice Store for Business and Education services get names and email addresses of people in your organization from Azure Active Directory. This information is needed for these admin functions: - Granting and managing permissions - Managing app licenses - Distributing apps to people (names appear in a list that admins can select from) Store for Business and Education does not save names, or email addresses. Your use of Store for Business and Education is also governed by the Microsoft Store for Business and Education Services Agreement. Information sent to Store for Business and Education is subject to the Microsoft Privacy Statement. ISVs and Store for Business and Education Developers in your organization, or ISVs can create content specific to your organization. In Store for Business and Education, we call these line-of-business (LOB) apps, and the devs that create them are LOB publishers. The process looks like this: - Admin invites devs to be LOB publishers for your organization. These devs can be internal devs, or external ISVs. - LOB publishers accept the invitation, develop apps, and submits the app to the Windows Dev Center. LOB publishers use Enterprise associations when submitting the app to make the app exclusive to your organization. - Admin adds the app to Microsoft Store for Business or Microsoft Store for Education inventory. Once the app is in inventory, admins can choose how to distribute the app. ISVs creating apps through the dev center can make their apps available in Store for Business and Education. ISVs can opt-in their apps to make them available for offline licensing. Apps purchased in Store for Business and Education will work only on Windows 10. For more information on line-of-business apps, see Working with Line-of-Business apps.
https://docs.microsoft.com/en-us/microsoft-store/microsoft-store-for-business-overview
2021-04-10T14:35:12
CC-MAIN-2021-17
1618038057142.4
[]
docs.microsoft.com
Upgrade to version 8. This documentation applies to Splunk versions: 8.1.0, 8.1.1, 8.1.2, 8.1.3
https://docs.splunk.com/Documentation/Splunk/8.1.2/Installation/UpgradeonWindows
2021-04-10T15:28:35
CC-MAIN-2021-17
1618038057142.4
[array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'], dtype=object) ]
docs.splunk.com
Getting Started with NoSQL Workbench To get started with NoSQL Workbench, on the Database Catalog page in NoSQL Workbench, choose Amazon Keyspaces, and then choose Launch. This opens the NoSQL Workbench home page for Amazon Keyspaces where you have the following options to get started: Create a new data model. Import an existing data model in JSON format. Open a recently edited data model. Open one of the available sample models. Each of the options opens the NoSQL Workbench data modeler. To continue creating a new data model, see Building New Data Models with NoSQL Workbench. To edit an existing data model, see Editing Existing Data Models with NoSQL Workbench.
https://docs.aws.amazon.com/keyspaces/latest/devguide/workbench.start.html
2021-04-10T15:39:30
CC-MAIN-2021-17
1618038057142.4
[array(['images/workbench/key_nosql_welcome.png', 'Console screenshot that shows the NoSQL Workbench start page.'], dtype=object) array(['images/workbench/key_nosql_datamodel.png', 'Console screenshot that shows the data modeler start page.'], dtype=object) ]
docs.aws.amazon.com
FAQ with Tabs The FAQ items come from the Faqs custom post type. Navigate to Dashboard > Faqs > Add New to add a new FAQ item; all of your FAQs are added there. See the screenshot below for the FAQ content parts (Zix FAQ Creation). Then place the FAQ with Tabs Elementor widget on your page with the Elementor page builder.
https://docs.droitthemes.com/docs/zix-wordpress-theme/elementor-widgets-sections/faq-with-tabs/
2021-04-10T14:05:43
CC-MAIN-2021-17
1618038057142.4
[array(['https://docs.droitthemes.com/wp-content/themes/ddoc/assets/images/Still_Stuck.png', 'Still_Stuck'], dtype=object) ]
docs.droitthemes.com
opencv-proto documentation¶ Description¶ Allows fast prototyping in Python for OpenCV. Offers primitives and simplified interfaces to streamline prototype construction in Python. Facilitates: - Window construction and management - Trackbar construction - Configuration save/load (including trackbar values) - Key binding (e.g. for trackbar control, configuration save/load) - Video capturing and modification - Work with images - Work with text - Frames transformation Requirements¶ - Python 3.6+ - opencv-python (or variants) Quick install with third-parties: $ pip install opencv-proto[all]
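For context, the kind of boilerplate this package aims to streamline looks roughly like the plain opencv-python sketch below. Note that this uses the cv2 API directly; it is not opencv-proto's own interface, just an illustration of the window, trackbar, and text plumbing the library is meant to simplify.
import cv2
import numpy as np

# Plain OpenCV: create a window, attach a trackbar, and display frames.
window = "demo"
cv2.namedWindow(window)
cv2.createTrackbar("threshold", window, 128, 255, lambda v: None)

frame = np.zeros((240, 320, 3), dtype=np.uint8)
while cv2.waitKey(30) != 27:  # press Esc to quit
    t = cv2.getTrackbarPos("threshold", window)
    shown = frame.copy()
    cv2.putText(shown, "threshold=%d" % t, (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 255, 255), 2)
    cv2.imshow(window, shown)
cv2.destroyAllWindows()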
https://opencv-proto.readthedocs.io/en/latest/
2021-04-10T14:38:48
CC-MAIN-2021-17
1618038057142.4
[]
opencv-proto.readthedocs.io
You can save and store Ranger audits to Solr if you have installed and configured the Solr service in your cluster. To save Ranger audits to Solr: From the Ambari dashboard, select the Ranger service. On the Configs tab, scroll down and select Advanced ranger-admin-site. Set the following property values: ranger.audit.source.type = solr ranger.audit.solr.urls = http://{SOLR_HOST}:6083/solr/ranger_audits ranger.audit.solr.username = ranger_solr ranger.audit.solr.password = NONE Restart the Ranger service.
https://docs.cloudera.com/HDPDocuments/HDP2/HDP-2.3.4/bk_Ranger_Install_Guide/content/audit_to_solr.html
2021-09-17T04:41:02
CC-MAIN-2021-39
1631780054023.35
[]
docs.cloudera.com
Important: You are viewing documentation for an older version of Confluent Platform. Production Deployment¶ This section describes the key considerations before going to production with your cluster. However, it is not an exhaustive guide to running your Schema Registry in production. Network¶ A fast and reliable network is obviously important to performance in a distributed system. Low latency helps ensure that nodes can communicate easily, while high bandwidth helps shard movement and recovery. Modern data-center networking (1 GbE, 10 GbE) is sufficient for the vast majority of clusters. Avoid clusters that span multiple data centers, even if the data centers are colocated in close proximity. Definitely avoid clusters that span large geographic distances. A typical JVM heap setting looks like this: -Xms1g -Xmx1g Important Configuration Options¶ The following options are particularly important. listeners¶ Schema Registry identities are stored in ZooKeeper and are made up of a hostname and port. If multiple listeners are configured, the first listener’s port is used for its identity. - Type: list - Default: “” - Importance: high host.name¶ The host name advertised in ZooKeeper. Make sure to set this if running Schema Registry with multiple nodes. - Type: string - Default: “192.168.50.1” The full set of configuration options is documented in Schema Registry Configuration Options. Don’t Modify These¶ Finally, kafkastore.topic must be a compacted topic to avoid data loss. Whenever in doubt, leave these settings alone. If you must create the topic manually, this is an example of proper configuration: # kafkastore.topic=_schemas $ bin/kafka-topics --create --zookeeper localhost:2181 --topic _schemas --replication-factor 3 --partitions 1 --config cleanup.policy=compact You can change the master election configuration by following the steps outlined below. These steps would lead to Schema Registry not being available for writes for a brief amount of time. - Make the above outlined config changes on that node and also ensure master.eligibility is set to false on all the nodes - Do a rolling bounce of all the nodes. - Configure master.eligibility to true on the nodes that can be master eligible and bounce them Backup and Restore¶ As discussed in Design Overview, all schemas, subject/version and ID metadata, and compatibility settings are appended as messages to a special Kafka topic <kafkastore.topic> (default _schemas). This topic is a common source of truth for schema IDs, and you should back it up. In case of some unexpected event that makes the topic inaccessible, you can restore this schemas topic from the backup, enabling consumers to continue to read Kafka messages that were sent in the Avro format. As a best practice, we recommend backing up the <kafkastore.topic>. To back up the topic, use the kafka-console-consumer to capture messages from the schemas topic to a file called “schemas.log”. Save this file off the Kafka cluster. bin/kafka-console-consumer --bootstrap-server localhost:9092 --topic _schemas --from-beginning --property print.key=true --timeout-ms 1000 1> schemas.log To restore the topic, use the kafka-console-producer to write the contents of file “schemas.log” to a new schemas topic. This example uses a new schemas topic name “_schemas_restore”. If you use a new topic name or use the old one (i.e. “_schemas”), make sure to set <kafkastore.topic> accordingly. bin/kafka-console-producer --broker-list localhost:9092 --topic _schemas_restore --property parse.key=true < schemas.log
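As a quick smoke test after deployment, you can register and list a trivial schema over the REST API with curl; this sketch assumes the default REST listener on http://localhost:8081, so adjust the host and port to match your listeners setting.
# Register a trivial schema under the subject "test-value"
$ curl -X POST -H "Content-Type: application/vnd.schemaregistry.v1+json" --data '{"schema": "{\"type\": \"string\"}"}' http://localhost:8081/subjects/test-value/versions

# List the subjects known to this Schema Registry
$ curl http://localhost:8081/subjects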
https://docs.confluent.io/4.0.0/schema-registry/docs/deployment.html
2021-09-17T04:39:20
CC-MAIN-2021-39
1631780054023.35
[]
docs.confluent.io
#include <mitkPlanarFigureIO.h> Reads/writes a PlanarFigure to a file. Definition at line 31 of file mitkPlanarFigureIO.h. Definition at line 57 of file mitkPlanarFigureIO.h. Definition at line 34 of file mitkPlanarFigureIO.h. Creates a TinyXML element that contains x, y, and z values. Reads a number of mitk::PlanarFigures from the file system. Implements mitk::AbstractFileReader. Parses the element for the attributes name0 to nameN, where "name" and the number of attributes to read are passed as arguments. Returns a list of double values. Parses the element for the attributes x, y, z and returns a mitk::Point3D filled with these values. Parses the element for the attributes x, y, z and returns a mitk::Vector3D filled with these values.
https://docs.mitk.org/nightly/classmitk_1_1PlanarFigureIO.html
2021-09-17T04:12:25
CC-MAIN-2021-39
1631780054023.35
[]
docs.mitk.org
RSKcorrecthold.m Input -Required- RSK -Optional- channel: 'all' (all channels default). profile: [ ] (all profiles default). direction: up, down or both (default). action: nan (default) or interp. visualize: show plot with original, processed and flagged data on specified profile(s) Output RSK - holdpts : hold point indices The analog-to-digital (A2D) converter on RBR instruments must recalibrate periodically. In the time it takes for the calibration to finish, one or more samples are missed. The onboard firmware fills the missed sample with the same data measured during the previous sample, a simple technique called a zero-order hold. The function identifies zero-hold points by looking for where consecutive differences for each channel are equal to zero, and replaces them with an interpolated value or a NaN. An example of where zero-order holds are important is when computing the vertical profiling rate from pressure. Zero-order hold points produce spikes in the profiling rate at regular intervals, which can cause the points to be flagged by RSKremoveloops. Example: [rsk, holdpts] = RSKcorrecthold(rsk)
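A hypothetical call using the optional name-value inputs listed above (the specific values are for illustration only) might look like the line below; it would replace the hold points with interpolated values on downcasts and plot the original, processed, and flagged data for profile 1.
[rsk, holdpts] = RSKcorrecthold(rsk, 'action', 'interp', 'direction', 'down', 'visualize', 1);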
https://docs.rbr-global.com/rsktools/process/post-processors/rskcorrecthold-m
2021-09-17T03:20:19
CC-MAIN-2021-39
1631780054023.35
[]
docs.rbr-global.com
Old Confluence editor Display content in a step-by-step guide. UI Step macros need to be contained by a UI Steps macro. Usage The solution consists of an outer macro (UI Steps) and inner macros (UI Step). To insert the UI Step macro into a page using the macro browser: - In the Confluence editor, choose Insert > Other Macros, then choose UI Steps. - Find and select the required macro. - Inside the UI Steps macro, choose Insert > Other Macros, then choose UI Step.
https://docs.refined.com/display/RTCC/UI+Step+for+Cloud
2021-09-17T03:26:32
CC-MAIN-2021-39
1631780054023.35
[]
docs.refined.com
Controls overview Rokt provides a variety of controls that let partners decide how customers experience and interact with Rokt. You have full control over what types of offers and advertisers are eligible to show on your site or app. Making changes to your controls settings is easy and can be done at any time.
https://docs.rokt.com/docs/user-guides/rokt-ecommerce/controls/overview
2021-09-17T04:45:23
CC-MAIN-2021-39
1631780054023.35
[]
docs.rokt.com
What's in the Release Notes These release notes cover the following topics: What's New The VMware vRealize Operations Management Pack for vSphere Replication 8.2.1.2 provides support for vRealize Operations Manager 8.3. Installation and Configuration The VMware vRealize Operations Management Pack for vSphere Replication 8.2.1.2 software is distributed as PAK file. You install and configure the management pack by using the vRealize Operations Manager interface. - Log in to vRealize Operations Manager. - Select Administration > Solutions > Repository in the left pane. - Click Add a Management Pack. - Navigate to the vrAdapterPak-8.2.1.x.pakfile and click Upload. When the PAK file is uploaded, click Next. - Read and agree to the end-user license agreement. Click Next to install the management pack. - Review the installation progress, and click Finish when the installation completes. After the installation completes you must configure the VMware vRealize Operations Management Pack for vSphere Replication 8.2 so that vRealize Operation Manager can collect data from the target system. The minimum role required to collect data is the VRM replication viewer. For more information on roles and permissions, see vSphere Replication Roles and Permissions. - Select Administration > Solutions > Configuration in the left pane. - On the Configuration tab, select VR Adapter, and click Configure on the toolbar below. - Enter the required information in the Manage Solutions wizard and click Save Settings. Resolved Issues - NEW An error occurs, when you try to pair the vSphere Replication Adapter with vRealize Operations If you are running vRealize Operations 8.1 and you try to pair the vSphere Replication adapter with vRealize Operations, the process fails with the following error: Server certificate chain is not trusted and thumbprint verification is not configured The issue is resolved in the vRealize Operations Management Pack for vSphere Replication 8.2.1.2.
https://docs.vmware.com/en/vSphere-Replication/8.2/rn/vr-vropsmp-releasenotes-8-2-1-2.html
2021-09-17T05:41:52
CC-MAIN-2021-39
1631780054023.35
[]
docs.vmware.com
xDEX is an optimized ERC20 AMM DEX. Optimized multi-asset AMM DEX: trustless and permissionless. Accepts any standard, non-deflating ERC20. No pre-sale or pre-mint; fair launch and antifragile. The xDEX token is distributed 100% based on community consensus and participation. Perpetual protocol with shadow AMM. xNsure provides AMM European call and put options by introducing AMM tools to non-professional option sellers as well as AMM liquidity providers. xHalfLife is an exponentially decaying money stream protocol. Any XDEX from the xFarm voting pool, ordinary pools, and the founder team's fund is rewarded through the xHalfLife protocol; the withdrawable reward is updated every block. An automated market maker (AMM) is a system that provides liquidity to the exchange it operates in through automated trading. On AMM-based decentralized exchanges, the traditional order book is replaced by liquidity pools that are pre-funded on-chain for both assets of the trading pair. The liquidity is provided by other users who also earn passive income on their deposit through trading fees, based on the percentage of the liquidity pool that they provide. Yield farming is a way to make more crypto with your crypto. It involves lending your funds to others or providing liquidity to the market through the magic of computer programs called smart contracts. In return for your service, you earn fees in the form of crypto. In XDeFi, if you become a liquidity provider, you can earn mining revenue. Total Value Locked (TVL) measures how much crypto is locked in DeFi mining, lending, and other types of money marketplaces. In some sense, TVL is the aggregate liquidity in liquidity pools.
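As a generic illustration of how an AMM pool prices a trade, the sketch below implements the common constant-product rule (x * y = k) in Python, ignoring fees; it is not necessarily the exact formula xDEX uses.
# Constant-product AMM sketch: reserve_in * reserve_out stays equal to k.
def swap_out(reserve_in, reserve_out, amount_in):
    k = reserve_in * reserve_out
    new_reserve_in = reserve_in + amount_in
    new_reserve_out = k / new_reserve_in
    return reserve_out - new_reserve_out  # tokens the trader receives

# Selling 10 of token A into a 1000 A / 1000 B pool yields about 9.9 B;
# larger trades move the price further against the trader (slippage).
print(swap_out(1000, 1000, 10))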
https://docs.xdefi.com/en/faq
2021-09-17T05:15:56
CC-MAIN-2021-39
1631780054023.35
[]
docs.xdefi.com
Example Walkthrough: Simple Polytrope¶ As the first example of build_poly in action, let’s build a simple (i.e., single-region) \(n=3\) polytrope, that for instance describes the structure of a radiation-pressure dominated, fully convective star. Assembling a Namelist File¶ First, let’s assemble a namelist file containing the various parameters which control a build_poly run. Using a text editor, create the file build_poly.simple.in with the following content cut-and-pasted in: &poly n_poly = 3.0 ! Polytropic index of single region / &num dz = 1E-2 ! Radial spacing of points toler = 1E-10 ! Tolerance of integrator / &out file = 'poly.simple.h5' ! Name of output file / Detailed information on the namelist groups expected in build_poly input files can be found in the Input Files section. Here, let’s briefly narrate the parameters appearing in the file above: In the &polynamelist group, the n_polyparameter sets the polytropic index. To run build_poly, use the command $GYRE_DIR/bin/build_poly build_poly.simple.in There is no screen output produced during the run, but at the end the poly.simple.h5 will be written to disk. This file, which is in POLY format, can be used as the input stellar model in a GYRE calculation; but it can also be explored in Python (see Fig. 12) using the read_model function from PyGYRE.
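As a minimal sketch of the PyGYRE step mentioned above (assuming PyGYRE is installed; per its documentation, read_model returns a table of the model data that you can inspect or plot):
import pygyre

# Load the POLY-format model written by build_poly and inspect it.
model = pygyre.read_model('poly.simple.h5')
print(model)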
https://gyre.readthedocs.io/en/latest/appendices/build-poly/example-simple.html
2021-09-17T05:04:59
CC-MAIN-2021-39
1631780054023.35
[]
gyre.readthedocs.io
Updating to Java 16 As of Minecraft 1.17, Paper will only support Java 16 and above. This is in line with our new policy of only supporting the last two long-term-support (LTS) versions of the Java runtime. Although Minecraft 1.17 requires a Java version that is too new for any LTS, we will maintain our policy as far as possible. We will only support Java 16 and above because of Mojang Studios deciding to bump their own Java requirement with 1.17. If you’re a developer or want to be on the safe side, pick the JDK options in the guides instead of a generic JRE. The JDK contains the JRE, in addition to development material (sources, documentation, a reference compiler, and more). To update, please find the appropriate header for you in the table of contents. Pterodactyl Note To switch the Java version on Pterodactyl, you will require an administrator account. Note The names of options will be different depending on the language you use. Assuming you are already logged in on your administrator account, open the administrator control panel, go to the Servers tab, click on your server (this has to be repeated for every server you wish to switch the Java version of), and press the Startup tab. Proceed by selecting ghcr.io/pterodactyl/yolks:java_16 from the Image dropdown under Docker Container Configuration. If you are running an older panel version, manually enter the image url in the custom image field. For Java 11, select it from the dropdown instead or replace 16 with 11. Debian/Ubuntu To install Java 16 on Debian, Ubuntu, and the plethora of other distributions based on these, execute the following commands to add the AdoptOpenJDK APT repository and to install AdoptOpenJDK Hotspot: $ sudo apt update $ sudo apt install apt-transport-https software-properties-common gnupg wget $ wget -qO - | sudo apt-key add - $ sudo add-apt-repository $ sudo apt update $ sudo apt install adoptopenjdk-16-hotspot You can also replace 16 with 11 for Java 11. RPM-Based To install Java 16 on CentOS, RHEL, Fedora, openSUSE, SLES and many other RPM-based distributions, execute the following commands to add Amazon Corretto’s RPM repository and install Java 16. $ sudo rpm --import $ sudo curl -Lo /etc/yum.repos.d/corretto.repo $ sudo dnf -y install java-16-amazon-corretto-devel $ sudo zypper addrepo $ sudo zypper install java-16-amazon-corretto-devel $ sudo rpm --import $ sudo curl -Lo /etc/yum.repos.d/corretto.repo $ sudo yum -y install java-16-amazon-corretto-devel Arch Linux To install Java 16 on Arch Linux, you will need to install the jre-openjdk package. $ sudo pacman -Syu jre-openjdk To switch between available Java versions on the system with the archlinux-java tool, see the wiki on Switching between JVMs. Linux (Generic) Note You should check with your distribution’s package manager(s) before using this section of the guide. It is very likely you can find a suitable Java version if you search its repositories for java, openjdk, and jre. SDKMAN! Install SDKs with ease! Wa-pow! Luckily SDKMAN! is written in bash, so you can use this on practically any Linux (and BSD!) environment. Follow the installation instructions on their website. You can then proceed to install one of their many Java distributions with the simple commands on their website. Adoptium Note This assumes an intermediate to advanced Linux user. Ask for help if you need it; we don’t want you to harm your system. #paper-help on Discord is a fitting channel for asking, and remember: don’t ask to ask, just ask. 
Note You are going to require the tar and sha256sum tools to do this install. First, select an appropriate tar.gz file from Adoptium’s website, and copy the download URL. Next, figure out which directory you want to install Java to; this is commonly a subdirectory within /usr/lib/jvm. The tar file you copied the URL to has an inner directory, so you don’t need to create one yourself. Download the file with one of the following commands: With curl: curl -LJO "replace this text with the URL" With wget: wget "replace this text with the URL" And get the signature from pressing the Checksum (SHA256) button next to the .tar.gz download button. This should be the same as displayed in the second column, output from running sha256sum "the downloaded file path goes here". If they are not the same, delete the files and re-download them. Next up, extract the file with: tar xzf "the downloaded file path goes here". There should now be a directory named something like jdk-16.0.1+1/. You can safely delete the tar.gz file if this is the case. Now you should add an environment variable called JAVA_HOME pointing to the directory you created (e.g. /usr/lib/jvm/jdk-16.0.1+1; note there is no trailing slash here): # cat <<EOF | tee /etc/profile.d/java.sh export JAVA_HOME=/usr/lib/jvm/jdk-16.0.1+1 export PATH=$JAVA_HOME/bin:"$PATH" EOF # chmod +x /etc/profile.d/java.sh Note The # at the start means this has to be run as either root, or an account that has access to the /etc/profile.d/ directory. To avoid this, you can replace tee with sudo tee (or doas tee on BSD), and replace chmod with sudo chmod (or doas chmod on BSD). You must now source the new file you created, which is usually done at the start of a shell, so you can just re-open the shell. Alternatively, run source /etc/profile.d/java.sh. Windows 10 If you’re on Windows 10, you will want Adoptium’s JRE. You can find the msi file you should install on their website. Remember to reboot your computer after installing. Checking version If you now open a new PowerShell prompt and do java -version, it should say something along the lines of: openjdk version "16.0.1" 2021-04-20 OpenJDK Runtime Environment Temurin-16.0.2+7 (build 16.0.2+7) OpenJDK 64-Bit Server VM Temurin-16.0.2+7 (build 16.0.2+7, mixed mode, sharing) It is the version "16.0.1" part that is important – if the first number is not 16, you need to modify your PATH. Modifying your PATH Press your Windows button and search (just start typing) environment variable. The Edit the system environment variables result is the one you want. Press the Environment Variables... button: Select the JAVA_HOME variable in the System variables section in the bottom half of the window and press Edit..., OR if the variable is not present, create a new variable with New... in the lower half of the window, and name it JAVA_HOME. You now want to Directory... and find the Java directory under C:\Program Files\Eclipse Foundation in the Windows Explorer window: Now go to your Path variable in the System variables section in the bottom half of the window and press Edit.... If there is already a %JAVA_HOME%\bin entry in the list, skip this step. Otherwise, press the New button at the top and enter %JAVA_HOME%\bin: If you now open a new PowerShell window, you should have the correct output. If not, restart your computer and try again. If it is still wrong, ask for help in #paper-help on Discord to get further assistance. macOS / OS X If you’re on macOS, you can use a tool called Homebrew to install Java. 
Follow the instructions on their website for how to install it. To now install Java, open your Terminal app and run the following command: $ brew install --cask temurin If you used AdoptOpenJDK previously, uninstall and untap it. $ brew uninstall adoptopenjdk16-jre $ brew untap AdoptOpenJDK/openjdk
https://paper.readthedocs.io/en/latest/java-update/index.html
2021-09-17T04:45:20
CC-MAIN-2021-39
1631780054023.35
[array(['../_images/pterodactyl-startup-tab.png', '../_images/pterodactyl-startup-tab.png'], dtype=object) array(['../_images/windows-env-var-search.png', '../_images/windows-env-var-search.png'], dtype=object) array(['../_images/windows-env-var-button.png', '../_images/windows-env-var-button.png'], dtype=object) array(['../_images/windows-browse-directory.png', '../_images/windows-browse-directory.png'], dtype=object) array(['../_images/windows-add-to-path.png', '../_images/windows-add-to-path.png'], dtype=object)]
paper.readthedocs.io
Using Regions and Availability Zones with Amazon EC2 These Go examples show you how to retrieve details about AWS Regions and Availability Zones. The code in this example uses the AWS SDK for Go to perform these tasks by using these methods of the Amazon EC2 client class: DescribeRegions and DescribeAvailabilityZones. You can download complete versions of these example files from the aws-doc-sdk-examples repository on GitHub. Scenario Amazon EC2 is hosted in multiple locations worldwide. These locations are composed of AWS Regions and Availability Zones. Each region is a separate geographic area with multiple, isolated locations known as Availability Zones. Amazon EC2 provides the ability to place instances and data in these multiple locations. In this example, you use Go code to retrieve details about regions and Availability Zones. The code uses the AWS SDK for Go to retrieve this information by using the following methods of the Amazon EC2 client class: DescribeRegions and DescribeAvailabilityZones. Prerequisites You have set up and configured the AWS SDK for Go. You are familiar with AWS Regions and Availability Zones. To learn more, see Regions and Availability Zones in the Amazon EC2 User Guide for Linux Instances or Regions and Availability Zones in the Amazon EC2 User Guide for Windows Instances. List the Regions and Availability Zones This example retrieves the AWS Regions that are available to your account and the Availability Zones for the client's current region. To get started, create a new Go file named regions_and_availability.go. You must import the relevant Go and AWS SDK for Go packages by adding the following lines. package main import ( "fmt" "github.com/aws/aws-sdk-go/aws/session" "github.com/aws/aws-sdk-go/service/ec2" ) In the main function, create a session with credentials from the shared credentials file, ~/.aws/credentials, and create a new EC2 client. func main() { // Load session from shared config sess := session.Must(session.NewSessionWithOptions(session.Options{ SharedConfigState: session.SharedConfigEnable, })) // Create new EC2 client svc := ec2.New(sess) Print out the list of regions that work with Amazon EC2 that are returned by calling DescribeRegions. resultRegions, err := svc.DescribeRegions(nil) if err != nil { fmt.Println("Error", err) return } Add a call that retrieves Availability Zones only for the region of the EC2 service object. resultAvalZones, err := svc.DescribeAvailabilityZones(nil) if err != nil { fmt.Println("Error", err) return } fmt.Println("Success", resultAvalZones.AvailabilityZones) } See the complete example on GitHub.
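If you also want to print the regions returned by the earlier DescribeRegions call, a minimal addition in the same style (placed before the Availability Zones call) would be:
// Print the regions that are available to the account
fmt.Println("Success", resultRegions.Regions)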
https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/ec2-example-regions-availability-zones.html
2018-11-12T20:22:29
CC-MAIN-2018-47
1542039741087.23
[]
docs.aws.amazon.com
Commerce v1 Product Catalog Minimal With a (very) minimal catalog, the products stored in Commerce are the catalog. This gives you very few features in terms of providing additional information, but that's why it's called the minimal catalog. Pros - Product management is in a single place - Best for shops with only one or very few products. Cons - (Alpha) No snippets available yet to load product information into your frontend. - Only the basic fields (SKU, name, description, price, weight) are available per product, unless you add something else and connect the products manually. Setup For the minimal product catalog, you'll create your product records in Commerce > Products as standard products. You can use a TV to select the product or enter the product ID directly. You can also use the get_products and get_product snippets with the minimal catalog to show product information.
https://docs.modmore.com/en/Commerce/v1/Product_Catalog/Minimal.html
2018-11-12T19:51:38
CC-MAIN-2018-47
1542039741087.23
[]
docs.modmore.com
You can access the Advanced Search by clicking Advanced Search on the catalogue homepage or search results screen. The available search options are the same as in Basic Search, but you may use one or more of them simultaneously. If you want to combine more than three search options, use the Add Search Row button to add more search input rows. Clicking the X button will close the search input row. The current search library is displayed in the Search Library box. The default search library is your library or borrowing zone. If your library system has multiple branches or a borrowing zone that includes multiple libraries you can use the Search, your default search library is Sitka or the library you have selected on the homepage. You may use the Search Library box to select a different library or regional libraries, or all libraries in Sitka to search. By default, the search results are in order of greatest to least relevance. See Order of Results in the section called “Search Methodology”. In the Sort Results box you may select to order the search results by relevance, title, author, publication date, or popularity. When the Limit to Available checkbox is checked search results are limited based on an item’s current circulation status. Titles without available items in the selected search library will not be displayed. Item statuses that show as available are: Available, On Display, Onsite Consultation, Reserves, and Reshelving. When the Group Formats and Editions checkbox is checked all formats and editions of the same title are grouped as one result. For example, the DVD and the first and second print editions of Harry Potter and the Chamber of Secrets will appear together. When the Exclude Electronic Resources checkbox is checked electronic resources are not included in the search results. You can filter your search by: Publication Year Shelving Location All the search filters, with the exception of Shelving Location, rely on values entered into the Leader, 007, or 008 fields of the MARC record. Records with incorrect coding will not filter correctly.
http://docs.libraries.coop/sitka/_advanced_search.html
2018-11-12T20:17:09
CC-MAIN-2018-47
1542039741087.23
[]
docs.libraries.coop
Table of Contents Patrons with email addresses in Evergreen receive pre-due email reminders three days before items are due. Evergreen also generates email notices for overdues and holds. Optional customized print overdue letters are also available. Two optional patron account notices are also available. There are three email addresses on the notices besides the recipient’s email: From, Reply-To and Errors-To. The From address has to be the BC Libraries Cooperative’s email address. The Reply-To and Errors-To addresses are from the Sending email addresses for patron notices setting in the Library Settings Editor. You must specify a address in this setting. This ensures patron replies are directed to the email of your choice. Bounced emails are also directed to this email address so staff can alert patrons when there is a problem with their email. Patrons can opt out of receiving overdue and courtesy notice emails in My Account under Notification Preferences. Library pre-due notices are generated and sent via email to patrons three days before an item is due. Only patrons with email addresses in Evergreen receive pre-due notices. These emails are not spam and should not be marked as spam by either patrons or staff. The pre-due notice template can be customized at the federation or library level. Please contact Co-op support for customization. Libraries can opt out of pre-due notices using the org.opt_out_email_predue setting in the Library Settings Editor. One checkout will receive only one pre-due notice. If the due date is extended via Edit Due Date after the pre-due notice is sent out, no new notice will be generated. Staff is encouraged to use Renew or Renew with Specific Due Date function instead of Edit Due Date to make sure a second notice will be generated in the due course.
http://docs.libraries.coop/sitka/admin-notice.html
2018-11-12T20:43:51
CC-MAIN-2018-47
1542039741087.23
[]
docs.libraries.coop
LongPollingReceiveMessage.php LongPollingReceiveMessage.php demonstrates how to retrieve a message from a specified queue with a 20-second wait to reduce the number of empty responses returned. <?php require 'vendor/autoload.php'; use Aws\Sqs\SqsClient; use Aws\Exception\AwsException; /** * Receive SQS Queue with Long Polling * * This code expects that you have AWS credentials set up per: * */ $queueUrl = "QUEUE_URL"; $client = new SqsClient([ 'profile' => 'default', 'region' => 'us-west-2', 'version' => '2012-11-05' ]); try { $result = $client->receiveMessage(array( 'AttributeNames' => ['SentTimestamp'], 'MaxNumberOfMessages' => 1, 'MessageAttributeNames' => ['All'], 'QueueUrl' => $queueUrl, // REQUIRED 'WaitTimeSeconds' => 20, )); var_dump($result); } catch (AwsException $e) { // output error message if fails error_log($e->getMessage()); } Sample Details Service: sqs Last tested: 2018-09-20 Author: jschwarzwalder (AWS) Type: full-example
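After processing a message returned by receiveMessage, you would normally delete it from the queue. A minimal follow-up using the same client might look like the sketch below; the result and field names follow the SDK's documented result shape.
if (!empty($result->get('Messages'))) {
    $message = $result->get('Messages')[0];
    // Process $message['Body'] here, then remove the message from the queue.
    $client->deleteMessage([
        'QueueUrl' => $queueUrl,
        'ReceiptHandle' => $message['ReceiptHandle'],
    ]);
}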
https://docs.aws.amazon.com/code-samples/latest/catalog/php-sqs-LongPollingReceiveMessage.php.html
2018-11-12T20:58:39
CC-MAIN-2018-47
1542039741087.23
[]
docs.aws.amazon.com
The Customize Services step presents you with a set of tabs that let you review and modify your cluster setup. The Cluster Install wizard attempts to set reasonable defaults for each of the options. You are strongly encouraged to review these settings as your requirements might be more advanced. Ambari will group the commonly customized configuration elements together into four categories: Credentials, Databases, Directories, and Accounts. All other configuration will be available in the All Configurations section of the Installation Wizard Credentials Passwords for administrator and database accounts are grouped together for easy input. Depending on the services chosen, you will be prompted to input the required passwords for each item, and have the option to change the username used for administrator accounts Databases Some services require a backing database to function. For each service that has been chosen for install that requires a database, you will be asked to select which database should be used and configure the connectivity details for the selected database. Directories Choosing the right directories for data and log storage. Accounts The service account users and groups are also configurable from the Accounts, there are multiple options to choose how Ambari should handle user creation and modification: - Use Ambari to Manage Service Accounts and Groups Ambari will create the service accounts and groups that are required for each service if they do not exist in /etc/password, and in /etc/group of the Ambari Managed hosts. - Use Ambari to Manage Group Memberships Ambari will add or remove the service accounts from groups. - Use Ambari to Manage Service Accounts UID's Ambari will be able to change the UID’s of all service accounts. All Configurations Here you have an opportunity to review and revise the remaining configurations for your services. Browse through each configuration tab. Hovering your cursor over each of the properties, displays a brief description of what the property does. The number of service tabs shown here depends on the services you decided to install in your cluster. Any service with configuration issues that require attention will show up in the bell icon with the number properties that need attention. The bell popover contains configurations that require your attention, configurations that are highly recommended to be reviewed and changed, and configurations that will be automatically changed based on Ambari’s recommendations unless you choose to opt out of those changes. Required Configuration must be addressed in order to proceed on to the next step in the Wizard. Carefully review the required and recommended settings and address issues before proceeding After you complete Customizing Services, choose Next. Next Step More Information Using an existing or installing a default database Understanding service users and groups
https://docs.hortonworks.com/HDPDocuments/Ambari-2.7.1.0/bk_ambari-installation/content/customize_services.html
2018-11-12T20:56:51
CC-MAIN-2018-47
1542039741087.23
[array(['figures/2/figures/amb_install_wiz_cstmz_svcs.png', None], dtype=object) array(['figures/2/figures/amb_install_wiz_cstmz_svcs_all_configs.png', None], dtype=object) ]
docs.hortonworks.com
Commerce v1 Modules Admin HideProducts When this module is enabled, the Products tab is removed from the merchant dashboard. This can make for a cleaner experience, in particular when your merchant uses a different catalog for managing the actual product information. This does not unregister the underlying pages, so the product-related pages can still be used when linked to directly.
https://docs.modmore.com/en/Commerce/v1/Modules/Admin/HideProducts.html
2018-11-12T20:22:55
CC-MAIN-2018-47
1542039741087.23
[]
docs.modmore.com
osmium-sort (1) NAME website
https://docs.osmcode.org/osmium/latest/osmium-sort.html
2018-11-12T20:37:32
CC-MAIN-2018-47
1542039741087.23
[]
docs.osmcode.org
Plugin Installation First, generate a shared secret key used to authenticate incoming requests, and encrypt the response body. The secret should be 32 bytes. $ openssl rand -hex 16 558f3eacbfd5928157cbfe34823ab921 The plugin is distributed as a Docker container and can be started with the below command. In the below example the plugin is mapped to port 3000. $ docker run \ --env=SECRET_KEY=558f3eacbfd5928157cbfe34823ab921 \ --publish=3000:3000 \ drone/amazon-secrets The agent should be configured to connect with the plugin. You should provide the shared secret and plugin endpoint. $ docker run \ --env=DRONE_SECRET_SECRET=558f3eacbfd5928157cbfe34823ab921 \ --env=DRONE_SECRET_ENDPOINT= \ --name=agent \ drone/agent
https://docs.drone.io/extend/secrets/amazon/install/
2018-11-12T21:50:38
CC-MAIN-2018-47
1542039741087.23
[]
docs.drone.io
Get an insight of the latest features and everything that is new in this version. Initiate your understanding of the key concepts and architecture of OBM. Use this section for information about administration tasks for OBM. Resolve problems that you might encounter while installing and using OBM. Additional Resources This section lists additional resources that should help you understand how to make the most out of the Operations Bridge Suite capabilities.
https://docs.microfocus.com/itom/Operations_Bridge_Manager:2018.05.001/Home
2018-11-12T20:41:39
CC-MAIN-2018-47
1542039741087.23
[]
docs.microfocus.com
Pollerd can generate the following events in OpenNMS: The behavior to generate interfaceDown and nodeDown events is described in the Critical Service section. It deems all services on an interface to be Down if the critical service is Down. By default OpenNMS uses ICMP as the critical service. The following image shows how a Critical Service is used to generate these events.
http://docs.opennms.org/opennms/branches/jira-NMS-10437/guide-user/guide-user.html
2018-11-12T21:00:06
CC-MAIN-2018-47
1542039741087.23
[array(['images/service-assurance/01_node-model.png', '01 node model'], dtype=object) array(['images/service-assurance/02_service-assurance.png', '02 service assurance'], dtype=object) array(['images/service-assurance/03_node-outage-correlation.png', '03 node outage correlation'], dtype=object) array(['images/service-assurance/04_path-outage.png', '04 path outage'], dtype=object) array(['images/surveillance-view/01_surveillance-view.png', '01 surveillance view'], dtype=object) array(['images/surveillance-view/02_surveillance-view-config-ui.png', '02 surveillance view config ui'], dtype=object) array(['images/surveillance-view/03_surveillance-view-config-ui-edit.png', '03 surveillance view config ui edit'], dtype=object) array(['images/dashboard/01_dashboard-overall.png', '01 dashboard overall'], dtype=object) array(['images/dashboard/02_dashboard-surveillance-view.png', '02 dashboard surveillance view'], dtype=object) array(['images/dashboard/03_dashboard-alarms.png', '03 dashboard alarms'], dtype=object) array(['images/dashboard/04_dashboard-notifications.png', '04 dashboard notifications'], dtype=object) array(['images/dashboard/05_dashboard-outages.png', '05 dashboard outages'], dtype=object) array(['images/dashboard/06_dashboard-resource-graphs.png', '06 dashboard resource graphs'], dtype=object) array(['images/dashboard/07_dashboard-add-user.png', '07 dashboard add user'], dtype=object) array(['images/dashboard/08_dashboard-user-roles.png', '08 dashboard user roles'], dtype=object) array(['images/bsm/01_bsm-example-scenario.png', '01 bsm example scenario'], dtype=object) array(['images/bsm/02_bsm-service-hierarchy.png', '02 bsm service hierarchy'], dtype=object) array(['images/bsm/03_bsm-rca-action.png', '03 bsm rca action'], dtype=object) array(['images/bsm/04_bsm-rca-results.png', '04 bsm rca results'], dtype=object) array(['images/bsm/05_bsm-ia-action.png', '05 bsm ia action'], dtype=object) array(['images/bsm/06_bsm-ia-results.png', '06 bsm ia results'], dtype=object) array(['images/bsm/07_bsm-simulation.png', 'Simulation Mode'], dtype=object) array(['images/bsm/09_bsm-change-icon.png', '09 bsm change icon'], dtype=object) array(['images/alarms/01_alarm-notes.png', '01 alarm notes'], dtype=object)]
docs.opennms.org
Troubleshooting distribution list issues This topic discusses how to solve distribution list issues that you may run into when using Office 365. There could be a couple of issues here: It usually takes about 60 minutes for distribution list to be fully created and ready for management. Make sure you've waited the appropriate amount of time and try sending the email again. Sometimes, people create an Office 365 Group instead of a distribution list. Check out your distribution list in the admin center and make sure you created a distribution list. External members not getting emails sent to distribution list There could be a couple of issues here: Make sure you've allowed people outside your organization to send emails to the distribution. From the admin center, find the distribution list that you want to allow external people on and turn the toggle to On. External members don't receive email messages that are sent to a distribution list they're a member of, and the senders don't receive non-delivery message about the email. Read External members don't receive email... for steps on how to fix this issue. I'm an admin and I can't edit a distribution list in the admin center Make sure you have an Office 365 license. You need an Office 365 for business license before you can edit distribution lists in the admin center. Read Assign licenses to users in Office 365 for business for the steps.
https://docs.microsoft.com/en-us/office365/admin/troubleshoot-issues-for-admins/distribution-list-issues?redirectSourcePath=%252fpl-pl%252farticle%252frozwi%2525C4%252585zywanie-problem%2525C3%2525B3w-z-listy-dystrybucyjnej-79e180b1-6290-48ff-8003-fb3e6a3f70af&view=o365-worldwide
2018-11-12T21:00:36
CC-MAIN-2018-47
1542039741087.23
[]
docs.microsoft.com
Managing a Peer or Server Cache
http://gemfire81.docs.pivotal.io/docs-gemfire/basic_config/the_cache/managing_a_peer_server_cache.html
2018-11-12T20:10:35
CC-MAIN-2018-47
1542039741087.23
[]
gemfire81.docs.pivotal.io
To access iSCSI targets, your ESXi host uses iSCSI initiators. The initiator is software or hardware installed on your ESXi host. The iSCSI initiator originates communication between your host and an external iSCSI storage system and sends data to the storage system. In the ESXi environment, iSCSI adapters configured on your host play the role of initiators. ESXi supports several types of iSCSI adapters. Software iSCSI Adapter A software iSCSI adapter is VMware code built into the VMkernel that can connect to the iSCSI storage device through standard network adapters. The software iSCSI adapter handles iSCSI processing while communicating with the network adapter. With the software iSCSI adapter, you can use iSCSI technology without purchasing specialized hardware. Hardware iSCSI Adapter A hardware iSCSI adapter is a third-party adapter that offloads iSCSI and network processing from your host. Hardware iSCSI adapters are divided into dependent and independent categories. An example of an independent adapter is the QLogic QLA4052 adapter. Hardware iSCSI adapters might need to be licensed. Otherwise, they might not appear in the client or vSphere CLI. Contact your vendor for licensing information.
https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.storage.doc/GUID-7A4E3767-CB54-4E88-9BA8-298876119465.html
2018-11-12T21:45:01
CC-MAIN-2018-47
1542039741087.23
[]
docs.vmware.com
numpy.partition¶ numpy. partition(a, kth, axis=-1, kind='introselect', order=None)[source]¶ Return a partitioned copy of an array. Creates a copy of the array with its elements rearranged in such a way that the value of the element in k-th position is in the position it would be in a sorted array. All elements smaller than the k-th element are moved before this element and all equal or greater are moved behind it. The ordering of the elements in the two partitions is undefined. New in version 1.8.0. See also ndarray.partition - Method to sort an array in-place. argpartition - Indirect partition. sort - Full sorting Notes The various selection algorithms are characterized by their average speed, worst case performance, work space size, and whether they are stable. A stable sort keeps items with the same key in the same relative order. The available algorithms have the following properties: All the partition algorithms make temporary copies of the data when partitioning along any but the last axis. Consequently, partitioning along the last axis is faster and uses less space than partitioning along any other axis. The sort order for complex numbers is lexicographic. If both the real and imaginary parts are non-nan then the order is determined by the real parts except when they are equal, in which case the order is determined by the imaginary parts. Examples >>> a = np.array([3, 4, 2, 1]) >>> np.partition(a, 3) array([2, 1, 3, 4]) >>> np.partition(a, (1, 3)) array([1, 2, 3, 4])
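As a quick illustration of the behavior described above, the following sketch (plain NumPy, with arbitrarily chosen values) shows partitioning around a single index, the index-returning variant argpartition, and partitioning a 2-D array along its last axis:

import numpy as np

a = np.array([7, 1, 5, 3, 9, 2])

# After partitioning around kth=2, p[2] holds the value that would sit at
# index 2 in a fully sorted copy; smaller values land before it, larger or
# equal values after it, in no guaranteed order.
p = np.partition(a, 2)
assert p[2] == np.sort(a)[2]

# argpartition returns the indices that would produce that arrangement.
idx = np.argpartition(a, 2)
assert a[idx[2]] == np.sort(a)[2]

# Partitioning a 2-D array along the last axis treats each row independently.
m = np.array([[3, 4, 2, 1],
              [9, 7, 8, 6]])
print(np.partition(m, 1, axis=-1))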
https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.partition.html
2018-11-12T19:38:05
CC-MAIN-2018-47
1542039741087.23
[]
docs.scipy.org
numpy.random.binomial¶ numpy.random. binomial(n, p, size=None)¶ Draw samples from a binomial distribution. See also scipy.stats.binom - probability density function, distribution or cumulative density function, etc. Notes The probability density for the binomial distribution is P(N) = \binom{n}{N} p^N (1-p)^{n-N}, where n is the number of trials, p is the probability of success, and N is the number of successes.
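A minimal usage sketch (values chosen arbitrarily) that draws samples and checks them against the probability mass function given above:

import numpy as np

# Draw 10,000 samples with n=10 trials and success probability p=0.5.
samples = np.random.binomial(10, 0.5, size=10000)

# The sample mean should be close to the theoretical mean n*p = 5.
print(samples.mean())

# Empirical estimate of P(N=3); compare with the PMF value
# C(10,3) * 0.5**3 * 0.5**7 ~= 0.117.
print((samples == 3).mean())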
https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.random.binomial.html
2018-11-12T19:39:44
CC-MAIN-2018-47
1542039741087.23
[]
docs.scipy.org
Version Control Integration Our version control integration allows you to hook up your repository to a test case on the Testable platform. Currently this is only supported when creating Webdriver.io Selenium scenarios. To manage your repositories, go to Account => Version Control. New Version Control Root Add integration to a new repository. - Name: Give it a name that can be used as a label/reference within Testable - Type: Currently only git repositories are supported - Url: The URL to the git repository. Supports http(s) as well as SSH URL formats. - Default Branch: When cloning the repository onto the test runner, the default branch will be used unless overridden in a particular test scenario. - Authentication: Public repositories can be cloned with no auth. For private ones, both username/password and SSH authentication are supported. Once you have filled in all the details, try out the connection using the Test Connection button. Note: if you run your tests on-premises, you can connect to on-premises Git repositories that may not be available externally. However, the Test Connection feature will not work in this case, as it will attempt to connect to your repository via the Testable central servers. Authentication Both username and SSH style authentication are supported. Username To authenticate via username and password you must use an HTTP(S) git URL. Simply add your username/password to the repository configuration on Testable and test the connection to see if it worked. SSH To authenticate via SSH, use an SSH style URL (e.g. [email protected]:testable/wdio-testable-example.git). Upload your private key to the repository configuration on Testable and test the connection to see if it worked. If using GitHub, see their SSH documentation for more details on the setup required on their side. Edit/Delete Repository Click on a row in the version control repository list and select Edit or Delete from the dropdown in the upper right of the Account => Version Control page. Usage Within Testable Currently only Webdriver.io Selenium scenarios support version control integration. In the future this integration will be much more extensive, stay tuned!
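Outside of the Testable UI, the clone behavior described above can be sketched with GitPython; the URLs, paths, and branch name below are placeholders, not values taken from the Testable documentation:

from git import Repo  # GitPython

# SSH-style URL: the runner (or your machine) must have the matching
# private key configured, mirroring the "upload your private key" step.
ssh_url = "git@example.com:org/repo.git"
Repo.clone_from(ssh_url, "/tmp/ssh-checkout", branch="master")

# HTTP(S) URL with username/password embedded, mirroring the
# username/password authentication option.
https_url = "https://user:app-password@example.com/org/repo.git"
Repo.clone_from(https_url, "/tmp/https-checkout")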
https://docs.testable.io/guides/vcs.html
2018-11-12T20:48:23
CC-MAIN-2018-47
1542039741087.23
[array(['/images/documentation/vcs-list.png', 'Manage Manage'], dtype=object) array(['/images/documentation/vcs-add.png', 'Add Add'], dtype=object) array(['/images/documentation/vcs-username.png', 'Usernme Username'], dtype=object) array(['/images/documentation/vcs-ssh.png', 'SSH SSH'], dtype=object) array(['/images/documentation/vcs-wdio.png', 'Wdio Wdio'], dtype=object)]
docs.testable.io
Alfresco Outlook Integration allows you to use email and repository management without having to leave Microsoft Outlook. Version 1 of Alfresco Outlook Integrationhas limited support only and has a reduced set of features. For example; advanced metadata support, sorting, paging and extended search capabilities are available in later Outlook Integration versions only. For information about using Alfresco Outlook Integration, see Using Alfresco from Microsoft Outlook [1]. For information about installing and configuring Alfresco Outlook Integration, see Installing and configuring Alfresco Outlook Integration [2]. The. Click Open to open a new Outlook window where you can view the email. Set automatic archiving using the Alfresco Outlook Client, or archive emails directly. Alternatively, click Search to browse to find a folder and site. See Adding metadata to email during archiving in Outlook [14] for more information about adding metadata. When you archive an email in Outlook, the email is saved in Alfresco. If archiving is configured to allow metadata entry, a dialog appears before transfer of the email to Alfresco. In Outlook, use the Alfresco sidebar to browse and work with your connected Alfresco repository. This option displays a new window on the right side of the screen, called Alfresco Outlook Client. The options available to you are shown in the right-click context menu. See Configuring extended settings in Outlook [15] for more information on configuration settings. Set preferences, drag and drop, search, start workflows, and use the context menu in the Alfresco sidebar. If you can't see the Alfresco sidebar you might not have it configured. See step 4 of Configuring extended settings in Outlook [15] for information about displaying the Alfresco sidebar. Favourites is a combination of favorite documents and folders. The scope is repeated below the search box, for example, if the scope is set to Repository, the text below the search box reads Searching in Repository. For a simple search, type directly in the search box, where it says Enter your search criteria. For an advanced search, click the icon and enter your chosen criteria. The simple search automatically searches the names, titles, descriptions, content and tags of the stored documents and emails. The use of a wildcard (*) is not necessary. The search works in the same way as the search in Alfresco Share. The advanced search offers the option to search within file names, email contents and subject lines as well as specific timeframes. If you hover over the title of an item in the returned search results, a preview of the document is loaded in Outlook. You can start and view workflows from the Alfresco sidebar, following rules that are set in Alfresco Share. Upcoming appointments and tasks are shown in the left pane. View your archived emails in Alfresco, just like any other files in Alfresco. Email filters allow you to search for the archived emails in a site or across Alfresco. In the simple Alfresco view, view the properties of each archived email. In the detailed Alfresco view, HTML and rich text emails, and attachments are displayed as a color preview. In the Document Actions panel, attach a file to a new email as a link by selecting the Email as link option. Also, if a MSG file is saved, open it using the MSG file button in the preview. All other options remain available. Use the advanced Alfresco search to find archived emails by using the option Look for: Emails from the Advanced Search toolbar. 
You can drag and drop emails in and out of the repository, and add metadata automatically when an email is filed. Other features include leveraging Alfresco's in-built workflow processing and filtered search capabilities. You can apply a sorted view to the Alfresco repository (from within Microsoft Outlook), and page through a folder or site if it contains a large number of files. This information helps system administrators to install, configure and manage Alfresco Outlook Integration. You require one of each of the following components: Version 1 of Alfresco Outlook Integration is no longer available for download. You need to install a later version. Check the output from the script to ensure that the three AMP files have installed successfully. In the left Tools panel, scroll down and under Email Client there are the following options for configuration: In the left Tools panel, scroll down and under Email Client there are the following options for configuration: Alternatively, specify the client license in Microsoft Outlook. The server license status, number of current users, maximum users, product version and other information is displayed. Check that the license status is valid. The Alfresco Outlook Client installer checks whether the required components already exist on the system. The required files are installed and the Alfresco Outlook Client installer wizard opens. Alternatively, accept the default path specified. Settings are stored in the default user profile folder if you do not specify a folder. There is a new Alfresco Client tab on the toolbar. Click this tab to see new options for configuring the Alfresco Outlook Client. If you did not enter a client license key in Alfresco Share, you must enter one when you open Microsoft Outlook. Navigate to Alfresco Client > Configure > License to enter your key. Configure specific client settings in Microsoft Outlook using the Alfresco Client toolbar tab and server settings in Alfresco Share by using the Share Admin Tools menu. On the Microsoft Outlook toolbar, there is an Alfresco Client tab, with the following entries: You can configure email integration settings for Alfresco Outlook Integration using Share Admin Tools. These settings define global controls across your enterprise. In the left Tools panel, scroll down and under Email Client there are the following options for configuration: Options are All public sites, My sites or Favorite sites. The Module version field displays the version of the Alfresco Outlook Client. If you select this option, the Allow overwriting and Configuration XML content fields are active. If you check Allow overwriting, users are able to change their settings locally. Paste the XML code that contains the configuration settings for the Alfresco Outlook Client into the Configuration XML content field, or load and edit the default configuration template by clicking Load default configuration template. For more information about configuration templates, see Alfresco Outlook Integration configuration templates [31]. If Enable attachment stripping is enabled, the Target site field becomes mandatory (in order that the files are stored in the designated repository). Only one site can be specified in this field. Wildcard characters cannot be used in these fields, and if selected, they cannot be left blank. You can view and edit other settings for Alfresco Outlook Integration using Share Admin Tools. These settings define global controls across your enterprise. 
In the left Tools panel, scroll down and under Email Client there are the following options for configuration: In the list of access tokens, there is information about logged-in users. See Installing server and client licenses in Alfresco Share [32] for more information about installing licenses. Configure Microsoft Outlook to find and connect to the correct Alfresco server. Type only the information before /share. For example, address or server name:port number. If you select standard authentication, enter your Alfresco user name and password. If you select Windows authentication, the passthru authentication is used. For more information about authentication subsystem types, see Authentication subsystem types [33]. You can configure Microsoft Outlook to archive email in Alfresco, including archiving emails as links. The Configure Alfresco Outlook Client screen is displayed with tabs for Connection, Email Archiving, Extended, Configuration, and License. Archive as link saves the email (the original EML file) in Alfresco and a partial email (without text and attachments) in Microsoft Outlook. Configure Outlook extended settings; for example, you can change the display language, Alfresco settings, or drag and drop priorities. The Configure Alfresco Outlook Client screen is displayed with tabs for Connection, Email Archiving, Extended, Configuration, and License. Set the configuration template to import when the configuration dialog is called for the first time. The Configure Alfresco Outlook Client screen is displayed with tabs for Connection, Email Archiving, Extended, Configuration, and License. You are specifying whether the old configuration should be replaced or added to (extended). If you choose to extend the existing configuration, you will be prompted each time there is a duplicate configuration setting to confirm that you want the setting to be overwritten. Click Apply central settings if you want to apply settings that have been defined in the Alfresco Share Admin Tools > Email Client > Email Integration Settings > Auto configure all clients section. For more information, see Configuring email integration settings in Alfresco Share [35].
<?xml version="1.0" encoding="utf-8"?>
<settings>
  <general isSitesRoot="true" />
  <server url="" alfrescoUrl="alfresco" shareUrl="share" webApp="0" user="test" password="" ntlm="false"/>
  <logging maxFileSize="0" minLevel="info" />
  <storage storeFiles="false" metadata="false" storeMsg="true" storeLink="false"></storage>
  <explorer-search-properties>
    <search-property>
      <name>cm:name</name>
      <label-en>Name</label-en>
      <label-de>Name</label-de>
      <type>string</type>
    </search-property>
  </explorer-search-properties>
  <search-properties>
    <search-property>
      <name>cm:name</name>
      <label-en>Name</label-en>
      <label-de>Name</label-de>
      <type>string</type>
    </search-property>
  </search-properties>
</settings>
Use this information to troubleshoot problems with Alfresco Outlook Integration. You have exceeded the maximum number of users. Contact your support representative. 04140048 LICENSE_USERS_EXCEEDED To avoid this problem, specify a different user for each Alfresco Outlook Client installation. Links: [1] [2] [3] [4] [5] [6] [7] [8] [9] [10] [11] [12] [13] [14] [15] [16] [17] [18] [19] [20] [21] [22] [23] [24] [25] [26] [27] [28] [29] [30] [31] [32] [33] [34] [35]
http://docs.alfresco.com/print/book/export/html/903538
2018-11-12T20:01:34
CC-MAIN-2018-47
1542039741087.23
[]
docs.alfresco.com
Contents. This chapter contains essential information that every Ceph developer should know. The Ceph project is led by Sage Weil. In addition, each major project component has its own lead; the table below lists all leads and their nicknames on GitHub. The Ceph-specific abbreviations in the table above are explained in the Architecture section below. Ceph is free software. Unless stated otherwise, the Ceph source code is distributed under the terms of the LGPL2.1. For full details, see the file COPYING in the top-level directory of the source-code tree. Although GitHub is used for code, Ceph-related issues (Bugs, Features, Backports, Documentation, etc.) are tracked in a separate tracker, which is powered by Redmine. The tracker has a Ceph project with a number of subprojects loosely corresponding to the project components listed in Architecture. Mere registration in the tracker automatically grants permissions sufficient to open new issues and comment on existing ones. To report a bug or propose a new feature, go to the Ceph project and click New issue. Ceph development discussions take place on the [email protected] mailing list. The list is open to everyone; to subscribe, send the line subscribe ceph-devel as the body of a message to [email protected]. Without bugs, there would be no software, and without software, there would be no software developers. —Unknown The issue tracker, the source code repository, and patch submission were introduced above; this section explains in more detail how they fit together in the basic Ceph development workflow. The Ceph source code is maintained in the ceph/ceph repository on GitHub. The GitHub web interface provides a key feature for contributing code to the project: the pull request. Newcomers who are uncertain how to use pull requests may read this GitHub pull request tutorial. For some ideas on what constitutes a “good” pull request, see the Git Commit Good Practice article at the OpenStack project wiki. Ceph is a collection of components built on top of RADOS that provide services (RBD, RGW, CephFS) and APIs (S3, Swift, POSIX) for the user to store and retrieve data. See Architecture for an overview of Ceph architecture. The following sections treat each of the major architectural components in more detail, with links to code and tests. RADOS stands for “Reliable, Autonomic Distributed Object Store”. In a Ceph cluster, all data are stored in objects, and RADOS is the component responsible for that. RADOS itself can be further broken down into Monitors, Object Storage Daemons (OSDs), and client APIs (librados). Monitors and OSDs are introduced at Intro to Ceph. The client library is explained at Ceph Storage Cluster APIs. RGW stands for RADOS Gateway. Using the embedded HTTP server civetweb, RGW provides a REST interface to RADOS objects. A more thorough introduction to RGW can be found at Ceph Object Gateway. RBD stands for RADOS Block Device. It enables a Ceph cluster to store disk images, and includes in-kernel code enabling RBD images to be mounted. To delve further into RBD, see Ceph Block Device.
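To make the librados client API mentioned above a little more concrete, here is a small sketch using the python-rados bindings; the pool name and object name are placeholders, and the cluster is assumed to be reachable through a standard /etc/ceph/ceph.conf:

import rados

# Connect to the cluster using the local ceph.conf and default keyring.
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()

# Open an I/O context on an existing pool (placeholder name).
ioctx = cluster.open_ioctx("mypool")

# Write an object into RADOS and read it back.
ioctx.write_full("greeting", b"hello from librados")
print(ioctx.read("greeting"))

ioctx.close()
cluster.shutdown()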
http://docs.ceph.org.cn/dev/
2018-11-12T20:15:09
CC-MAIN-2018-47
1542039741087.23
[]
docs.ceph.org.cn
Actions, Resources, and Condition Keys for AWS Performance Insights AWS Performance Insights (service prefix: pi) provides the following service-specific resources, actions, and condition context keys for use in IAM permission policies. Topics: Actions Defined by AWS Performance Insights; Resources Defined by AWS Performance Insights; Condition Keys for AWS Performance Insights. Performance Insights has no service-specific context keys that can be used in the Condition element of policy statements. For the list of the global context keys that are available to all services, see Available Keys for Conditions in the IAM Policy Reference.
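As a hedged illustration of how the pi service prefix is used in a policy, the sketch below creates a read-only policy with boto3; the action names and the wildcard resource are assumptions to verify against the actions table, not values taken from this page:

import json
import boto3

# Assumed Performance Insights action names (service prefix "pi");
# confirm them against the IAM actions table before granting access.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["pi:DescribeDimensionKeys", "pi:GetResourceMetrics"],
        "Resource": "*",
    }],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="PerformanceInsightsReadOnlyExample",
    PolicyDocument=json.dumps(policy_document),
)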
https://docs.aws.amazon.com/IAM/latest/UserGuide/list_awsperformanceinsights.html
2018-11-12T20:19:46
CC-MAIN-2018-47
1542039741087.23
[]
docs.aws.amazon.com
Release Notes Software version: This document is an overview of the changes made to Operations Bridge Analytics (OBA) . What's new in OBA Anomaly detection: OBA is enhanced with a new algorithm to detect the deviation in the collected data and display anomalies based on sensitivity for the entities available in the environment. The OBA release focuses on quality, scalability, and performance. For a list of list of defect fixes and enhancements, see Fixed defects. Fixed defects OBA includes the following fixed issues. The reference number for each fixed defect is the Change Request (QCCR) number. For more information about fixed defects, visit Software Support Online, or contact your support representative directly. Known issues, limitations, and workarounds Problems and limitations are identified with a change request (QCCR) number. For more information about known problems, visit, or contact your Support representative directly. Title: Upgrade from the version 3.01 to 3.04 (version 3.01 to 3.02 then to 3.03 and finally upgraded to 3.04) fails because of old Vertica client (QCCR8D103500). Description: The database updates are missing when OBA is upgraded from the older versions such as 3.01, 3.02 to 3.04. Workaround: If you upgrade from an older version (3.01, 3.02) of OBA to 3.04, then run the following step after extracting the packages and before starting the upgrade: Install the new vertica client driver on all machines (server and collectors) - rpm -U Analytics_Installation/packages/vertica-client-8.1.1-0.x86_64.rpm. Title: Anamoly events are not available in OBM UI ( QCCR8D103475). Description: Anomalies detected in OBA are not available in OBM UI ( Event browser) Workaround: None. Title: Anomalies are displayed wrongly if the timestamp of the incoming breaches are not in chronological order (QCCR8D103360). Description: If the timestamps of the incoming breaches are not in chronological order, anomalies are not displayed correctly. Workaround: None. Title: After configuring OBA for SSL - Health overview is not working anymore (QCCR8D99651). Description: After configuring OBA for SSL, Health overview does not work. Workaround: None. Title: Downgrade to 3.03 from 03.04.140 fails (Some processes are not running) (QCCR8D103539). Description: After downgrading from 03.04.140 to 03.03 some of the opsa processes are not running on server and collector. Workaround: None. Title: OBA 3.04 Upgrade: OMi Events dashboard moved from category "Predefined" to "Shared" after upgrade (QCCR8D103385). Description: After upgrading OBA from 3.03 with consolidated hotfix2 to OBA 3.04 in the E2E Exelon environment, the dashboard "OMi Events" is listed in the "Shared" category instead of "Predefined" category. Workaround: To resolve this issue, follow these steps: - As a user opsa, run the following commands on the OBA server: [opsa@oba bin]$ cd /opt/HP/opsa [opsa@oba bin]$ ./opsa-dashboard-manager.sh -u opsatenantadmin -p <password> -i -f opt/HP/opsa/inventory/lib/hp/dashboard/default.xml -o - Delete the OMi Events dashboard from OBA 3.03: Title: Log search for terms containing minor separators don't not work (QCCR8D64562). Description: Searching with special characters does not work. For example, a query for #344 does not match rows like #344:. Workaround: Only search using numbers and characters, not special characters. Title: OBA UI requests XQL suggestions (QCCR8D94989). Description: The OBA UI requests suggestions for XQL even if you are not in the log search UI. 
This request generates unnecessary load on the Vertica database. Title: After installation there are no metrics displayed in the "Health Overview" dashboard (QCCR8D96577). Description: After the installation of OBA 3.03, no metrics are displayed in the "Health Overview" dashboard. Workaround: Restart the OBA collector after installing OBA 3.03. Title: After installation there are no indices on column: opsa_default.opsa_collection_message.message (QCCR8D96576). Description: After the installation of OBA 3.03, the text search does not work. Workaround: Restart the OBA application server after installing OBA 3.03. Title: OPSA flume.conf wipes its previous configuration state on adding new collection (QCCR8D94638). Description: In an environment with multiple OBA application servers, the servers overwrite each other's collector configurations. Workaround: Configure the collections with only one server. Title: The events in Splunk are only partly imported into OBA (QCCR8D99344). Description: The events in Splunk cannot be completely imported into OBA. Each message imported into OBA only contains part of the content. Workaround: This is due to the default regular expression OBA uses to parse the Splunk data. You can modify the regular expression in the Splunk Source Type Mapping UI so that it fits the data coming in from Splunk. Title: While upgrading from OBA 3.02 to 3.03, the error occurs due to wrong standalone.xml file ( QCCR8D100531) Description: The following error appears - Error occurred while configuring SSL: null Workaround: None
https://docs.microfocus.com/itom/Operations_Bridge_Analytics:3.04/ReleaseNotes
2018-11-12T20:28:58
CC-MAIN-2018-47
1542039741087.23
[]
docs.microfocus.com
Set up directory synchronization for Office 365 Office 365. Office 365 directory synchronization You can either use synchronized identity or federated identity between your on-premises organization and Office 365. With synchronized identity, you manage your users on-premises, and they are authenticated by Azure AD when they use the same password in the cloud as on-premises. This is the most common directory synchronization scenario. Pass-through authentication or Federated identity, allows you to manage your users on-premises and they are authenticated by your on-premises directory. Federated identity requires additional configuration and enables your users to only sign in once. For details, read Understanding Office 365 Identity and Azure Active Directory. Want to upgrade from Windows Azure Active Directory sync (DirSync) to Azure Active Directory Connect? If you are currently using DirSync and want to upgrade, head over to azure.com for upgrade instructions. Prerequisites for Azure AD Connect You get a free subscription to Azure AD with your Office 365 subscription. When you set up directory synchronization, you will install Azure Active Directory Connect on one of your on-premises servers. For Office 365 you will need to: - Verify your on-premises domain (the procedure will guide you through this). - Have Assign admin roles in Office 365 for business permissions for your Office 365 tenant and on-premises Active Directory. For your on-premises server on which you install Azure AD Connect you will need the following software: Note If you're using Azure Active Directory DirSync, the maximum number of distribution group members that you can synchronize from your on-premises Active Directory to Azure Active Directory is 15,000. For Azure AD Connect, that number is 50,000. To more carefully review hardware, software, account and permissions requirements, SSL certificate requirements, and object limits for Azure AD Connect, read Prerequisites for Azure Active Directory Connect. You can also review the Azure AD Connect version release history to see what is included and fixed in each release. To set up directory synchronization Sign in to the Office 365 admin center and choose Users > Active Users on the left navigation. In the Office 365 admin center, on the Active users page, choose ** More ** > Directory synchronization. On the ** Is directory sync right for you? ** page, the two first choices of 1-10, and 11-50 result in "Based on the size of your organization, we recommend that you create and manage users in the cloud. Using directory synchronization will make your setup more complex. Go to Active users to add your users." You can still, however, continue setting up directory synchronization by choosing Continue here on the bottom of the page. If you select the two latter choices, 51-250 or 251 or greater, the synchronization setup will recommend directory synchronization. Choose Next to continue. On the Sync your local directory with the cloud, read the information, and if you want more information, choose the learn more link that goes to: Prepare to provision users through directory synchronization to Office 365, and then choose Next. On the Let's check your directory page, review the requirements for automatically checking your directory. If you meet the requirements, choose Next > Start scan. If you can't meet the requirements you can still continue by choosing continue manually. 
If you select to scan your directories, choose Start scan on the Evaluating directory synchronization setup page. Follow the instructions to download and run the scan. Once the scan is complete, return to the setup wizard, and choose Next to see your scan results. Verify your domains as instructed on the Verify Ownership of your domains page. For detailed instructions, see Create DNS records for Office 365 when you manage your DNS records. Important After you have added a TXT record to verify you own your domain, do not go to the next step of adding users in the domains wizard. The directory synchronization will add users for you. Return to the Office 365 Setup page and choose Refresh On the Your domains are ready page, choose Next. On the Clean up your environment page, optionally follow the instructions to download IDFix to check your Active Directory. Choose Next to continue. On the ** Run Azure Active Directory Connect ** page, choose Download to install Azure AD Connect wizard. Note At this point you will be in the Azure AD Connect wizard. Make sure you leave the directory synchronization wizard page you were last on open in your browser, so you can return to it after the Azure AD Connect steps are done. After Azure AD Connect wizard has installed it will automatically open. You can also open it from your desktop, the default install site. Follow the wizard instructions depending on your scenario: For directory synchronization with password hash synchronization, use Azure AD Connect with express settings. For multiple forests, pass-through authentication, federated identity and SSO options, use Custom Installation of Azure AD Connect. Select Customize on the Express Settings page to use these options. After the Azure AD Connect wizard is done, return to the Office 365 Setup wizard, and follow the instructions on the Make sure sync worked as expected page. Choose Next to continue. Read the instructions on the ** Activate users ** page and then choose Next. Choose Finish on the You're all setup page. Assign licences to synchronized users After you have synchronized your users to Office 365, they are created but you need to assign licenses to them so they can use Office 365 features, such as mail. For instructions, see Assign licenses to users in Office 365 for business. Finish setting up domains Follow the steps in Create DNS records for Office 365 when you manage your DNS records to finish setting up your domains.
https://docs.microsoft.com/en-us/office365/enterprise/set-up-directory-synchronization?redirectSourcePath=%252fid-id%252farticle%252fMenyiapkan-sinkronisasi-direktori-untuk-Office-365-1B3B5318-6977-42ED-B5C7-96FA74B08846
2018-11-12T20:53:14
CC-MAIN-2018-47
1542039741087.23
[]
docs.microsoft.com
Release notes for Gluster 3.12.8 This is a bugfix release. The release notes for 3.12.0, 3.12.1, 3.12.2, 3.12.3, 3.12.4, 3.12.5, 3.12.6, 3.12.7 contain a listing of all the new features that were added and bugs fixed in the GlusterFS 3.12 stable release. Bugs addressed A total of 9 patches have been merged, addressing 9 bugs - #1543708: glusterd fails to attach brick during restart of the node - #1546627: Syntactical errors in hook scripts for managing SELinux context on bricks - #1549473: possible memleak in glusterfsd process with brick multiplexing on - #1555161: [Rebalance] ENOSPC errors on few files in rebalance logs - #1555201: After a replace brick command, self-heal takes some time to start healing files on disperse volumes - #1558352: [EC] Read performance of EC volume exported over gNFS is significantly lower than write performance - #1561731: Rebalance failures on a dispersed volume with lookup-optimize enabled - #1562723: SHD is not healing entries in halo replication - #1565590: timer: Possible race condition between gf_timer_* routines
https://gluster.readthedocs.io/en/latest/release-notes/3.12.8/
2018-11-12T19:47:27
CC-MAIN-2018-47
1542039741087.23
[]
gluster.readthedocs.io
Plugin Name: NagiosOutput Specialized output plugin that listens for Nagios external command message types and delivers passive service check results to Nagios using either HTTP requests made to the Nagios cmd.cgi API or the use of the send_nsca binary. The message payload must consist of a state followed by a colon and then the message e.g., “OK:Service is functioning properly”. The valid states are: OK|WARNING|CRITICAL|UNKNOWN. Nagios must be configured with a service name that matches the Heka plugin instance name and the hostname where the plugin is running. Config: An HTTP URL to the Nagios cmd.cgi. Defaults to. Username used to authenticate with the Nagios web interface. Defaults to empty string. Password used to authenticate with the Nagios web interface. Defaults to empty string. Specifies the amount of time, in seconds, to wait for a server’s response headers after fully writing the request. Defaults to 2. Must match Nagios service’s service_description attribute. Defaults to the name of the output. Must match the hostname of the server in Nagios. Defaults to the Hostname attribute of the message. New in version 0.5. Use the send_nsca program, as provided, rather than sending HTTP requests. Not supplying this value means HTTP will be used, and any other send_nsca_* settings will be ignored. New in version 0.5. Arguments to use with send_nsca, usually at least the nagios hostname, e.g. [“-H”, “nagios.somehost.com”]. Defaults to an empty list. New in version 0.5. Timeout for the send_nsca command, in seconds. Defaults to 5. New in version 0.5. Example configuration to output alerts from SandboxFilter plugins: [NagiosOutput] url = "" username = "nagiosadmin" password = "nagiospw" message_matcher = "Type == 'heka.sandbox-output' && Fields[payload_type] == 'nagios-external-command' && Fields[payload_name] == 'PROCESS_SERVICE_CHECK_RESULT'" Example Lua code to generate a Nagios alert: inject_payload("nagios-external-command", "PROCESS_SERVICE_CHECK_RESULT", "OK:Alerts are working!")
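A small helper, shown here only to illustrate the "STATE:message" payload convention described above (it is not part of Heka), could look like this:

VALID_STATES = {"OK", "WARNING", "CRITICAL", "UNKNOWN"}

def build_nagios_payload(state, message):
    # Build the "STATE:message" string expected by the NagiosOutput plugin.
    if state not in VALID_STATES:
        raise ValueError("invalid Nagios state: %s" % state)
    return "%s:%s" % (state, message)

print(build_nagios_payload("OK", "Alerts are working!"))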
https://hekad.readthedocs.io/en/v0.10.0/config/outputs/nagios.html
2018-11-12T19:56:53
CC-MAIN-2018-47
1542039741087.23
[]
hekad.readthedocs.io
Welcome to Cython’s Documentation¶ Also see the Cython project homepage. - Getting Started - Tutorials - Users Guide - - Implementing the buffer protocol - Using Parallelism - Debugging your Cython program - Cython for NumPy users - Indices and tables - Reference Guide
http://docs.cython.org/en/latest/
2016-08-31T14:11:40
CC-MAIN-2016-36
1471982290634.12
[]
docs.cython.org
For. Issues for Reflections and Refractions The maps used to create reflections or refractions, Flat Mirror, Raytrace, Reflect/Refract, and Thin Wall Refraction, are supported by the mental ray renderer. However, the mental ray renderer simply uses these maps as indications to use its own ray-tracing method, leading to some restrictions on which parameters are supported, as described in the sections “Materials” and “Maps,” below. When reflections and refractions are ray traced, applying Blur (or Distortion, in Flat Mirror) does not apply to reflections or refractions of environment maps. In general, Blur and Distortion render differently than they do with the default scanline renderer, and you might have to experiment with parameter values to get a comparable rendering result. The mental ray renderer does not support these materials: The mental ray renderer supports all Raytrace material settings except for the antialiasing parameters and the settings found under Rendering Raytracer Settings and Rendering Raytrace Global Include/Exclude. All these options are specific to the default scanline renderer. The mental ray renderer can't use the Progressive JPEG (.jpg) format as a bitmap. Also, Summed Area filtering is not supported (in the Filtering group of the Bitmap Parameters rollout). PSD files are supported, but are translated into binary data, and because of this, consume a lot of memory and increase render time. To reduce the time involved, convert the PSD file to a format such as BMP. The same is true of TIFF files. In addition, there are certain TIFF subformats that the mental ray renderer does not support; specifically, LZW, CCIT (fax), or JPEG compression; non-RGB color models such as CMYK, CIE, or YCbCr; or multiple images in the same file (in this case, only the first image is used). The mental ray renderer does support bilevel (1-bit), grayscale (4- or 8-bit), color map (4- or 8-bits), RGB(A) (8-, 16-, or 32-bit) TIF images, and TIF files with image strips. The mental ray renderer doesn't support this map. Flat Mirror is supported by the mental ray renderer, except for the First Frame Only and Every Nth Frame parameters. The mental ray renderer supports all Raytrace map settings except for the antialiasing parameters. This map tells the mental ray renderer to use ray-traced reflections and refractions. Most parameters are supported, but the parameters Blur Offset, First Frame Only, Every Nth Frame, and Atmosphere Ranges are not supported.
http://docs.autodesk.com/3DSMAX/15/ENU/3ds-Max-Help/files/GUID-0DA844D1-50DC-47FF-AD5A-C0C3FFF9A1A6.htm
2016-08-31T14:16:03
CC-MAIN-2016-36
1471982290634.12
[]
docs.autodesk.com
Developer Guide The Developer section explains the key concepts and features that are used to build InsightEdge and XAP applications. The topics in this section cover a wide range of information, such as XAP and InsightEdge APIs, data modeling, indexing, memory management, Space replication, web application support, and platform interoperability, along with in-depth explanations of the Space interface and the Processing Unit. Refer to the following topics and more: - The Space Interface - The Processing Unit - XAP Data Modeling - Platform Interoperability - Running Analytics
https://docs.gigaspaces.com/xap/14.0/dev-java/
2019-04-18T19:10:54
CC-MAIN-2019-18
1555578526228.27
[]
docs.gigaspaces.com
Account Management and Administration Account management and administration takes place in the Dashboard interface. From there, all users can manage their basic account information and access the Inbox interface, Activity interface, and Assignments interface. Administrators can perform more actions, like setting user permissions or managing admin-only system components, for example, branches, taxonomy, and publishing plugins.
https://docs.easydita.com/docs/user-guide/171/account-management-and-administration
2019-04-18T19:01:44
CC-MAIN-2019-18
1555578526228.27
[]
docs.easydita.com
STL_VACUUM Displays row and block statistics for tables that have been vacuumed. The table shows information specific to when each vacuum operation started and finished, and demonstrates the benefits of running the operation. For information about the requirements for running this command, see the VACUUM command description. This table is visible only to superusers. For more information, see Visibility of Data in System Tables and Views. Table Columns Sample Queries The following query reports vacuum statistics for table 108313. The table was vacuumed following a series of inserts and deletes. select xid, table_id, status, rows, sortedrows, blocks, eventtime from stl_vacuum where table_id=108313 order by eventtime; xid | table_id | status | rows | sortedrows | blocks | eventtime -------+----------+----------------------+------------+------------+--------+--------------------- 14294 | 108313 | Started | 1950266199 | 400043488 | 280887 | 2016-05-19 17:36:01 14294 | 108313 | Finished | 600099388 | 600099388 | 88978 | 2016-05-19 18:26:13 15126 | 108313 | Skipped(sorted>=95%) | 600099388 | 600099388 | 88978 | 2016-05-19 18:26:38 At the start of the VACUUM, the table contained 1,950,266,199 rows stored in 280,887 1 MB blocks. In the delete phase (transaction 14294) completed, vacuum reclaimed space for the deleted rows. The ROWS column shows a value of 400,043,488, and the BLOCKS column has dropped from 280,887 to 88,978. The vacuum reclaimed 191,909 blocks (191.9 GB) of disk space. In the sort phase (transaction 15126), the vacuum was able to skip the table because the rows were inserted in sort key order. The following example shows the statistics for a SORT ONLY vacuum on the SALES table (table 110116 in this example) after a large INSERT operation: vacuum sort only sales; select xid, table_id, status, rows, sortedrows, blocks, eventtime from stl_vacuum order by xid, table_id, eventtime; xid |table_id| status | rows |sortedrows|blocks| eventtime ----+--------+-----------------+-------+----------+------+-------------------- ... 2925| 110116 |Started Sort Only|1379648| 172456 | 132 | 2011-02-24 16:25:21... 2925| 110116 |Finished |1379648| 1379648 | 132 | 2011-02-24 16:26:28...
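The same query can also be run programmatically; the sketch below uses psycopg2 with placeholder connection details (remember that STL_VACUUM is visible only to superusers):

import psycopg2

# Placeholder cluster endpoint and credentials.
conn = psycopg2.connect(
    host="examplecluster.abc123.us-west-2.redshift.amazonaws.com",
    port=5439,
    dbname="dev",
    user="awsuser",
    password="your-password",
)

with conn.cursor() as cur:
    cur.execute(
        "select xid, table_id, status, rows, sortedrows, blocks, eventtime "
        "from stl_vacuum where table_id = %s order by eventtime",
        (108313,),
    )
    for row in cur.fetchall():
        print(row)

conn.close()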
https://docs.aws.amazon.com/redshift/latest/dg/r_STL_VACUUM.html
2019-04-18T19:20:04
CC-MAIN-2019-18
1555578526228.27
[]
docs.aws.amazon.com
inSync Client 5.9.5 for inSync Cloud inSync Client is a lightweight application that manages data backup and allows collaboration with other users. It can enable users to manage their preferences such as folder selection and scheduling. - Quick Start Guides - This guide contains instructions on how you can use the inSync Client to backup data and restore data from your devices. - Install inSync Client - This section describes how to install, configure and start inSync Client. - Backup and Restore - This category provides you information on how to use, configure inSync Client for backup and restore. - Share and Sync - This section provides instructions on how to access and work with inSync Share.
https://docs.druva.com/005_inSync_Client/inSync_Client_5.9.5_for_inSync_Cloud
2019-04-18T18:54:01
CC-MAIN-2019-18
1555578526228.27
[]
docs.druva.com
An Act to amend 48.38 (5) (b), 48.38 (5) (bm) 1., 48.38 (5m) (b), 48.38 (5m) (c) 1., 48.62 (3), 48.625 (2m), 48.64 (1r) and 118.125 (4) of the statutes; Relating to: notice to a school of a permanency review or hearing, notice to a school district of a foster home or group home license or out-of-home care placement, and transfer of pupil records.
https://docs.legis.wisconsin.gov/2017/proposals/reg/sen/bill/sb655
2019-04-18T18:28:17
CC-MAIN-2019-18
1555578526228.27
[]
docs.legis.wisconsin.gov
What's new or changed in Dynamics 365 for Finance and Operations, Enterprise edition 7.3 This topic describes features that are either new or changed in Microsoft Dynamics 365 for Finance and Operations, Enterprise edition 7.3. This version was released in December 2017 and has a build number of 7.3.11971.56116. To learn more about the new features and changes in all of the latest product releases, see What's new or changed and What's new or changed in Dynamics 365 for Retail. Go to the Dynamics 365 Roadmap to find supplemental information about new features and learn more about what new features are in development. Demand forecast entries data entity enabled for Data management framework Data management framework (DMF) is now enabled for the Demand forecast entries data entity (ForecastDemandForecastEntryEntity), which makes it possible to integrate with third-party systems. In addition, each demand forecast entry is uniquely identified by a sequence number (entity field ForecastEntryNumber). For all integrations consider this unique identifier, even if it doesn't exist in the source or third-party system. Demo data in data packages Demo data has been delivered in prior releases as a database with a large number of companies. In addition to that database, we also create data packages using the demo data companies so that you can load your demo data on an empty environment using the data management framework. These data packages will be delivered, for a specific release, as assets in the global shared assets library in Lifecycle Services (LCS). Instructions for loading the data packages will also be provided. The data packages are similar but not identical to existing demo companies and may change over time. The packages are very small and provide a quick way to download the demo data and modify it before you import it into an environment. We will continue to add additional demo data for more companies and module functionality in the future. Dynamics 365 for Project Service Automation to Dynamics 365 for Finance and Operations integration – Phase 1 (private preview) The first phase of the integration from Dynamics 365 for Project Service Automation to Dynamics 365 for Finance and Operations is now available in private preview. The Project Service Automation to Finance and Operations integration solution uses Data Integration to synchronize data across Microsoft Dynamics 365 for Finance and Operations and Dynamics 365 for Project Service Automation instances via the Common Data Service (CDS). The integration templates available with the Data Integration feature enable the flow of projects, project contracts, and project contract lines from Project Service Automation to Finance and Operations. For more information about Common Data Service data integration, see Integrate data into Common Data Service for Apps in the PowerApps documentation. This solution provides direct synchronization in the following areas: - Maintain project contracts in Project Service Automation and sync them directly from Project Service Automation to Finance and Operations. - Create projects in Project Service Automation and sync them directly from Project Service Automation to Finance and Operations. - Maintain project contract lines in Project Service Automation and sync them directly from Project Service Automation to Finance and Operations. - Maintain project contract line milestones in Project Service Automation and sync them directly from Project Service Automation to Finance and Operations. 
To nominate your organization to participate in the private preview, fill out the survey at. Enhanced integration of Prospect to cash between Dynamics 365 for Sales and Dynamics 365 for Finance and Operations Enhancements to Prospect to cash integration between Dynamics 365 for Sales and Dynamics 365 for Finance and Operations, Enterprise edition 7.3 enable direct synchronization in the following processes: - Maintain accounts in Sales and sync them to Finance and Operations as customers. - Maintain contacts in Sales and sync them to Finance and Operations as either customers or contacts for a customer. - Maintain products in Finance and Operations and sync them to Sales. - Create quotes in Sales and sync them to Finance and Operations. - Generate sales orders in Sales for existing products and sync them to Finance and Operations. - Generate, modify, and fulfill sales orders in Finance and Operations and sync changes to Sales. - Generate invoices in Finance and Operations and sync them to Sales. Highlights of these integration enhancements include: - Support data entity overwrite of numbers for quotes, sales orders, and invoices (no need to set number sequence to manual). - Conversion of quote, sales, or invoice line discount from per unit to per line. - Support for sync of tax related charges from Finance and Operations to Sales, such as freight tax. - Additional functionality to support sales order sync from Sales to Finance and Operations. - Additional functionality to support sales order sync from Finance and Operations to Sales. - Sync support for country/region ISO codes on invoice address, exposed on data entities. More information Expense management mobile workspace enhancements This feature provides support for functionality that is available in expense management that was not available in the mobile solution for expense management. This list includes (but is not limited to): - Support for mileage expenses. - Support for intercompany expenses. - Support for per diem expenses. For more information, see Expense management mobile workspace. Financial reporting using Power BI A set of default reports built using a new visualization using Power BI is available for financial reporting. The new financial reporting experience will be embedded within Finance and Operations, giving you a seamless experience of report generation and allowing you to drill into supporting documents. Limited subledger data will be available to provide better ledger to subledger analysis. Default reports, such as a trial balance, balance sheet, and profit and loss, will be shipped out of the box, however, initially no edits will be allowed using Finance and Operations. Edits need to be made using the Power BI desktop. The existing financial reporting using Report Designer is still available and fully supported. To view additional information about Financial reporting using Power BI, watch the following video: Reporting for Dynamics 365 for Finance and Operations. Fixed asset roll forward report The new Fixed assets roll forward report provides you with the detailed fixed asset data needed for period closing, financial statements, and tax reporting in an easy-to-read Excel format. The report, which utilizes the GER framework, shows fixed asset financial details for a specific period. Comprehensive data includes individual asset starting and ending balances along with valuation movements for the period, in addition to any new asset acquisitions and disposals that occurred in the specified timeframe. 
Totals are provided for the fixed asset group and legal entity. Global coverage – Configurable Electronic Reporting Several new features have been added to the Electronic Reporting (ER) framework. - Improvements in supporting data import from incoming documents in XML format – You can now configure a single ER format to import data from incoming XML files with a different format. For example, you can have a different root element, or a root element predefined in the format, or any sequence of nested elements of a particular parent element. This allows customers to reduce the efforts needed to manage the ER solution and to easily adopt existing ER solutions to support new requirements. - Configurable import from incoming documents in CSV format – You can now use ER to configure the import of data from incoming documents in CSV format. Initially designed to support import from incoming files in only XML and TXT format, ER formats can be used for parsing incoming documents in another format – as plain text storing tabular data separated by a special character and embraced by quotation characters. This allows customers to easily adopt specific requirements that can be introduced for payment, settlement, and other processes. - Check prerequisites for the importing ER solution – This functionality allows you to check the compliance of the current application instance with the ER configuration that has been selected for import. All missing application updates (if any) will be indicated, as the list of required KB numbers that significantly reduce the efforts needed to install necessary prerequisites for making the application compatible with the selected for import ER configuration. - Re-usage of application logic by calling methods of application classes – The existing functionality to configure ER expressions for calling methods of application classes with arguments has been improved. With this new functionality, you will be able to configure expressions in which the values of such arguments can be defined dynamically at run-time by using ER data sources. This allows customers to significantly reduce the effort needed to configure ER solutions when the necessary logic is already available in the application's source code. - Improvements in calculation of aggregate functions and data grouping – For some ER data sources of the GROUP BY type, the data grouping and calculation of aggregate functions can be performed at the database (SQL) level. This allows customers to significantly improve the performance of ER reports especially for the transactional data sources that may contain many records. - Records deletion based on information from incoming documents – You can now configure ER formats for data import from incoming documents to only insert new records or update existing ones. You will be able also to configure the logic of existing records' deletion. This gives customers more opportunities to use ER framework for automation of various business processes as configurable ones. - Changes in APIs to access the ER framework – The existing APIs to access ER framework has been changed – most of X++ classes were moved to an external C# assembly while the rest of them were marked as internal. New APIs improves the ER framework's backward compatibility. This allows customers to significantly reduce the effort needed to manage their ER-related code modifications in the future updates of the application. For more information, see Electronic reporting overview. 
Inventory transactions logging (InventSumLogTTS) optimization Inventory transactions logging (InventSumLogTTS) is now turned on for all items and the old records in the log are automatically removed. These changes prevent records from being inserted into the Inventory transactions log table (InventSumLogTTS) in certain conditions. This prevents the table from getting too large and causing performance issues. When planning processes are enabled in a company, which means that Master planning > Setup > Master planning parameters > Disable all planning processes is set to No, inventory transactions logging (InventSumLogTTS) is not turned on in the following cases: - The warehouse on the transaction has the Manual replenishment setup. - The coverage code for the item and its product dimensions is Manual. - The transaction is related to Blocking inventory statuses. The item is disabled for planning via the Product lifecycle state parameter on the item master. When planning processes are disabled in a company, which means that Master planning > Setup > Master planning parameters > Disable all planning processes is set to Yes, inventory transactions logging is never turned on. The Inventory transactions log table will be periodically cleaned up to remove all records that are older than 90 days. The cleanup is triggered automatically by regenerating the master plan. India localization India localization is available with the following features: - Withholding tax, including TDS and TCS. - Fixed assets, including depreciation as per the Companies act, depreciation as per the Income tax act, and special depreciation. - Value-added tax (VAT). - Custom duty and India Goods and Services Tax (GST). For more information, see the "Tax Engine (GTE) – India GST only" section in this topic. Master and reference data entities for warehouse and transportation management Data entity support is now enabled for setup data in the supply chain management area. For the warehouse and transportation areas, enhancements have been made to the default configuration templates. The default configuration templates provide the entities and sequencing that are required to copy configuration data from one instance to another instance in a single step. For more information, see Set up a warehouse by using a warehouse configuration template. Notifications in Point of Sale In today's modern retail environment, the store associates are assigned various tasks such as helping customers, running transactions, performing stock counts, or receiving orders in the store. Point of Sale (POS) client empowers the associates to do these and much more, all in one application. With various tasks to be performed in a day, there is a need to notify the associates when a task requires their attention. To address this requirement, we have created a notification framework that utilizes the notification capability in POS. For this release, the notifications can be enabled for POS operations only, and out of the box, the notifications can only be enabled for the order fulfillment operation. However, the notification framework is extensible and enables developers to easily plug in custom code for any operation. We have also provided the ability to configure role-based notifications that empower the retailers to easily define who should receive what notifications. Optimization advisor The Optimization advisor workspace helps business users follow best practices to optimize the business processes that they own. 
It analyzes business processes, finds optimization opportunities, leverages application data to quantify those opportunities, and then recommends solutions that can help improve both the effectiveness toward the business goal and the performance of the application.

With Optimization advisor, business users can:

- Find proactive, quantified, actionable, and personalized optimization opportunities in one place.
- Take recommended actions.
- Analyze the impact of taking the recommended actions.

With the current release, the advisor presents opportunities to optimize:

- The performance of inventory closing.
- The performance of wave processing and work creation within warehouse management.
- The overall performance of the application, by updating system configuration settings to reflect the actual business processes.
- Master data quality across BOMs, routes, and inventory management.

More information

Partial release of materials and release materials per production operation

Previously, when a production order was released, all the material for the order was released to the warehouse. In some production scenarios, this was not the optimal process. For example, consider the following scenarios:

- Production orders where material is consumed at different operations.
- Long-running production orders where the materials for the total quantity of the production order cannot be released at once, either because not all the material is available or because there is not enough space on the shop floor.

To better support these scenarios, the release process has been enhanced with the following capabilities:

- The ability to limit the release of materials to one or more selected operations.
- The ability to release materials for a partial quantity of finished goods. In process industries, this can be useful for batch orders where not all the materials scheduled for the order can be consumed at once. For example, if you plan to produce 1,000 pieces of finished goods per day or shift, you can limit the release of materials to match the requirement for that quantity.

Payment SDK and Retail POS handlers

Payment SDK – We added support for sending custom error messages from the payment SDK to POS. Previously, POS transformed all messages into the standard error messages defined in POS. Now you can send custom messages and error codes to make payment errors easier to troubleshoot.

POS view extension – Support has been added for extending the POS Picking and receiving screen to add custom columns.

POS APIs – New deposit override APIs have been added to support overriding the deposit through code for scenarios such as layaway.

POS overridable request handlers – We added a new handler in POS that lets extensions override the logic and automate serial number entry. If an extension requires modifying serial number entry in POS, you can override the new Get serial number handler. For example, in POS a dialog box captures the serial number, but you can automate the flow by calling an external web service or generating the serial number automatically, and then overriding this request.

Sample extensions – In addition to the existing samples, we added the following samples to the Retail SDK to help with extensions:

- Retail proxy extension for the store hours and cross-loyalty samples.
- Custom column sample for the Picking and receiving screen.
- POS serial number automation sample.
- Additional POS API samples.
Product configuration enhancements

Arrange attributes in one or multiple columns – To optimize the configuration experience, you can arrange attributes in one or more columns. The maximum number of columns is 10, and the default is 1. This makes it easier to model configuration experiences that have many attributes in one configuration step. You must group the attributes before you set up this field. If some attributes are not added to groups, they are shown in the leftmost column. If all attributes are assigned to attribute groups, the groups are shown in the columns.

Enable Z3 Solver strategy for product configuration – The Z3 Solver strategy is now available for product configuration. In many benchmark tests, this solver has shown a significant performance improvement compared to the Microsoft Solver Foundation (MSF) solver. The new Z3 Solver strategy is assigned per configuration model. Note that the other three solver strategies – Default, Top-down, and Minimal domains first – are all MSF strategies. For more information, see Solver strategy for product configuration.

Import, export, and migrate the product configuration model – You can export and import the product configuration model and model versions by using standard data entities. The configuration model is exported as an XML structure and saved in a single entity, which makes it simple to export and import again.

Product lifecycle state

The product lifecycle state is now available for released products and product variants. You can define any number of product lifecycle states by assigning a state name and description. You can select one lifecycle state as the default state for new released products. Released product variants inherit the product lifecycle state from their released product masters. When you change the lifecycle state on a released product master, you can choose to update all existing variants that have the same original state.

To control and understand where a specific product or product variant is in its lifecycle, it is a best practice in product lifecycle management (PLM) solutions to associate products with a lifecycle state that has a variable state model. This capability has been added to the released product model. The main purpose of this extension is to provide a scalable solution that can exclude obsolete products and product variants, including configurations, from master planning and BOM-level calculation.

Impact on master planning – The product lifecycle state has only one control flag: Is active for planning. By default, this is set to Yes for all product lifecycle states. When the field is set to No, the associated released products or product variants are:

- Excluded from master planning.
- Excluded from BOM-level calculation.

For performance reasons, it is highly recommended that you associate all obsolete released products or product variants with a product lifecycle state that is deactivated for master planning, especially when you work with non-reusable product configuration variants.

Find obsolete released products and product variants – You can run an analysis to find and update obsolete released products or product variants. If you run the analysis in simulation mode, the released products and product variants that are identified as obsolete are displayed on a dedicated page for you to review. The analysis searches transactions and specific master data to find the released products or product variants that have had no demand within a specific period.
New released products that were created within that period can be excluded from the analysis. When the analysis simulation returns the expected result, you can run the analysis and assign a new product lifecycle state to all the products that are identified as obsolete.

Default value during migration, import, and export

- When you migrate from previous releases, the lifecycle state for all released products and product variants is blank.
- When you import released products through a data entity, the default lifecycle state is applied.
- When you import released product variants through a data entity, the product lifecycle state of the released product master is applied.

Note
Setting individual product lifecycle states through the data entities for released products or product variants is not supported.

For more information, see Product lifecycle state.

Retail proxy – New extension point added to support retail proxy extension without inline changes

Previously, you needed to modify the retail proxy project inline to generate the retail proxy that supports your new CRT/RS extension in POS offline mode or e-commerce extensions. Now you can generate the proxy as a completely new extension, without any inline changes. We also added support for multiple ISV/partner extension proxies without any code merge between the extension proxies. This enables a seamless upgrade for proxy extensions. For more information, see Retail Typescript and C# proxies.

Safety stock replenishment enhancements

The constraint that, at any given time, the available inventory for an item must be above the safety stock quantity can introduce delays in shipping sales orders, production orders, and any other real, independent, or dependent demand. For example, for an item with a 5-day lead time, if the item has 10 pieces in inventory and the safety stock level is set to 10, a sales line will be delayed by 5 days while it waits for the delivery of the planned order for the item.

The constraint that the available inventory must always be above the safety stock quantity is now deprioritized if the system determines that it causes delays in the fulfillment of real demand: sales lines, BOM lines, transfer requirements, demand forecast lines, and so on. Otherwise, keeping the available inventory above the safety stock quantity has the same priority as any other demand type. This prevents delays for real transactions and helps prevent over-replenishment and early replenishment of safety stock.

During the coverage phase of master planning, safety stock replenishment is no longer deprioritized, so on-hand inventory can be used before any other demand types. During the delay calculation, new logic goes over the delayed sales lines, BOM line requirements, and all other demand types to determine whether they could be delivered on time if safety stock were used. If the system identifies that it can minimize delays by using safety stock, the sales lines or BOM lines replace their initial coverage with the safety stock, and the system triggers replenishment for the safety stock instead. If the plan or the item is not set up for delay calculation, the safety stock constraint has the same priority as any other demand type. This means that safety stock reserves on-hand and other available inventory before any other demand types.

Coverage calculation for items that expire also has new logic.
At any point in time, the inventory receipt with the latest expiry date is used for safety stock, so that real demand, such as sales lines or BOM lines, can be fulfilled in FEFO (First Expired, First Out) order. For more information, see Safety stock fulfillment for items.

Tax Engine (GTE) – India GST only

The Tax Engine (GTE) is an essential part of the configurable business application experience in Finance and Operations. It is highly customizable and lets a business user, functional consultant, or power user configure tax rules that determine tax applicability, calculation, posting, and settlement, based on legal and business requirements. Tax configuration is more flexible with GTE. It provides an easier extension experience; almost no code change is required for the data provider in the AOT to support extension scenarios. More information

Vendor collaboration

Vendor.

Warehouse – Fulfillment policy added to full or partial batch release of transfer orders

The release to warehouse batch job has been enriched with a fulfillment policy that is equivalent to the existing functionality for sales orders, but now also covers transfer orders. You can batch-release transfer orders according to a fulfillment rate policy. The policy can also be applied from the Manual release to warehouse form, as well as from the load release. It is also possible to override the default policy at the line level when releasing manually.

Warehouse – Tare weight used in containerization is included in the load's gross weight

This enhancement ensures that the tare weight of a container used in containerization is included in the gross weight of the load. Some usability enhancements have also been made to the load header to better relate it to the applied load template. The volume of the container is now represented in greater detail, including gross volume, remaining volume, and the ability to capture the actual volume. These updated values are consumed by the rate route engines, as well as by the transportation tender.
Grant permissions on a data source object (Analysis Services)

APPLIES TO: SQL Server Analysis Services, Azure Analysis Services

Typically, most users of Analysis Services do not require access to the data sources that underlie an Analysis Services project. Users ordinarily just query the data within an Analysis Services database. Server administrators and database administrators have access to data source objects; other users cannot access a data source object unless an administrator grants them permission.

Important
For security reasons, the submission of DMX queries by using an open connection string in the OPENROWSET clause is disabled.

Set Read permissions on a data source

A database role can be granted either no access permissions or Read permissions on a data source object.

- In SQL Server Management Studio, connect to the instance of Analysis Services, expand Roles for the appropriate database in Object Explorer, and then click a database role (or create a new database role).
- In the Data Source Access pane, locate the data source object in the Data Source list, and then select Read in the Access list for the data source. If this option is unavailable, check the General pane to see whether Full Control is selected. Because Full Control already provides this permission, you cannot override permissions on the data source.

Working with the connection string used by a data source object

If the data source is on a remote computer, the two computers must be trusted for impersonation by using Kerberos authentication, or the query will typically fail. See Configure Analysis Services for Kerberos constrained delegation for more information. If the client does not allow impersonation (through the Impersonation Level property in OLE DB and other client components), Analysis Services tries to make an anonymous connection to the underlying data source. Anonymous connections to remote data sources rarely succeed, because most data sources do not accept anonymous connections.

See Also

Data Sources in Multidimensional Models
Connection String Properties (Analysis Services)
Authentication methodologies supported by Analysis Services
Grant custom access to dimension data (Analysis Services)
Grant cube or model permissions (Analysis Services)
Grant custom access to cell data (Analysis Services)
Using PSR-7 Clients

As noted in the previous section, you can substitute your own HTTP client by implementing the ClientInterface. In this section, we'll demonstrate doing so in order to use a client that is PSR-7-capable.

Responses

zend-feed provides a facility to assist with generating a Zend\Feed\Reader\Response from a PSR-7 ResponseInterface via Zend\Feed\Reader\Http\Psr7ResponseDecorator. As such, if you have a PSR-7-capable client, you can pass the response to this decorator and immediately return it from your custom client:

    return new Psr7ResponseDecorator($psr7Response);

We'll do this with our PSR-7 client.

Guzzle

Guzzle is arguably the most popular HTTP client library for PHP, and fully supports PSR-7 since version 5. Let's install it:

    $ composer require guzzlehttp/guzzle

We'll use GuzzleHttp\Client to make our requests to feeds.

Creating a client

From here, we'll create our client. To do this, we'll create a class that:

- implements Zend\Feed\Reader\Http\ClientInterface
- accepts a GuzzleHttp\ClientInterface to its constructor
- uses the Guzzle client to make the request
- returns a zend-feed response decorating the actual PSR-7 response

The code looks like this:

    use GuzzleHttp\Client;
    use GuzzleHttp\ClientInterface as GuzzleClientInterface;
    use Zend\Feed\Reader\Http\ClientInterface as FeedReaderHttpClientInterface;
    use Zend\Feed\Reader\Http\Psr7ResponseDecorator;

    class GuzzleClient implements FeedReaderHttpClientInterface
    {
        /**
         * @var GuzzleClientInterface
         */
        private $client;

        /**
         * @param GuzzleClientInterface|null $client
         */
        public function __construct(GuzzleClientInterface $client = null)
        {
            // Fall back to a default Guzzle client when none is injected.
            $this->client = $client ?: new Client();
        }

        /**
         * {@inheritdoc}
         */
        public function get($uri)
        {
            // Perform the GET request with Guzzle and wrap the PSR-7 response
            // in the decorator that zend-feed expects.
            return new Psr7ResponseDecorator(
                $this->client->request('GET', $uri)
            );
        }
    }

Using the client

In order to use our new client, we need to tell Zend\Feed\Reader\Reader about it:

    Zend\Feed\Reader\Reader::setHttpClient(new GuzzleClient());

From this point forward, this custom client will be used to retrieve feeds.

References

This chapter is based on a blog post by Stefan Gehrig.
Object Data

Motion Blur

Reference – Panel

Cycles Settings

Reference – Panel

Shadow Catcher

Enables the object to only receive shadow rays. Note that shadow catcher objects still interact with other CG objects via indirect light. This feature makes it easy to composite CGI elements into real-world footage.

Performance
Within Datadog, a graph can only contain a set number of points, and as the timeframe over which a metric is viewed increases, aggregation between points occurs. Datadog rolls up data points automatically, based on the in-app metric type: gauge metrics are averaged, whereas count and rate metrics are summed. If you want to control this aggregation yourself – for example, to graph the sum of a metric over one-day buckets – you can graph the metric using a day-long rollup with .rollup(86400). See here for more detailed information about the .rollup() function.
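To make the two default behaviors concrete, here is a small, self-contained Python sketch (not Datadog's internal code) that buckets a series of (timestamp, value) points into fixed intervals and aggregates each bucket either by averaging (gauge-style) or summing (count/rate-style). The sample values and the 600-second interval are made up purely for illustration.

```python
from collections import defaultdict


def rollup(points, interval, method="avg"):
    """Aggregate (timestamp, value) points into fixed-width time buckets.

    Gauges are typically averaged per bucket, while counts and rates are
    summed, mirroring the default rollup behavior described above.
    """
    buckets = defaultdict(list)
    for ts, value in points:
        # Align each point to the start of its bucket.
        buckets[ts - (ts % interval)].append(value)

    if method == "avg":
        aggregate = lambda values: sum(values) / len(values)
    else:
        aggregate = sum

    return sorted((bucket, aggregate(values)) for bucket, values in buckets.items())


# One hour of per-minute points, rolled up into 10-minute (600 s) buckets.
points = [(60 * i, float(i % 5)) for i in range(60)]
print(rollup(points, 600, method="avg"))  # gauge-style: averaged per bucket
print(rollup(points, 600, method="sum"))  # count/rate-style: summed per bucket
```

In graph queries, the analogous choice is made by passing a method and an interval in seconds to .rollup(), for example .rollup(sum, 86400) instead of relying on the default for the metric type.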
To connect your GetResponse account to vooPlayer, you need to find your GetResponse API key and copy it into the corresponding field in vooPlayer.

- The GetResponse API key is located on the API tab, under Account settings => Integrations & API.
- Copy the key.
- In vooPlayer, go to Integrations & APIs and select GetResponse.
- Paste your API key and click the "CREATE INTEGRATION" button.

The next step is to attach your GetResponse list to the Email Capture Gate.
An Act to create 893.16 (5) (d), 893.575 and 895.438 of the statutes; Relating to: creating a civil cause of action for victims of commercial sexual exploitation.

Amendment Histories
Bill Text (PDF)
SB552 ROCP for Committee on Transportation, Public Safety, and Veterans and Military Affairs (PDF)
LC Bill Hearing Materials
Wisconsin Ethics Commission information
2013 Assembly Bill 681 - Rules
Here's an easy step-by-step guide to creating your Amazon SMTP credentials and integrating them into your vooPlayer account.

First, you'll need to log in to your Amazon (SES) account.

- Find the Domain settings and verify your domain as well as your email address. There is a required action you'll have to take on your domain in order for it to be verified.
- Set your email address here.
- Go to SMTP Settings to create new credentials.
- Click to create your user.
- Copy the created SMTP credentials.
- Finally, go into your vooPlayer account and, under SMTP settings, paste your credentials.
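Before pasting the credentials into vooPlayer, it can help to confirm that they actually work. The following is a minimal Python sketch (not part of the vooPlayer setup itself) that sends a test message through the Amazon SES SMTP interface; the endpoint shown assumes the us-east-1 region, and the addresses and credentials are placeholders that must match the domain or email address you verified and the SMTP user you created above.

```python
import smtplib
from email.message import EmailMessage

# Placeholders: use an address on your verified domain and your SMTP credentials.
msg = EmailMessage()
msg["Subject"] = "SES SMTP credential test"
msg["From"] = "sender@your-verified-domain.com"
msg["To"] = "you@your-verified-domain.com"
msg.set_content("If you received this, the SMTP credentials work.")

# Region-specific SES SMTP endpoint (us-east-1 assumed here), submission port 587.
with smtplib.SMTP("email-smtp.us-east-1.amazonaws.com", 587) as server:
    server.starttls()                                # upgrade to TLS before logging in
    server.login("<smtp-username>", "<smtp-password>")
    server.send_message(msg)
```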
When watching a streaming video on a network media player, there is nothing more annoying than constant stopping and starting. This feature is activated by default; you can find it under ADVANCED OPTIONS. This setting allows your video to always pre-buffer before starting. This can be useful in many situations, especially when most of your viewers have low internet speeds and you want to avoid buffering during playback.
Static website hosting in Azure Storage (Preview)

Azure Storage now offers static website hosting (Preview), enabling you to deploy cost-effective and scalable static websites.

How does it work?

Content for your static website is hosted in a special container named "$web". As part of the enablement process, "$web" is created for you if it does not already exist. Content in "$web" can be accessed at the account root using the web endpoint. For example, requesting the root of the web endpoint returns the index document you configured for your website, if a document of that name exists in the root directory of $web.

When uploading content to your website, use the blob storage endpoint. A blob named 'image.jpg' uploaded to the root of "$web" through the blob endpoint can then be viewed in a web browser at the corresponding web endpoint.

Custom domain names

You can use a custom domain to host your web content. To do so, follow the guidance in Configure a custom domain name for your Azure Storage account. To access your website hosted at a custom domain name over HTTPS, see Using the Azure CDN to access blobs with custom domains over HTTPS. Point your CDN to the web endpoint as opposed to the blob endpoint, and remember that CDN configuration doesn't happen instantaneously, so you may need to wait a few minutes before your content is visible.

Pricing and billing

Static website hosting is provided at no additional cost. For more details on prices for Azure Blob Storage, check out the Azure Blob Storage pricing page.

Quickstart

Azure portal

If you haven't already, create a GPv2 storage account. To start hosting your web application, configure the feature in the Azure portal: click "Static website (preview)" under "Settings" in the left navigation bar, click "Enabled", and enter the name of the index document and (optionally) the custom error document path.

Upload your web assets to the "$web" container that was created as part of static website enablement. You can do this directly in the Azure portal, or you can take advantage of Azure Storage Explorer to upload entire directory structures. Make sure to include an index document with the name you configured. In this example, the document's name is "index.html".

Note
The document name is case-sensitive and therefore needs to match the name of the file in storage exactly.

Finally, navigate to your web endpoint to test your website.

Azure CLI

Install the storage preview extension:

    az extension add --name storage-preview

Enable the feature:

    az storage blob service-properties update --account-name <account-name> --static-website --404-document <error-doc-name> --index-document <index-doc-name>

Query for the web endpoint URL:

    az storage account show -n <account-name> -g <resource-group> --query "primaryEndpoints.web" --output tsv

Upload objects to the $web container:

    az storage blob upload-batch -s deploy -d $web --account-name <account-name>
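If you prefer to script the upload outside the Azure CLI, the sketch below shows one possible approach using the azure-storage-blob Python package rather than the tools covered above; the account URL, key, and file name are placeholders, and the explicit content type is an assumption made so that browsers render the page instead of downloading it.

```python
from azure.storage.blob import BlobServiceClient, ContentSettings

# Placeholders: substitute your storage account name and account key.
service = BlobServiceClient(
    account_url="https://<account-name>.blob.core.windows.net",
    credential="<account-key>",
)
web_container = service.get_container_client("$web")

# Upload the index document to the root of the $web container with an
# explicit HTML content type so it is served as a web page.
with open("index.html", "rb") as data:
    web_container.upload_blob(
        name="index.html",
        data=data,
        overwrite=True,
        content_settings=ContentSettings(content_type="text/html"),
    )
```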