Dataset columns: content (string, 0 to 557k chars), url (string, 16 to 1.78k chars), timestamp (timestamp[ms]), dump (string, 9 to 15 chars), segment (string, 13 to 17 chars), image_urls (string, 2 to 55.5k chars), netloc (string, 7 to 77 chars)
Deployment scenarios You can deploy Streaming Analytics on Private Cloud Base depending on the application you want to build. - DataStream application using only Flink: in this case, you need to create a Flink application cluster. - SQL Streaming application using Flink with SQL Stream Builder: in this case, you need to create a Streaming SQL cluster. You can use the following workflow to understand the deployment process:
https://docs.cloudera.com/csa/1.3.0/deployment/topics/csa-deployment-scenario.html
2021-04-10T19:50:35
CC-MAIN-2021-17
1618038057476.6
[array(['../images/csa-deployment.png', None], dtype=object)]
docs.cloudera.com
Changing SELinux states and modes
Permanent changes in SELinux states and modes: as discussed in Introduction to SELinux, SELinux can run in one of two modes when enabled: enforcing or permissive. The following sections show how to permanently change into these modes.
Enabling SELinux
When enabling SELinux on systems that previously had it disabled, follow this procedure to avoid problems such as systems being unable to boot or process failures. Prerequisite: the selinux-policy-targeted, selinux-policy, libselinux-utils, and grubby packages are installed. To check that a particular package is installed: $ rpm -q package_name
If your system has SELinux disabled at the kernel level (this is the recommended way, see Disabling SELinux), change this first. Check if you have the selinux=0 option in your kernel command line: $ cat /proc/cmdline BOOT_IMAGE=... ... selinux=0
Remove the selinux=0 option from the bootloader configuration using grubby: $ sudo grubby --update-kernel ALL --remove-args selinux
The change applies after you restart the system in one of the following steps. Ensure the file system is relabeled on the next boot: $ sudo fixfiles onboot
Enable SELinux in permissive mode. For more information, see Changing to permissive mode. Restart your system: $ reboot
Check for SELinux denial messages: $ sudo ausearch -m AVC,USER_AVC,SELINUX_ERR,USER_SELINUX_ERR -ts recent
If there are no denials, switch to enforcing mode. For more information, see Changing to enforcing mode. To run custom applications with SELinux in enforcing mode, choose one of the following scenarios: run your application in the unconfined_service_t domain, or write a new policy for your application. See the Writing a custom SELinux policy chapter in the RHEL 8 Using SELinux document for more information.
Changing to permissive mode
To permanently change the mode to permissive, set SELinux to permissive in its configuration file and restart the system (a configuration sketch follows at the end of this section).
Changing to enforcing mode
When SELinux is running in enforcing mode, it enforces the SELinux policy and denies access based on SELinux policy rules. In Fedora, enforcing mode is enabled by default when the system is initially installed with SELinux. Check the current SELinux mode by using the getenforce command: $ getenforce Permissive
If the command displays Disabled, follow Enabling SELinux. If it displays Permissive, use the following steps to change the mode back to enforcing. Restart the system: $ reboot
On the next boot, SELinux relabels all files and directories in the system and adds the SELinux context for files and directories that were created when SELinux was disabled.
Disabling SELinux
When SELinux is disabled, the SELinux policy is not loaded at all; it is not enforced and AVC messages are not logged. Therefore, all benefits of running SELinux listed in Benefits of SELinux are lost. Prerequisite: the grubby package is installed: $ rpm -q grubby grubby-version
To permanently disable SELinux, configure your bootloader to add selinux=0 to the kernel command line: $ sudo grubby --update-kernel ALL --args selinux=0
Restart your system: $ reboot
After reboot, confirm that the getenforce command returns Disabled: $ getenforce Disabled
Changing SELinux Modes at Boot Time
On boot, you can set several kernel parameters to change the way SELinux runs:
- enforcing=0 Setting this parameter causes the machine to boot in permissive mode, where problems are only reported; in enforcing mode the same actions might get a denial.
- autorelabel=1 This parameter forces the system to relabel similarly to the following commands: ~]# touch /.autorelabel ~]# reboot If the system labeling contains a large amount of errors, you might need to boot in permissive mode in order that the autorelabel succeeds. For additional SELinux-related kernel boot parameters, such as checkreqprot, see the kernel-parameters.txt file. This file is available in the source package of your Linux kernel (.src.rpm). To download the source package containing the currently used kernel: ~]# dnf download --source kernel
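The permanent mode changes referenced above are made in the SELinux configuration file. A minimal sketch, assuming the conventional /etc/selinux/config location used by Fedora (check your distribution's documentation before editing):

```
# /etc/selinux/config -- sketch only; back up the file before editing.
# SELINUX= accepts one of: enforcing, permissive, disabled
SELINUX=permissive
# SELINUXTYPE= selects the policy to load; "targeted" is the Fedora default.
SELINUXTYPE=targeted
```

After editing, reboot and confirm the active mode with getenforce; once the audit log is clean, change SELINUX=permissive to SELINUX=enforcing and reboot again.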
https://docs.fedoraproject.org/ne/quick-docs/changing-selinux-states-and-modes/
2021-04-10T20:28:58
CC-MAIN-2021-17
1618038057476.6
[]
docs.fedoraproject.org
2.15 Release date: September 26th, 2019 Desktop Recorder A new recorder has been released for ATS. For details on this release and how it affects you, see Which Recorder Should I Use?. Improvements When deleting an action that is being used somewhere else, a warning is now shown in the Confirm Delete pop-up window.
https://docs.mendix.com/releasenotes/add-ons/ats-2.15
2021-04-10T18:20:25
CC-MAIN-2021-17
1618038057476.6
[]
docs.mendix.com
Power¶ The power of the ESP32-S2-HMI-DevKit-1 development board is divided into a 5 V power domain and a 3.3 V power domain, so as to reduce power consumption, improve power efficiency and support battery charging. Part of the power domain can be controlled by software whereas the other part is configured as permanently enabled in hardware design. To reduce current consumption, the preloaded firmware will power off all controlled power domains and put all ICs to low-power mode. 3.3 V Power Domain¶ Most of the ICs and modules are powered by the 3.3 V power domain, which can be divided into an uncontrolled 3.3 V power domain and a controlled 3.3 V power domain. The uncontrolled 3.3 V power domain cannot be powered off via software, and provides power for the Buck circuit. When there is a power supply from USB, this power domain will obtain power from the 5 V input through the USB cable; when USB is disconnected, it will obtain 3.6 ~ 4.2 V power from the lithium battery. This power domain mainly provides power for the ESP32-S2-WROVER module and other devices which can enter low-power mode via software control. The controlled 3.3 V power domain comes from the uncontrolled 3.3 V power domain and is turned on/off via a PMOS control switch, which is connected to the P4 pin of the IO expander. This power domain mainly provides power for ICs with higher static power consumption and cannot enter low-power mode. 5 V Power Domain¶ The 5 V power domain of the development board provides power for the audio amplifier and the TWAI® transceiver. It obtains power from the following resources: The USB port The power input from external 5 V power port The power passing through the Booster circuit from the lithium battery The power obtained from USB and the external 5 V power input supplies power for all devices (except CP2102N) that require 5 V power and cannot be disconnected by software. When obtaining power from the lithium battery, the EN pin level of the Booster IC can be controlled via the P5 pin of the IO expander to enable 5 V power in high level. The power input through the USB port on the bottom of the board is split into two lines: one provides power for CP2102N while the other becomes USB_5V after passing through a diode. The CP2102N will only be powered up when this USB port is connected, since it only needs to be in operation when the PC is connected. Any 5 V power input will cause the Booster IC to be powered off and charge the on-board lithium battery via the charging IC. Power Dependencies¶ The following functions depend on the 5 V power domain: TWAI® (selects available power supply from USB 5 V or Booster 5 V automatically) Audio amplifier (gets power supply from USB 5 V, if it fails, will try from the battery) 5 V power output connector (the same as TWAI®) The following functions depend on the controlled 3.3 V power domain: Micro-SD card Microphone and its bias circuit, and operational amplifier Display and touch function WS2812 RGB LED and IR LED IR LED Power State¶ When the development board is connected via the USB cable, the 5 V power domain is powered on automatically and the charging IC outputs voltage to supply power for the battery. In this case, the controlled 3.3 V power domain is controlled by the P4 pin of the IO expander. When the development board is powered by the battery, the controlled 3.3 V power domain is controlled by the P4 pin of the IO expander while the 5 V power domain is controlled by the P5 pin of the IO expander, and the charging IC will not work.
https://docs.espressif.com/projects/espressif-esp-dev-kits/en/latest/esp32s2/esp32-s2-hmi-devkit-1/reference/power.html
2022-08-08T03:47:02
CC-MAIN-2022-33
1659882570765.6
[]
docs.espressif.com
Use this option if you have a properly-formatted .dat file containing the templates. You can generate the file by exporting the templates from either the server you are currently accessing or from another server. A message appears, informing you if the import was successful. If a template to be imported already exists, it will be skipped.
https://docs.trendmicro.com/en-us/enterprise/trend-micro-apex-central-2019-online-help/policies/policy-resources_001/data-loss-prevention_002/data-loss-prevention_003/customized-dlp-templ/importing-templates.aspx
2022-08-08T03:35:33
CC-MAIN-2022-33
1659882570765.6
[]
docs.trendmicro.com
# How to set up Elementor in a Ymir project Elementor is a popular free WordPress page builder that you can use to build beautiful WordPress sites. Ymir makes it easy to support Elementor in your serverless WordPress project. This guide will cover the changes that you need to make. Automatic configuration: You can have Ymir configure your project automatically for you by using the configure command. # Project configuration changes Below is a sample environment configuration for Elementor. You need to replace environment with the correct environment name.
environments:
  environment:
    cdn:
      excluded_paths:
        - /uploads/elementor/*
https://docs.ymirapp.com/compatibility/elementor.html
2022-08-08T04:59:46
CC-MAIN-2022-33
1659882570765.6
[]
docs.ymirapp.com
Setting Up Invoice Copies COUNTERPOINT™ provides a feature that allows you to set up the capture of a customer's invoices for reprinting on plain paper at month end. If there are customers set up with this feature, at the end of the statement run you will be asked to put plain paper on the printer, and all invoices created during the statement period will be reprinted on plain paper. This feature is turned on using the following procedure. Bring up the customer A/R record in change mode using the following menu path: Accounts Receivable… -> 1. Data Maintenance… -> 1. Customers. This will bring up the following screen. The field that controls this feature is Field 56. As the prompt shows, entering a 1 in this field will cause the system to capture an image of all invoices done during the statement period for reprinting on plain paper after the completion of the statement printouts. By entering a 2 in this field, you can also cause the printer to produce 2 copies of EVERY invoice generated for this customer whenever they make a purchase. If you wish to set up the multiple copies at time of purchase, you must also set one of the fields in 36 to "M". Once set up with a "1" in Field 56, this customer's invoices will be captured from this point forward for month-end plain paper reprinting.
https://docs.amscomp.com/books/counterpoint/page/setting-up-invoice-copies
2022-08-08T05:00:03
CC-MAIN-2022-33
1659882570765.6
[]
docs.amscomp.com
User Guide¶ This section of our documentation is dedicated to show you the way around pretix if you are an event organizer wanting to use pretix to sell tickets. - Organizer accounts and teams - Creating an event - Configuring an event - Product structure guide - Terminology - Use case: Multiple price levels - Use case: Early-bird tiers based on dates - Use case: Early-bird tiers based on ticket numbers - Use case: Up-selling of ticket extras - Use case: Conference with workshops - Use case: Discounted packages - Use case: Group discounts - Use case: Restricted audience - Use case: Time slots - Use case: Season tickets - Use case: Mixed taxation - Embeddable Widget - Gift cards - FAQ and Troubleshooting - Markdown Guide - Glossary
https://docs.pretix.eu/en/latest/user/index.html
2022-08-08T05:00:56
CC-MAIN-2022-33
1659882570765.6
[]
docs.pretix.eu
Interface AuthenticationTrustResolver - All Known Implementing Classes: AuthenticationTrustResolverImpl. public interface AuthenticationTrustResolver: evaluates Authentication tokens. Method Detail. isAnonymous: boolean isAnonymous(Authentication authentication). Indicates whether the passed Authentication token represents an anonymous principal. - Parameters: authentication - the token to test (may be null, in which case the method will always return false) - Returns: true if the passed authentication token represented an anonymous principal, false otherwise. isRememberMe: boolean isRememberMe(Authentication authentication). Indicates whether the passed Authentication token represents a principal authenticated using a remember-me token. - Parameters: authentication - the token to test (may be null, in which case the method will always return false) - Returns: true if the passed authentication token represented a principal authenticated using a remember-me token, false otherwise.
https://docs.spring.io/spring-security/site/docs/current/api/org/springframework/security/authentication/AuthenticationTrustResolver.html
2022-08-08T05:27:13
CC-MAIN-2022-33
1659882570765.6
[]
docs.spring.io
Ds\Sequence::rotate() Return values: No value is returned. The sequence of the current instance will be rotated in place. Example #1 Ds\Sequence::rotate() example
<?php
$sequence = new \Ds\Vector(["a", "b", "c", "d"]);
$sequence->rotate(1); // "a" is shifted, then pushed.
print_r($sequence);
$sequence->rotate(2); // "b" and "c" are both shifted, then pushed.
print_r($sequence);
?>
The above example will output something similar to:
Ds\Vector Object ( [0] => b [1] => c [2] => d [3] => a )
Ds\Vector Object ( [0] => d [1] => a [2] => b [3] => c )
http://docs.php.net/manual/es/ds-sequence.rotate.php
2022-08-08T04:17:25
CC-MAIN-2022-33
1659882570765.6
[]
docs.php.net
Fast boot from Deep Sleep¶ The bootloader has the CONFIG_BOOTLOADER_SKIP_VALIDATE_IN_DEEP_SLEEP option, which reduces the wake-up time from deep sleep (useful for reducing power consumption). This option is available when the…
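As a brief illustration (the option name is taken verbatim from the text above; the exact menuconfig location may differ between ESP-IDF versions), the option is enabled through the project configuration:

```
# Project sdkconfig entry, normally set via "idf.py menuconfig" under the bootloader settings.
CONFIG_BOOTLOADER_SKIP_VALIDATE_IN_DEEP_SLEEP=y
```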
https://docs.espressif.com/projects/esp-idf/en/v4.2.3/esp32/api-guides/bootloader.html
2022-08-08T03:48:22
CC-MAIN-2022-33
1659882570765.6
[]
docs.espressif.com
Introduction Proofable is a general purpose proving framework for certifying digital assets to public blockchains. Overall, it consists of: CLI (proofable-cli): the command-line interface (CLI) for the API Service (proofable-api). At the moment, it supports proving a file system to a blockchain. API Service (proofable-api): the general purpose proving service that is fast and effective. It provides a set of APIs to manipulate trie structures and generate blockchain proofs for any digital asset. A trie is a dictionary of ordered key-values that can be built incrementally, whose root hash at any given time can be derived efficiently. Once the root hash is proven to a blockchain, every key-value is proven, and so is the digital asset stored in that key-value. Anchor Service (provendb-anchor): the service continuously anchors hashes to blockchains, which is similar to what Chainpoint does, but with much better performance and flexibility. It supports multiple anchor types and proof formats. Digital signing can also be done at the Merkle root level. It is consumed by proofable-api and is not directly publicly accessible at the moment. Links:
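The root-hash idea described above can be illustrated with a deliberately naive sketch. This is not the proofable-api interface, just a conceptual Python example: any change to any key-value changes the root, so anchoring the root commits to every entry at once.

```python
import hashlib

def root_hash(entries: dict) -> bytes:
    """Derive a single hash over an ordered key-value dictionary.

    Conceptual illustration only: a real trie (as used by proofable-api)
    supports incremental building and per-key inclusion proofs, which this
    simple fold over sorted keys does not.
    """
    h = hashlib.sha256()
    for key in sorted(entries):
        h.update(hashlib.sha256(key).digest())
        h.update(hashlib.sha256(entries[key]).digest())
    return h.digest()

# Tampering with any single value changes the root hash.
before = root_hash({b"file-a": b"hash-of-a", b"file-b": b"hash-of-b"})
after = root_hash({b"file-a": b"hash-of-a", b"file-b": b"tampered"})
assert before != after
```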
https://docs.proofable.io/
2022-08-08T05:30:56
CC-MAIN-2022-33
1659882570765.6
[]
docs.proofable.io
QTM Connect Live Link lets you stream skeletons, 3D marker data and rigid bodies (6DOF) in real time from a Qualisys motion capture system to Unreal Engine. Add 3D marker positions to your Actor components, or let rigid body position and orientation manipulate them. Mapping skeletons to a mesh is handled by Unreal Engine's Live Link plugin (a common interface for streaming and consuming animation data from external sources). QTM Connect for Unreal is open source and built upon our open source C++ SDK. QTM Connect for Unreal is available on GitHub. Features: * Streaming of real-time motion capture data for skeletons, 3D markers and rigid bodies (6DOF) from Qualisys Track Manager (QTM) to Unreal Engine via Live Link * Streaming of recorded motion capture data for skeletons, 3D markers and rigid bodies (6DOF) from Qualisys Track Manager (QTM) to Unreal Engine via Live Link * Skeleton retargeting of streamed motion capture data using a retarget asset Code Modules: * QTMConnectLiveLink (Runtime) * QTMConnectLiveLinkEditor (Editor) Number of Blueprints: 0 Number of C++ Classes: 8 Network Replicated: No Supported Development Platforms: Win64 Supported Target Build Platforms: Win64 Documentation: Source code and instructions on GitHub, instruction videos on YouTube Example Project: Source code and example project on GitHub Important/Additional Notes: Requires Qualisys Track Manager (QTM) for streaming of data Unreal 5 Early Access: A prebuilt version of Qualisys QTM Connect Live Link for UE5.0EA is available under Releases on the Qualisys GitHub
https://docs.unrealengine.com/marketplace/zh-CN/product/qualisys-qtm-connect-live-link?lang=zh-CN
2022-08-08T04:29:30
CC-MAIN-2022-33
1659882570765.6
[]
docs.unrealengine.com
Uncoded Medications. There is a report available from the Reports sidemenu tab. Found in the Medications/Allergies/Scripts bucket of reports, there is a report named Uncoded Meds. This report can be filtered by a start date and end date to limit results. The report returns a list of uncoded prescriptions (free-typed medications a user prescribed instead of using the FDB autocomplete list of coded medications available to prescribe) created within a specified date range. The practice should review this report periodically and address issues found. This report could help the practice be more proactive at identifying and investigating these issues and at training prescribers to choose the correct coded medications.
https://docs.webchartnow.com/functions/reports/uncoded-medications-report.html
2022-08-08T04:32:58
CC-MAIN-2022-33
1659882570765.6
[array(['uncoded-medications-report.images/image1.png', None], dtype=object) array(['uncoded-medications-report.images/image2.png', None], dtype=object)]
docs.webchartnow.com
So much has been learned and written about how to find and fix storage related performance problems that I wondered if there’s anything new that can be done. There are tons of best practices, monitoring products, and professional services all dedicated to solving this often quixotic burden. But at the end of the day, after all the research, analysis, meetings, escalations, finger-pointing, begging, threats, firings, and money thrown at it, there remains one final act that throws more terror into the hearts of men and women than anything else. But before we go there, let’s briefly recall the trip to the solution. It usually starts when an application user complains of an abnormally slow response time. IT mobilizes to uncover the problem and the search for the guilty begins. Everyone then proceeds to try to prove the innocence of their domain. Vendors are brought in to prove that it’s someone else’s fault. Meetings are held, fingers are pointed, reports are reviewed, but eventually, a solution is suggested. It could be a configuration change, a hardware upgrade, all new gear, or even new application software. The decision is made, the money is spent, and everyone holds their breaths and crosses their fingers as the change goes “live” in production. The moment of terror. We’ve all been there. Sometimes the problem goes away, sometimes it doesn’t, and sometimes, the change makes it worse. More often than not, the whole process is repeated. So what’s the missing link? There’s no way to know if the change is actually the fix. Or is there? What if you could replicate the problem workload in a lab environment, allowing you to test the change(s) in a controlled, pre-production manner? Well, you can. You can employ the Load DynamiX Workload Sensor to capture the problem application workload, and replay it in the lab with the Load DynamiX Workload Generation Appliance. You can test your suspected fixes until you actually find the real solution. Why hasn’t everyone been doing this all along? They couldn’t. Mostly, it’s because the synthetic workload models weren’t accurate enough for troubleshooting. They were good enough for a lot of other purposes, but they weren’t based on incredibly granular, actual production I/O – essentially a real-time capture of your production workload profiles. What’s new is the Load DynamiX Workload Sensor, just announced and shipped in December 2015. The Workload Sensor is used to build the application workload model from the actual production workload. No guesswork. No terror. To learn more, click here, or better yet, talk with a Load DynamiX representative and ask to see a demo. by Jim Bahn Product Marketing
http://docs.virtualinstruments.com/blog/finding-the-missing-link-in-storage-performance-troubleshooting/
2018-07-15T21:12:54
CC-MAIN-2018-30
1531676588972.37
[]
docs.virtualinstruments.com
Portfolio Management With the ServiceNow® Portfolio Management application, you can create portfolios which are collections of related programs, projects, and demands. You can then perform financial planning and monitor the status and progress of these portfolios. You must have the it_portfolio_manager role to manage a portfolio. The Portfolio Management application provides these capabilities to the portfolio manager: Create a portfolio by adding related programs, projects, and demands. Perform annual portfolio planning by selecting demands, projects, and programs. Track the progress and status of all the programs, projects, and demands that are part of the portfolio. You can track the costs, resources, schedules, risks, and issues. The following diagram provides an overview of Portfolio Management. Figure 1. Overview of Portfolio Management Features Portfolio Management also provides the following features: Portfolio workbench The portfolio workbench provides a central location to view and monitor the progress of the program, the projects, and demands that are part of the portfolio. You can also perform annual portfolio planning, create budget and forecast plans for the portfolio. Annual planning for the portfolio The annual planning wizard is available in the portfolio workbench. The annual planning process comprises these steps: Determine the overall cost requirements for the portfolio and set the target. Select the demands and projects for a fiscal year based on the budget target and resource availability. Create and promote a budget plan. If required, re-promote the budget plan by performing a what-if analysis by adding or removing projects and demands before the budget is finalized. Budget forecasting of the portfolio Using the portfolio workbench, the portfolio managers can re-estimate (forecast) the portfolio budget for future periods based on the actual cost and changed project requirements. Tracking of the portfolio Once the financial planning is complete, portfolio workbench allows you to track the progress of a portfolio. This tracking includes the actual amount being spent against the budget, actual hours spent, risks, and issues. Portfolio Manager dashboard The Portfolio Manager dashboard provides a central location to generate different graphical reports of the portfolio and portfolio financials..
https://docs.servicenow.com/bundle/istanbul-it-business-management/page/product/project-management/concept/c_PortfolioManagement.html
2018-07-15T21:27:06
CC-MAIN-2018-30
1531676588972.37
[]
docs.servicenow.com
After connecting to a View server, you can connect to your remote desktops and applications. Procedure - Either open a terminal window and enter vmware-view, or search the applications for VMware Horizon Client and double-click the icon. - Double-click a remote desktop or application to connect. The client window appears.
https://docs.vmware.com/en/VMware-Horizon-Client-for-Linux/4.1/com.vmware.horizon-client.linux-41.doc/GUID-40103CDD-3FA9-4E54-BF32-0FD21B647C1B.html
2018-07-15T21:36:48
CC-MAIN-2018-30
1531676588972.37
[]
docs.vmware.com
How to use Twilio for voice and SMS capabilities from Azure. Twilio Voice allows your applications to make and receive phone calls. Twilio SMS enables your applications to send and receive SMS messages. Twilio Client allows you to make VoIP calls from any phone, tablet, or browser and supports WebRTC. Twilio Pricing and Special Offers Azure customers receive a special offer of complimentary Twilio credit when they upgrade their Twilio account. Create a Twilio Account When you're ready to get a Twilio account, sign up at Try Twilio. You can start with a free account, and upgrade your account later. When you sign up for a Twilio account, you'll receive an account SID and an authentication token, both of which are needed to make Twilio API calls. Create an Azure Application An Azure application that hosts a Twilio enabled application is no different from any other Azure application. You add the Twilio .NET library and configure the role to use the Twilio .NET libraries. For information on creating an initial Azure project, see Creating an Azure project with Visual Studio. Configure Your Application to use Twilio Libraries can be installed using the NuGet package manager extension available for Visual Studio 2010 up to 2015. The source code is hosted on GitHub. Note To install the latest version of NuGet, you must first uninstall the loaded version using the Visual Studio Extension Manager. To do so, you must run Visual Studio as administrator. Otherwise, the Uninstall button is disabled. To add the Twilio libraries to your Visual Studio project: - Open your solution in Visual Studio. - Right-click References. - Click Manage NuGet Packages... - Click Online. - In the search online box, type twilio. - Click Install on the Twilio package. How to: Make an outgoing call The following shows how to make an outgoing call using the CallResource class. This code also uses a Twilio-provided site to return the Twilio Markup Language (TwiML) response. Substitute your values for the to and from phone numbers, and ensure that you verify the from phone number for your Twilio account before running the code.
// Use your account SID and authentication token instead of these placeholders.
const string accountSID = "your_twilio_account_sid";
const string authToken = "your_twilio_auth_token";

// Initialize the TwilioClient.
TwilioClient.Init(accountSID, authToken);

// Use the Twilio-provided site for the TwiML response.
var url = ""; // Set this to the Twilio-provided TwiML URL (omitted in the original snippet).
url = $"{url}?Message%5B0%5D=Hello%20World";

// Set the call From, To, and URL values to use for the call.
// This sample uses the sandbox number provided by
// Twilio to make the call.
var call = CallResource.Create(
    to: new PhoneNumber("+NNNNNNNNNN"),
    from: new PhoneNumber("NNNNNNNNNN"),
    url: new Uri(url));
For more information about the parameters passed in to the CallResource.Create method, see the Twilio API documentation. As mentioned, this code uses a Twilio-provided site to return the TwiML response. You could instead use your own site to provide the TwiML response. For more information, see How to: Provide TwiML responses from your own website. How to: Send an SMS message The following shows how to send an SMS message using the MessageResource class. The from number is provided by Twilio for trial accounts to send SMS messages. The to number must be verified for your Twilio account before you run the code.
// Initialize the TwilioClient as shown above.
TwilioClient.Init(accountSID, authToken);

try
{
    // Send an SMS message.
    var message = MessageResource.Create(
        to: new PhoneNumber("+12069419717"),
        from: new PhoneNumber("+14155992671"),
        body: "This is my SMS message.");
}
catch (TwilioException ex)
{
    // An exception occurred making the REST call
    Console.WriteLine(ex.Message);
}
How to: Provide TwiML Responses from your own website When your application initiates a call to the Twilio API - for example, via the CallResource.Create method - Twilio sends your request to a URL that is expected to return a TwiML response.
The example in How to: Make an outgoing call uses the Twilio-provided URL to return the response. Note While TwiML is designed for use by web services, you can view the TwiML in your browser; for example, you can view an empty <Response> element, or a <Response> element that contains a <Say> element. Instead of relying on the Twilio-provided URL, you can create your own URL site that returns HTTP responses. You can create the site in any language that returns HTTP responses. This topic assumes you'll be hosting the URL from an ASP.NET generic handler. The following ASP.NET handler crafts a TwiML response that says Hello World on the call.
using System.Text;
using System.Web;

namespace WebRole1
{
    /// <summary>
    /// Summary description for Handler1
    /// </summary>
    public class Handler1 : IHttpHandler
    {
        public void ProcessRequest(HttpContext context)
        {
            const string twiMLResponse = "<Response><Say>Hello World.</Say></Response>";

            context.Response.Clear();
            context.Response.ContentType = "text/xml";
            context.Response.ContentEncoding = Encoding.UTF8;
            context.Response.Write(twiMLResponse);
            context.Response.End();
        }

        public bool IsReusable
        {
            get { return false; }
        }
    }
}
As you can see from the example above, the TwiML response is simply an XML document. The Twilio.TwiML library contains classes that will generate TwiML for you. The example below produces the equivalent response as shown above, but uses the VoiceResponse class.
using System.Web;
using Twilio.TwiML;

namespace WebRole1
{
    /// <summary>
    /// Summary description for Handler1
    /// </summary>
    public class Handler1 : IHttpHandler
    {
        public void ProcessRequest(HttpContext context)
        {
            var twiml = new VoiceResponse();
            twiml.Say("Hello World.");

            context.Response.Clear();
            context.Response.ContentType = "text/xml";
            context.Response.Write(twiml.ToString());
            context.Response.End();
        }

        public bool IsReusable
        {
            get { return false; }
        }
    }
}
For more information about TwiML, see the Twilio documentation. Once you have set up a way to provide TwiML responses, you can pass that URL to the CallResource.Create method. For example, if you have a web application named MyTwiML deployed to an Azure cloud service, and the name of your ASP.NET handler is mytwiml.ashx, the URL can be passed to CallResource.Create as shown in the following code sample:
// This sample uses the sandbox number provided by Twilio to make the call.
// Place the call.
var call = CallResource.Create(
    to: new PhoneNumber("+NNNNNNNNNN"),
    from: new PhoneNumber("NNNNNNNNNN"),
    url: new Uri("http://<your_hosted_service>.cloudapp.net/MyTwiML/mytwiml.ashx"));
For additional information about using Twilio on Azure with ASP.NET, see How to make a phone call using Twilio in a web role.
https://docs.microsoft.com/en-us/azure/twilio-dotnet-how-to-use-for-voice-sms
2018-07-15T21:26:25
CC-MAIN-2018-30
1531676588972.37
[]
docs.microsoft.com
If you are experiencing WMI connection problems, you can use wbemtest, a WMI testing tool included with Windows, to test the connection independently of Octopus. Part 1: Test connectivity To start wbemtest, enter wbemtest at the Start > Run command box. In wbemtest, click the "Connect" button. Replace root\cimv2 with \\COMPUTERNAME\root\cimv2 (where COMPUTERNAME is the name or IP of the host you want to connect to). - Click "Connect". If you receive an error message at this stage, it is because the connection is not working. The error message might be useful for troubleshooting. Part 2: Test information retrieval Click the "Query" button. Enter this query: select * from win32_operatingsystem Click "Apply". The test succeeded if you get a dialog box called "Query Result". The test failed if you get a dialog called "Error". The error message might be useful for troubleshooting. To create additional queries, use the following format: - select "name of properties to inspect" from "name of class to inspect". - Example: To find the IP address field of the NetworkAdapterConfiguration class, use the following query: - select IPAddress from Win32_NetworkAdapterConfiguration Requests used to extract information to transfer to Octopus: - select * from Win32_BIOS - select * from Win32_ComputerSystem - select * from Win32_OperatingSystem - select * from Win32_Processor - select * from Win32_VideoController - select * from Win32_NetworkAdapterConfiguration - select * from Win32_NetworkAdapter - select * from Win32_Desktop - select * from Win32_Printer - select * from Win32_DiskDrive
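If you prefer to script the same check instead of using the wbemtest GUI, something like the following can work from a Windows machine with the third-party wmi package installed. The connection parameters are assumptions; adjust the host name and credentials to your environment.

```python
# Requires Windows and the third-party "wmi" package (pip install wmi).
import wmi

connection = wmi.WMI(
    computer="COMPUTERNAME",       # name or IP of the remote host
    user=r"DOMAIN\administrator",  # account allowed to query WMI remotely
    password="password",
)

# Same query as the wbemtest example above.
for os_info in connection.query("select * from Win32_OperatingSystem"):
    print(os_info.Caption, os_info.Version)
```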
https://docs.octopus-itsm.com/en/articles/wmiupdater-testing-outside-octopus
2018-07-15T21:08:23
CC-MAIN-2018-30
1531676588972.37
[]
docs.octopus-itsm.com
Add Live Feed to your homepage You can add Live Feed to your own homepage or to a global homepage. Navigate to a homepage. Click the add content icon in the top left corner of the homepage. Select Live Feed in the left panel. On the bottom of the window, click Add here in the appropriate layout position, then close the window. Figure 1. Homepage Note: Administrators can add Live Feed to a global homepage to make it available for all homepage users by default. Users with any role can add Live Feed to their homepage; however, administrators can restrict this ability.
https://docs.servicenow.com/bundle/istanbul-servicenow-platform/page/use/live-feed/task/t_AddLiveFeedToYourHomepage.html
2018-07-15T21:29:04
CC-MAIN-2018-30
1531676588972.37
[]
docs.servicenow.com
Use RESTful service URLs¶ Under REST principles, a URL identifies a resource. The following URL design patterns are considered REST best practices: - URLs should include nouns, not verbs. - Use plural nouns only for consistency (no singular nouns). - Use HTTP methods (HTTP/1.1) to operate on these resources: - Use HTTP response status codes to represent the outcome of operations on resources. Should Agencies should consistently apply RESTful design patterns for API URLs. Versioning¶ Example of an API URL that contains a version number: GET /v1/path/to/resource HTTP/1.1 Host: Accept: application/json, text/javascript May An API URL may contain a version number. See versioning for more details. Should If an API URL does not contain a version number (anonymous version), then it should be understood that it always refers to the latest version. Should Not Anonymous API versions should not be considered stable, because the latest version changes with each release. Formats¶ Allow users to request formats like JSON or XML, for example: URL Depth¶ The resource/identifier/resource URL pattern is sufficient for full attribute visibility between any resources. Therefore, this URL depth is usually sufficient to support any arbitrary resource graph. If your URL design goes deeper than resource/identifier/resource, it may be evidence that the granularity of your API is too coarse. Recommended Avoid URL designs deeper than resource/identifier/resource. May If your API has URLs deeper than resource/identifier/resource, consider revisiting the granularity of your API design. API Payload formats¶ To interact with an API, a consumer needs to be able to produce and consume messages in the appropriate format. For a successful interaction both parties would need to be able to process (parse and generate) these messages. Should Not Agency APIs should not produce or consume messages in a proprietary format. This is because open formats maximise interoperability and reduce costs and risks associated with API utilisation. May Agency APIs may support multiple (open) payload formats. For example, it is not unusual for an API endpoint to support both JSON and XML formats. API Payload format encoding¶ To interact with an API, the consumer needs to know how the payload is encoded. This is true regardless of how many encoding formats the endpoint supports. Should Not Agencies should not rely on documentation alone to inform consumers about payload encoding. This is generally poor affordance. The three patterns of payload format encoding most frequently found in the wild are: - HTTP headers (e.g. Content-Type: and Accept:) - GET parameters (e.g. &format=json) - resource label (e.g. /foo.json) Using HTTP headers to specify payload format can be convenient; however, unfortunately not all clients handle headers consistently. Using HTTP headers alone will create issues for buggy clients. Using GET parameters to specify format is another common pattern for specifying the encoding of API payloads. This results in slightly longer URLs than the resource label technique, and can occasionally create problems with the caching behavior of some proxy servers. Resource label specification of API payload format, such as /foo/{id}.json, is functionally equivalent to GET parameter encoding but without the (admittedly rare) proxy caching issues. Should Agency APIs should consider supplementing URL-based format specifications with HTTP header based format specification (e.g. Content-Type: and Accept:).
Should Agency APIs should consider using resource labels to indicate payload format, e.g. /foo/{id}.json. Should If GET parameter based payload format specification is chosen, the potential impact of proxy caching and URL length issues should be evaluated. Good RESTful URL examples¶ List of magazines: GET /api/v1/magazines.json HTTP/1.1 Host: Accept: application/json, text/javascript Filtering and sorting are server-side operations on resources: GET /api/v1/magazines.json?year=2011&sort=desc HTTP/1.1 Host: Accept: application/json, text/javascript A single magazine in JSON format: GET /api/v1/magazines/1234.json HTTP/1.1 Host: Accept: application/json, text/javascript All articles in (or belonging to) this magazine: GET /api/v1/magazines/1234/articles.json HTTP/1.1 Host: Accept: application/json, text/javascript All articles in this magazine in XML format: GET /api/v1/magazines/1234/articles.xml HTTP/1.1 Host: Accept: application/json, text/javascript Specify query parameters in a comma separated list: GET /api/v1/magazines/1234.json?fields=title,subtitle,date HTTP/1.1 Host: Accept: application/json, text/javascript Add a new article to a particular magazine: Bad RESTful URL examples¶ Non-plural noun: GET /magazine HTTP/1.1 Host: Accept: application/json, text/javascript GET /magazine/1234 HTTP/1.1 Host: Accept: application/json, text/javascript Verb in URL: GET /magazine/1234/create HTTP/1.1 Host: Accept: application/json, text/javascript Filter outside of query string: GET /magazines/2011/desc HTTP/1.1 Host: Accept: application/json, text/javascript
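As a client-side illustration of the conventions above (the host name and endpoint are placeholders, not a real API), a request that combines a versioned, plural-noun resource URL with both header-based and URL-based format selection might look like this:

```python
import requests

# Hypothetical endpoint following the conventions above:
# versioned path, plural noun, resource label for the payload format.
url = "https://api.example.gov/api/v1/magazines/1234.json"

response = requests.get(
    url,
    headers={"Accept": "application/json"},    # header-based format hint
    params={"fields": "title,subtitle,date"},  # sparse fieldset via the query string
    timeout=10,
)
response.raise_for_status()
magazine = response.json()
```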
http://apiguide.readthedocs.io/en/latest/build_and_publish/use_RESTful_urls.html
2018-07-15T21:08:06
CC-MAIN-2018-30
1531676588972.37
[]
apiguide.readthedocs.io
Refunding Orders This article applies to Contextual Commerce. (Looking for Classic Commerce documentation?) To refund a customer's order, log in and, if needed, choose your Dashboard. A search box will appear. Order Search Use the Order Search to find the order you want to refund. Enter the order information and then press Enter or Return on your keyboard to execute the search. Order Search Criteria Order and subscription searches are case insensitive, and you can search for an order by entering any of the following: - An exact order reference number (e.g. FUR161028-6610-37108) - A customer's full email address (e.g. [email protected]) - A customer's email domain name, beginning with the @ sign (e.g. @abc.com) - A customer's exact last name (e.g. anderson) Note: You cannot search by a customer's full name. - The beginning letters (at least four letters, followed by *) of a customer's last name (e.g. ande*) - The exact company name (e.g. fastspring) - The beginning letters (at least four letters, followed by *) of a company name (e.g. fast*) - For credit card orders, the last 4 or 5 digits of the credit card number (e.g. 54321 or 4321) Refund Process It takes under a minute to refund an order. After you have found the order you want to refund, click Options and then select Return / Refund Order from the drop-down menu. Select the Return Type. - A Full Return will refund the entire order's purchase price. - A Partial Amount Refund will refund a portion of the original price of an order. This is most commonly used when customers notify you that they forgot to enter a coupon code during the order process. Note: This amount does not include any tax, which will be calculated automatically (based on the value entered) and also refunded. Complete the Reason and Notification section of the refund process. Choose the Reason Type from the drop-down menu. Optionally enter a Reason Note to provide additional information about the refund, which will be visible to the customer. The maximum length of the Reason Note is 250 characters. Select the Trigger from the drop-down menu. If you are performing the refund, you will likely use Client Request. Under Customer Notification, select whether you want to Notify the Original Contact or send No Notification to your customer. Click Next. Your Return will be in a state of pending. Review your Return. If everything is correct, click Confirm. If there are any errors, click Options and select Cancel from the drop-down menu. Once you click Confirm, a pop-up will appear notifying you that Returns are not reversible. Click OK to again confirm the return. Your Return will now be in the processed state and you are done. Order Refund Policy and Chargebacks Refund Policy When we get a refund request from a customer, we usually direct the request to your support contact so that you may handle it. The only time that we refund an order without consulting you is when a customer who has purchased with a credit card has contacted us about a fraudulent charge. In this case, the order is refunded to prevent a chargeback. Credit card refunds can generally be done up to one year after an order, but PayPal has a 90 day refund limit. The refund fee is 3.5% of the order subtotal. With split accounts, the partner receives either a fixed percentage of the sale price or a certain percentage of the revenue that is left after the FastSpring fee. We often get asked what happens when a refund happens in this circumstance. To answer this question, we will use two examples, based on the way you have asked us to set up your split accounts.
Option 1: Fixed Percent of Sale Price $70.00 (70% of $100) goes into the partner's account, leaving your account with $21.10. If there is a full refund, the $70.00 is removed from the partner's account and put back into your account. The FastSpring fee of $8.90 is put back into your account. The $100 is returned to the customer, and there is a $3.50 fee assessed to your company as a return fee. Option 2: Fixed Percent of the Revenue $63.77 (70% of $100 - $8.90) goes into the partner's account, leaving your account with $27.33. If there is a full refund, the $63.77 is removed from the partner's account and put back into your account. The FastSpring fee of $8.90 is put back into your account. The $100 is returned to the customer, and there is a $3.50 fee assessed to your company as a return fee. Refunds for Bank Transfers Normally.
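The two split-account examples above follow directly from a little arithmetic. This sketch simply restates the numbers quoted in the text (a $100 order, an $8.90 FastSpring fee, a 70% partner share, and the 3.5% return fee):

```python
order_total = 100.00
fastspring_fee = 8.90              # FastSpring fee on this order, as given above
partner_share = 0.70               # partner receives 70%
return_fee = 0.035 * order_total   # 3.5% of the order subtotal = $3.50

# Option 1: fixed percent of the sale price.
partner_1 = partner_share * order_total              # 70.00
seller_1 = order_total - fastspring_fee - partner_1  # 21.10

# Option 2: fixed percent of the revenue left after the FastSpring fee.
partner_2 = partner_share * (order_total - fastspring_fee)  # 63.77
seller_2 = (order_total - fastspring_fee) - partner_2       # 27.33

# On a full refund, the partner's share and the FastSpring fee are returned to the
# seller's account, the $100 goes back to the customer, and the seller pays the
# $3.50 return fee.
print(round(partner_1, 2), round(seller_1, 2))  # 70.0 21.1
print(round(partner_2, 2), round(seller_2, 2))  # 63.77 27.33
print(round(return_fee, 2))                     # 3.5
```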
http://docs.fastspring.com/activity-events-orders-and-subscriptions/refunding-orders
2018-07-15T21:08:16
CC-MAIN-2018-30
1531676588972.37
[array(['/files/12910608/21532114/3/1477949310000/search.png', 'Example of searching for an order using the order reference'], dtype=object) array(['/files/12910608/21532124/2/1481670028000/Options+menu.png', 'Example of the order search results when searching using the order reference, with the OPTIONS command expanded'], dtype=object) array(['/files/12910608/14680149/1/1438977940000/image2015-8-7+16%3A5%3A47.png', None], dtype=object) array(['/files/12910608/14680150/1/1438978017000/image2015-8-7+16%3A7%3A4.png', None], dtype=object) array(['/files/12910608/14680152/1/1438978076000/image2015-8-7+16%3A8%3A3.png', None], dtype=object) array(['/files/12910608/14680153/1/1438978153000/image2015-8-7+16%3A9%3A20.png', None], dtype=object) array(['/files/12910608/14680154/1/1438978193000/image2015-8-7+16%3A10%3A1.png', None], dtype=object) array(['/files/12910608/14680155/1/1438978419000/image2015-8-7+16%3A13%3A46.png', None], dtype=object) array(['/files/12910608/14680156/1/1438978477000/image2015-8-7+16%3A14%3A44.png', None], dtype=object) array(['/files/12910608/14680158/1/1438978578000/image2015-8-7+16%3A16%3A26.png', None], dtype=object) ]
docs.fastspring.com
About the Rotate View Tool T-LAY-001-005 The rotation angles for the Camera and Drawing views are independent; if you rotate the Drawing view 25 degrees and then switch to the Camera view, you can use the rotary table in that view and rotate it to a different angle without affecting the settings in the other view. When using the Rotate View tool, you are only rotating your workspace; you are not changing the actual rotation angle of your drawings. Exporting your project will completely ignore the Rotate View angle. The Rotate View tool has no options in the Tool Properties view.
https://docs.toonboom.com/help/harmony-15/premium/drawing/about-rotate-view-tool.html
2018-07-15T21:12:02
CC-MAIN-2018-30
1531676588972.37
[array(['../Resources/Images/HAR/Stage/Character_Design/HAR11_SketchModel_Rotary_Table.png', 'Rotary Table Rotary Table'], dtype=object) ]
docs.toonboom.com
Difference between revisions of "How to check if mod rewrite is enabled on your server" From Joomla! Documentation Latest revision as of 17:17, 18 October 2012 Many users need to check whether mod_rewrite is enabled on their server. Here is a quick way to test it: 1. Enable SEO in your administrator: In Joomla 1.0: Site -> Global Configuration -> SEO: Search Engine Friendly URLs to Yes. In Joomla 1.5: Site -> Global Configuration -> Site: Search Engine Friendly URLs to Yes, Use Apache mod_rewrite to Yes. (Setting Add suffix to URLs is optional). 2. Rename your htaccess.txt to .htaccess. Next place ONLY the following lines in your .htaccess: RewriteEngine On Options +FollowSymLinks RewriteRule ^joomla\.html? [R=301,L]
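One way to confirm that the rewrite rule fired, outside the browser, is to request the test URL and check for the 301 redirect. A small sketch; the site URL is a placeholder for your own domain, and this assumes the RewriteRule above has a redirect target configured:

```python
import requests

# Placeholder site URL -- replace with your own domain.
response = requests.get(
    "https://www.example.com/joomla.html",
    allow_redirects=False,
    timeout=10,
)

# A 301 status with a Location header suggests mod_rewrite processed the rule;
# a 404 (or the original page) suggests the rule was not applied.
print(response.status_code, response.headers.get("Location"))
```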
https://docs.joomla.org/index.php?title=How_to_check_if_mod_rewrite_is_enabled_on_your_server&diff=76741&oldid=31301
2016-04-29T00:54:51
CC-MAIN-2016-18
1461860109993.2
[]
docs.joomla.org
Difference between revisions of "Plugin Development" From Joomla! Documentation Revision as of 15:13, 8 October 2013 <translate> This page contains many links to documentation concerning Plugin Development for and . A good place to start is with the Reading list ( Development Articles:</translate>
https://docs.joomla.org/index.php?title=Portal:Plugin_Development&diff=104261&oldid=31105
2016-04-29T00:35:37
CC-MAIN-2016-18
1461860109993.2
[]
docs.joomla.org
Changes related to "How do you assign a module to a position?" ← How do you assign a module to a position? This is a list of changes made recently to pages linked from a specified page (or to members of a specified category). Pages on your watchlist are bold. No changes during the given period matching these criteria.
https://docs.joomla.org/index.php?title=Special:RecentChangesLinked&limit=50&target=How_do_you_assign_a_module_to_a_position%3F
2016-04-29T00:15:58
CC-MAIN-2016-18
1461860109993.2
[]
docs.joomla.org
Please note that due to changes in the Twitter API, jTweet is no longer available. This page is an archive of the original jTweet functionality. General Module. You can use the free JB Library plugin, our Zen Grid Framework plugin or another third party extension that loads jQuery. Tweets are not loading, I see an empty module, why? This is possible for a few reasons. Most probably you didn't drag and drop available items. Also, if there is an incorrect user, then it may not display anything. Also, make sure you don't have your tweets protected, or else there is no way to get them from Twitter since they are blocked for public access.
http://docs.joomlabamboo.com/joomla-extensions/jtweet-v2-documentation
2016-04-28T23:48:53
CC-MAIN-2016-18
1461860109993.2
[]
docs.joomlabamboo.com
This part of the reference documentation explains the core functionality that Spring for Apache Hadoop (SHDP) provides to any Spring based application. Chapter 3, Hadoop Configuration, MapReduce, and Distributed Cache describes the Spring support for bootstrapping, initializing and working with core Hadoop. Chapter 4, Working with the Hadoop File System describes the Spring support for interacting with the Hadoop file system. Chapter 5, Working with HBase describes the Spring support for HBase. Chapter 6, Hive integration describes the Hive integration in SHDP. Chapter 7, Pig support describes the Pig support in Spring for Apache Hadoop. Chapter 8, Cascading integration describes the Cascading integration in Spring for Apache Hadoop. Chapter 10, Security Support describes how to configure and interact with Hadoop in a secure environment.
http://docs.spring.io/spring-data/hadoop/docs/1.0.1.RELEASE/reference/html/core.html
2016-04-28T23:58:35
CC-MAIN-2016-18
1461860109993.2
[]
docs.spring.io
Information for "JLDAP/getDN" Basic information: Display title: API15:JLDAP/getDN. Default sort key: JLDAP/getDN. Page length (in bytes): 924. Page ID: 7…
https://docs.joomla.org/index.php?title=API15:JLDAP/getDN&action=info
2016-04-29T01:39:45
CC-MAIN-2016-18
1461860109993.2
[]
docs.joomla.org
Difference between revisions of "Finder Integration Working Group" From Joomla! Documentation Revision as of 07:06, 24 September 2011. Documentation Finder. Status Below is the status as of 18 September 2011: -) - - Site Module - The module appears to be working as intended - Content Plugin (former System Plugin for highlighting) - The plugin appears to be working as intended - Work is in progress to convert it to a reusable content plugin for use with all extensions - A pull request is active against the Platform adding the highlighter HTML Helper and media (Pull Request 371 on Platform) - ).
https://docs.joomla.org/index.php?title=Finder_Integration_Working_Group&diff=62186&oldid=62183
2016-04-29T00:00:33
CC-MAIN-2016-18
1461860109993.2
[]
docs.joomla.org
Difference between revisions of "Components Newsfeeds Categories Edit" <translate> Category Details </translate> <translate> - Title. The Title for this item. This may or may not display on the page, depending on the parameter values you choose.</translate> <translate> -".</translate> <translate> - Description. The description for the item. Category, Subcategory and Web Link descriptions may be shown on web pages, depending on the parameter settings. These descriptions are entered using the same editor that is used for Articles.</translate> [[File:Help30-Article-Image-ToggleEditor-buttons-<translate> en</translate>.png]] <translate> - Article. Click to quickly add an 'Article' link to the description with a popup window.</translate> <translate> - Image. Click to quickly add an 'Image' to the description with a popup window.</translate> <translate> - Page Break. Click to quickly add a 'Page Break' to the description in order to create a paginated article. The location of the page break will be displayed as a simple horizontal line.</translate> <translate> - Read More. Click to quickly add a 'Read more...' link to the description.</translate> [[File:Help30-Article-Image-ToggleEditor-button-<translate> en</translate>.png]] <translate> - Toggle Editor. Turns on or off the editor's description box WYSIWYG features to show HTML markup.</translate> <translate> </translate> <translate> - Parent. The item (category, menu item, and so on) that is the parent of the item being edited.</translate> <translate> - Status. (Published/Unpublished/Archived/Trashed) The published status of the item.</translate> <translate> - Access Level. Who has access to this item. Default options are:</translate> <translate> - Public: Everyone has access</translate> <translate> - Guest: Everyone has access</translate> <translate> - Registered: Only registered users have access</translate> <translate> - Special: Only users with author status or higher have access</translate> <translate> - Super Users: Only super users have access</translate> <translate> Enter the desired level using the drop-down list box. Custom Access Control Levels created will show if they exist.</translate> <translate> - Language. Item language.</translate> <translate> -).</translate> <translate> - Note. Item note. This is normally for the site administrator's use (for example, to document information about this item) and does not show in the front end of the site.</translate> <translate> - Version Note. Optional field to identify this version of the item in the item's Version History window.</translate> <translate> Publishing Options </translate> <translate> This section shows Publishing Options parameters for this Category, as shown below when tab is clicked:</translate> [[File:Help30-Categories-Edit-screen-publish-options-tab-<translate> en</translate>.png|670px]] <translate> The grayed out fields are for information only and may not be edited.</translate> <translate> - Created Date. Date the item(Article, Category, Weblink, etc.) was created.</translate> <translate> - Created by. Optional, choose from a popup window of users. Select User by clicking on the user's name. Defaults to user creating new category if left blank.</translate> <translate> - Modified Date. (Informative only) Date of last modification.</translate> <translate> - Modified By. (Informative only) Username who performed the last modification.</translate> <translate> - Hits. Number of hits on a Category view.</translate> <translate> - ID. 
The unique ID number automatically assigned to this item by Joomla!. This number cannot be changed.<> - Author. Optional entry for an Author name within the metadata. If entered, this creates an HTML meta element with the name attribute of "author" and the content attribute as entered here.<> Permissions </translate> <translate> This section shows permissions for this category. The screen shows as follows.</translate> [[File:Help30-Categories-Edit-screen-permissions-tab-<translate> en</translate>.png|670px]] <translate> To change the permissions for this category, do the following.</translate> <translate> - Select the Group by clicking its title located on the left.</translate> <translate> - Find the desired Action. Possible Actions are:</translate> <translate> - Create: Users can create this category.</translate> <translate> - Delete: Users can delete this category.</translate> <translate> - Edit: Users can edit this category.</translate> <translate> - Edit State: User can change the published state and related information for this category.</translate> <translate> - Edit Own: Users can edit own category created.</translate> <translate> - Select the desired permission for the action you wish to change. Possible settings are:</translate> <translate> - Inherited: Inherited for users in this Group from the Global Configuration, Article Manager Options, or Category permissions.</translate> <translate> - Allowed: Allowed for users in this Group. Note that, if this action is Denied at one of the higher levels, the Allowed permission here will not take effect. A Denied setting cannot be overridden.</translate> <translate> - Denied: Denied for users in this Group.</translate> <translate> - Click Save in Toolbar at top. When the screen refreshes, the Calculated Setting column will show the effective permission for this Group and Action.</translate> <translate> Options </translate> <translate> This shows Options for this Category, as shown below when tab is clicked:</translate> [[File:Help30-Categories-Edit-screen-options-tab-<translate> en</translate>.png]] <translate> - Alternative Layout. Use a different layout from the supplied components view or overrides in the templates.</translate> <translate> - Image. Choose an image to be displayed with this item/category in the front-end.</translate> <translate> - Alt Text. Alternative text used for visitors without access to images.</translate> Toolbar At the top left you will see the toolbar for a Edit Item or New Item Category Manager: Edit A Newsfeeds. Category Manager: Add A New Newsfeeds
https://docs.joomla.org/index.php?title=Help32:Components_Newsfeeds_Categories_Edit&diff=next&oldid=104434
2016-04-29T00:22:38
CC-MAIN-2016-18
1461860109993.2
[]
docs.joomla.org
Overview

What is Alluxio
Alluxio unifies data at memory speed. It is the world's first virtual distributed storage system. It bridges the gap between computation frameworks and storage systems, enabling applications to connect to numerous storage systems through a common interface. Alluxio's memory-centric architecture enables data access at speeds orders of magnitude faster than existing solutions.

In the data ecosystem, Alluxio lies between data-driven applications, such as Apache Spark, Presto, TensorFlow, Apache HBase, Apache Hive, or Apache Flink, and various kinds of persistent storage systems, such as Amazon S3, Google Cloud Storage, OpenStack Swift, GlusterFS, HDFS, IBM Cleversafe, EMC ECS, Ceph, NFS, and Alibaba OSS. Alluxio unifies the data stored in these different storage systems, presenting unified client APIs and a global namespace to its upper-layer data-driven applications.

The Alluxio project originated from AMPLab, UC Berkeley (see papers). It is open source under Apache License 2.0 and is deployed at a wide range of leading companies in the world. It is one of the fastest growing open source projects. In a span of five years, Alluxio has attracted more than 800 contributors from over 200 institutions including Alibaba, Alluxio, Baidu, CMU, Google, IBM, Intel, NJU, Red Hat, Tencent, UC Berkeley, and Yahoo. The project is the data layer of the Berkeley Data Analytics Stack (BDAS) and is also part of the Fedora distribution. Today, Alluxio is deployed in production by hundreds of organizations and runs on clusters exceeding 1,000 nodes.

Downloads | User Guide | Developer Guide | Meetup Group | Issue Tracking | Community Slack Channel | User Mailing List | Videos | Github | Releases

Downloads and More
Released versions of Alluxio are available from the Project Downloads Page. Each release comes with prebuilt binaries compatible with various Hadoop versions. The Building From Master Branch documentation explains how to build the project from source code. Questions can be directed to our User Mailing List. Users who cannot access the Google Group may use its mirror; note that the mirror does not have information before May 2016.
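To make the "unified client API" point above concrete, here is a minimal, hypothetical sketch of reading a file through Alluxio's Java FileSystem client. The path and surrounding setup are assumptions; the class names are from the 1.8 client API.

import alluxio.AlluxioURI;
import alluxio.client.file.FileInStream;
import alluxio.client.file.FileSystem;

public class ReadThroughAlluxio {
    public static void main(String[] args) throws Exception {
        // Obtain a client for the Alluxio file system (the global namespace).
        FileSystem fs = FileSystem.Factory.get();

        // "/data/sample.txt" is a hypothetical path; behind this single
        // namespace the data may live in S3, HDFS, or another under store.
        AlluxioURI path = new AlluxioURI("/data/sample.txt");

        try (FileInStream in = fs.openFile(path)) {
            byte[] buf = new byte[64];
            int read = in.read(buf);
            System.out.println("Read " + read + " bytes through Alluxio");
        }
    }
}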
https://docs.alluxio.io/os/user/1.8/en/Overview.html
2020-08-03T13:05:41
CC-MAIN-2020-34
1596439735810.18
[array(['../img/stack.png', 'Ecosystem'], dtype=object)]
docs.alluxio.io
What is changing with the backup sets This article shows the backup set naming related changes for the Windows/Linux Server workload pages in the Phoenix console. There are similar changes to the NAS and MS-SQL Server workloads. - A new Backup Set Name field is added to the Create File Backup Set dialog box, which contains a unique name for the backup set. You can rename the backup set as required. The Backup Set Name field is not visible in case of the bulk configuration. - Modify the name of the backup set as per your requirement in the Edit Backup Set dialog box. - View the list of backup sets in your Windows/Linux server workload under the File Backup Sets tab. The tooltip on the backup set shows the server name and FQDN hostname. Search for a specific backup set by its name. Type the backup set name in the search box to start the search. - View the details of each backup set associated with a server on the Server Details page. - View the list of backup sets that are part of the backup policy on the Backup policy page. - View the list of jobs triggered for a backup set on the Jobs page. Search for a backup set job by its name on the Jobs page. - Now, view and select a backup set that is attached to a CloudCache. Select one or multiple backup sets corresponding to the administrative group that you want to map in the Attach More Backup Sets dialog box. - View the backup sets attached to your seeding device. Attach backup sets to the seeding device. Type the backup set name in the search box to quickly search for a specific backup set. - Phoenix reports - The following reports now provide information based on the backup set: - Backup Activity Report - Restore Activity Report - Resource Status Report - Storage Consumption by Backup Sets Report 10. View the backup set name on the audit trails details page.
https://docs.druva.com/Backup_Set_Naming_Changes
2020-08-03T12:05:11
CC-MAIN-2020-34
1596439735810.18
[array(['https://docs.druva.com/@api/deki/files/51939/backup_set_name_audit_trail.png?revision=1', 'backup_set_name_audit trail.png'], dtype=object) ]
docs.druva.com
[−][src]Macro tokio:: feature="macros"only. Wait on multiple concurrent branches, returning when the first branch completes, cancelling the remaining branches. The select! macro must be used inside of async functions, closures, and blocks. The select! macro accepts one or more branches with the following pattern: <pattern> = <async expression> (, if <precondition>)? => <handler>, Additionally, the select! macro may include a single, optional else branch, which evaluates if none of the other branches match their patterns: else <expression> The macro aggregates all <async expression> expressions and runs them concurrently on the current task. Once the first expression completes with a value that matches its <pattern>, the select! macro returns the result of evaluating the completed branch's <handler> expression. Additionally, each branch may include an optional if precondition. This precondition is evaluated before the <async expression>. If the precondition returns false, the branch is entirely disabled. This capability is useful when using select! within a loop. The complete lifecycle of a select! expression is as follows: - Evaluate all provided <precondition>expressions. If the precondition returns false, disable the branch for the remainder of the current call to select!. Re-entering select!due to a loop clears the "disabled" state. - Aggregate the <async expression>s from each branch, including the disabled ones. If the branch is disabled, <async expression>is still evaluated, but the resulting future is not polled. - Concurrently await on the results for all remaining <async expression>s. - Once an <async expression>returns a value, attempt to apply the value to the provided <pattern>, if the pattern matches, evaluate <handler>and return. If the pattern does not match, disable the current branch and for the remainder of the current call to select!. Continue from step 3. - If all branches are disabled, evaluate the elseexpression. If none is provided, panic. Notes Runtime characteristics By running all async expressions on the current task, the expressions are able to run concurrently but not in parallel. This means all expressions are run on the same thread and if one branch blocks the thread, all other expressions will be unable to continue. If parallelism is required, spawn each async expression using tokio::spawn and pass the join handle to Avoid racy if preconditions Given that if preconditions are used to disable select! branches, some caution must be used to avoid missing values. For example, here is incorrect usage of delay with if. The objective is to repeatedly run an asynchronous task for up to 50 milliseconds. However, there is a potential for the delay completion to be missed. use tokio::time::{self, Duration}; async fn some_async_work() { // do work } #[tokio::main] async fn main() { let mut delay = time::delay_for(Duration::from_millis(50)); while !delay.is_elapsed() { tokio::select! { _ = &mut delay, if !delay.is_elapsed() => { println!("operation timed out"); } _ = some_async_work() => { println!("operation completed"); } } } } In the above example, delay.is_elapsed() may return true even if delay.poll() never returned Ready. This opens up a potential race condition where delay expires between the while !delay.is_elapsed() check and the call to select! resulting in the some_async_work() call to run uninterrupted despite the delay having elapsed. 
One way to write the above example without the race would be:

use tokio::time::{self, Duration};

async fn some_async_work() {
    // do work
}

#[tokio::main]
async fn main() {
    let mut delay = time::delay_for(Duration::from_millis(50));

    loop {
        tokio::select! {
            _ = &mut delay => {
                println!("operation timed out");
                break;
            }
            _ = some_async_work() => {
                println!("operation completed");
            }
        }
    }
}

Fairness
select! randomly picks a branch to check first. This provides some level of fairness when calling select! in a loop with branches that are always ready.

Panics
select! panics if all branches are disabled and there is no provided else branch. A branch is disabled when the provided if precondition returns false or when the pattern does not match the result of the <async expression>.

Examples
Basic select with two branches.

async fn do_stuff_async() {
    // async work
}

async fn more_async_work() {
    // more here
}

#[tokio::main]
async fn main() {
    tokio::select! {
        _ = do_stuff_async() => {
            println!("do_stuff_async() completed first")
        }
        _ = more_async_work() => {
            println!("more_async_work() completed first")
        }
    };
}

Basic stream selecting.

use tokio::stream::{self, StreamExt};

#[tokio::main]
async fn main() {
    let mut stream1 = stream::iter(vec![1, 2, 3]);
    let mut stream2 = stream::iter(vec![4, 5, 6]);

    let next = tokio::select! {
        v = stream1.next() => v.unwrap(),
        v = stream2.next() => v.unwrap(),
    };

    assert!(next == 1 || next == 4);
}

Collect the contents of two streams. In this example, we rely on pattern matching and the fact that stream::iter is "fused", i.e. once the stream is complete, all calls to next() return None.

use tokio::stream::{self, StreamExt};

#[tokio::main]
async fn main() {
    let mut stream1 = stream::iter(vec![1, 2, 3]);
    let mut stream2 = stream::iter(vec![4, 5, 6]);

    let mut values = vec![];

    loop {
        tokio::select! {
            Some(v) = stream1.next() => values.push(v),
            Some(v) = stream2.next() => values.push(v),
            else => break,
        }
    }

    values.sort();
    assert_eq!(&[1, 2, 3, 4, 5, 6], &values[..]);
}

Using the same future in multiple select! expressions can be done by passing a reference to the future. Doing so requires the future to be Unpin. A future can be made Unpin by either using Box::pin or stack pinning. Here, a stream is consumed for at most 1 second.

use tokio::stream::{self, StreamExt};
use tokio::time::{self, Duration};

#[tokio::main]
async fn main() {
    let mut stream = stream::iter(vec![1, 2, 3]);
    let mut delay = time::delay_for(Duration::from_secs(1));

    loop {
        tokio::select! {
            maybe_v = stream.next() => {
                if let Some(v) = maybe_v {
                    println!("got = {}", v);
                } else {
                    break;
                }
            }
            _ = &mut delay => {
                println!("timeout");
                break;
            }
        }
    }
}

Joining two values using select!.

use tokio::sync::oneshot;

#[tokio::main]
async fn main() {
    let (tx1, mut rx1) = oneshot::channel();
    let (tx2, mut rx2) = oneshot::channel();

    tokio::spawn(async move {
        tx1.send("first").unwrap();
    });

    tokio::spawn(async move {
        tx2.send("second").unwrap();
    });

    let mut a = None;
    let mut b = None;

    while a.is_none() || b.is_none() {
        tokio::select! {
            v1 = (&mut rx1), if a.is_none() => a = Some(v1.unwrap()),
            v2 = (&mut rx2), if b.is_none() => b = Some(v2.unwrap()),
        }
    }

    let res = (a.unwrap(), b.unwrap());

    assert_eq!(res.0, "first");
    assert_eq!(res.1, "second");
}
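The runtime-characteristics note earlier points out that true parallelism requires spawning each expression. As an illustrative sketch (not part of the original docs; work_a and work_b are made-up names), the two units of work below are spawned as separate tasks and their JoinHandles, which implement Future in tokio 0.2, are passed to select!:

async fn work_a() -> u32 {
    // heavier work could run here on its own task
    1
}

async fn work_b() -> u32 {
    2
}

#[tokio::main]
async fn main() {
    // Spawning lets the two branches run in parallel on the runtime's
    // thread pool instead of merely concurrently on the current task.
    let handle_a = tokio::spawn(work_a());
    let handle_b = tokio::spawn(work_b());

    tokio::select! {
        res = handle_a => println!("work_a finished first: {:?}", res),
        res = handle_b => println!("work_b finished first: {:?}", res),
    }
}

Each branch receives a Result wrapping the task's output, so a panicked or cancelled task surfaces as a JoinError rather than taking down the selecting task.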
https://docs.rs/tokio/0.2.22/tokio/macro.select.html
2020-08-03T12:38:02
CC-MAIN-2020-34
1596439735810.18
[]
docs.rs
Product Index
Three different room models with mirrored walls, floors, and ceilings, as well as two different stage props. Each room and stage contains different lighting elements that reflect into infinity for an exciting, bright, colorful render. The product includes six (6) preloaded scenes with lighting and render settings. Simply load a scene, add a character, and then choose your light colors from the included 12 light shader presets. Scenes also include point lights which can be adjusted for intensity or moved around to better light your scene.
http://docs.daz3d.com/doku.php/public/read_me/index/63357/start
2020-08-03T11:58:16
CC-MAIN-2020-34
1596439735810.18
[]
docs.daz3d.com
Require vulnerability approval As an Organization Administrator, you can require administrative approval when closing vulnerabilities in your organization. To configure this: In the user menu, select Policy management > Vulnerability management > Vulnerability behavior. Select the box next to Require administrator approval when closing vulnerabilities. Choose the statuses and severities of vulnerabilities that should automatically go into a Pending state when a user moves to close them. When a user requests to close any qualifying vulnerabilities, Contrast will notify you that your review is needed. To qualify for administrative approval, both a status and severity that you select in this configuration must apply to the vulnerability being closed. Each vulnerability status will remain pending until you submit your review of the closure. If you deny the closure of a vulnerability, you must provide a reason for denial. Once confirmed, your feedback appears in the vulnerability's Discussion tab. If you disable the feature, any pending closures are automatically approved. Note While in a Pending state, the vulnerability's previous status still applies for the purpose of organizational reports and statistics.
https://docs.contrastsecurity.com/en/organization-vulnerability-approval.html
2020-08-03T12:44:03
CC-MAIN-2020-34
1596439735810.18
[]
docs.contrastsecurity.com
Vulnerability trend reports Vulnerability management is a vital responsibility of any security team. Use the Vulnerability Trend reports to recognize the vulnerabilities your applications face and how well they're being managed so that you have a better understanding of your security posture. Access data Select Reports in the User menu to go to the Vulnerability Trend dashboard. Click the View link to see the graphs in more detail. Select New to see a graph of new vulnerabilities. Select Total to see a graph of all reported vulnerabilities compared to all remediated vulnerabilities. Each black data point represents the total number of Suspicious, Confirmed and Reported vulnerabilities for that date. Each green data point represents the total number of vulnerabilities marked as Not A Problem, Remediated or Fixed. Hovering over each data point generates a tooltip with status breakdowns. Filter vulnerabilities: Each report defaults to all applications, servers and rules, but you can filter vulnerabilities by clicking in the fields above the graph. The following table outlines the categories that you can use to create a custom report. Save reports You can save filter criteria to recall any customized report at a later time. Saved reports are at the User level, so each of you have your own defined list of saved vulnerability trend reports. You can edit or delete these reports at any time. To save a report view, click the star icon at the top right of the report page. This generates a popup with a field to name the report. Once saved, the named report appears next to the Vulnerability Trend heading with a dropdown menu. Each time you come to the Vulnerability Trend page, the menu shows all of your saved reports as well as an option to Start a new report. Rename reports: When viewing a saved report, hover over the star icon to generate a Manage Report tooltip. Click the icon to produce a popup with a field to rename the report and buttons to Cancel, Remove or Save. Edit and remove reports: If you change filter options while viewing a saved report, the star icon changes to an unsaved state and Edited appears next to the report name. Click the icon to generate a popup menu to Save Existing or Save As New. Choose Save Existing to update the saved report name with the current filters and remove the Edited status. Choose Save As New to save the report view with the current filters as a new report under a different name. Click Remove to permanently delete the saved report that you're currently viewing. Contrast automatically takes you to the default Vulnerability Trend page view and removes the report name from the dropdown menu. Start new reports: To clear unsaved edits to an existing report and start over with the report defaults, choose the Start a new report option in the dropdown menu. The report name changes to New Report. Manage reports: When you've created more than five saved reports, a Manage link appears within the Saved Reports dropdown. Click the link to go to the Manage Saved Reports dialog. Select the checkbox next to each report that you want to remove or use the Select All checkbox. To rename a report in the dialog, click the report name and edit it inline. You can also use the search field to find reports. Export reports Create a timestamped PDF report of the Vulnerability Trend to capture a snapshot of your vulnerability management by clicking the Export icon in the upper right hand corner of the page. 
Contrast immediately generates the report and prompts you to download when it’s ready. Each PDF report includes a summary of the variables included in your customized view, the trend graphic, and a table of the metrics and breakdowns of each data point.
https://docs.contrastsecurity.com/en/vulnerability-trend-reports.html
2020-08-03T12:11:22
CC-MAIN-2020-34
1596439735810.18
[]
docs.contrastsecurity.com
# How to deploy a MeiliSearch instance on DigitalOcean

# Create an out-of-the-box MeiliSearch

# 1. Create a new "droplet"
A "droplet" is a set of resources, such as a Virtual Machine or a Server, on which you can run your own applications. On any DigitalOcean page, when you are logged in, you will find a menu in the upper-right corner. Click on "Create" -> "Droplets".

# 2. Select MeiliSearch snapshot
By default, DigitalOcean will display the "Distributions" tab. Select the "Marketplace" tab and search for "meili". Select it.

# 3. Select your plan
Select your plan. Plans start at $5 (click on "See all plans" for more options). Memory-optimized options will give you better results for a production environment on big datasets.

# 4. Select a region for your droplet
Select the region where you want to deploy your droplet. Remember, the closer you are to your users or customers, the better their search experience with MeiliSearch will be.

# 5. Add your SSH key
Select your SSH key in order to be able to connect to your droplet later. If you don't see your SSH key, add yours to your account. If you need help with this, visit this link.
You can also set a password for the root user if you prefer this authentication method.

# 6. Choose your droplet name and tags
Here you can select the name that will be visible everywhere in your DigitalOcean account. Choose wisely! Tags are a very good way to know who created resources and to organize resources or projects. Try to always add some tags to make the server's purpose clear.

# 7. Finally click on Create Droplet

# 8. Your MeiliSearch is running (with no config)
Instance creation in progress... ... done!

# 9. Test MeiliSearch
Copy the public IP address and paste it in your browser. If this screen is shown, your MeiliSearch is now ready!

# Configure production settings in your MeiliSearch Droplet
Configuring your MeiliSearch from a DigitalOcean droplet is very straightforward. Establish an SSH connection with your droplet and a script will guide you through the process.

# 1. Make your domain name point to your droplet
If you want to use your own domain name (or sub-domain), add an A record in your domain name provider's account. This should work out of the box. Your domain should be usable for your MeiliSearch.

# 2. Set API KEY and SSL (HTTPS)
MeiliSearch is running with a development configuration. This means that you haven't set up an API KEY (anyone can read/write from your MeiliSearch) and you aren't using HTTPS yet. But no worries, the configuration process is automated and very simple. Just connect via SSH to your new MeiliSearch Droplet and answer a few questions:

# 2.1. Run the configuration script
Open a terminal and start a new SSH connection with the IP you got from DigitalOcean. Write in your terminal ssh root@<your-ip-address> and press Enter to establish the connection. Write yes and press Enter to accept the authentication process.
A script will run automatically, asking for your settings and desired configuration. If you want to run this script again later, you can do so by typing:
sh /var/opt/meilisearch/scripts/first-login/000-set-meili-env.sh

# 3. Enjoy your ready-to-use MeiliSearch droplet
Enjoy!
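If you prefer the terminal to the browser for that final check, something along these lines should work; the IP placeholder is yours to fill in, and this assumes your MeiliSearch version exposes the /health route (no API key required). If the route isn't available on your version, opening the IP in a browser as in step 9 is enough.

# Replace <your-ip-address> with the droplet's public IP or your domain.
curl -i "http://<your-ip-address>/health"
# An HTTP 2xx response indicates the MeiliSearch instance is up.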
https://docs.meilisearch.com/resources/howtos/digitalocean_droplet.html
2020-08-03T11:59:29
CC-MAIN-2020-34
1596439735810.18
[array(['/digitalocean/01.create.png', 'Create droplet'], dtype=object) array(['/digitalocean/02.marketplace.png', 'Marketplace'], dtype=object) array(['/digitalocean/03.select-plan.png', 'Select plan'], dtype=object) array(['/digitalocean/04.select-region.png', 'Select region'], dtype=object) array(['/digitalocean/05.add-ssh-key.png', 'Add ssh key'], dtype=object) array(['/digitalocean/06.droplet-name.png', 'Droplet name'], dtype=object) array(['/digitalocean/06.add-tags.png', 'Add tags'], dtype=object) array(['/digitalocean/07.create-droplet.png', 'Create droplet'], dtype=object) array(['/digitalocean/08.creating.png', 'Creating'], dtype=object) array(['/digitalocean/08.created-ip.png', 'Created'], dtype=object) array(['/digitalocean/09.copy-ip.png', 'Copy IP'], dtype=object) array(['/digitalocean/09.test-meili.png', 'Test MeiliSearch'], dtype=object) array(['/digitalocean/11.domain-a-record.png', 'Domain to MeiliSearch'], dtype=object) array(['/digitalocean/11.working-domain.png', 'Domain to MeiliSearch'], dtype=object) array(['/digitalocean/12.open-terminal-ssh.png', 'Terminal ssh'], dtype=object) array(['/digitalocean/13.finish.png', 'Enjoy'], dtype=object)]
docs.meilisearch.com
Design by Contract for statecharts¶ About Design by Contract¶ Design by Contract (DbC) was introduced by Bertrand Meyer and popularised through his object-oriented Eiffel programming language. Several other programming languages also provide support for DbC. The main idea is that the specification of a software component (e.g., a method, function or class) is extended with a so-called contract that needs to be respected when using this component. Typically, the contract is expressed in terms of preconditions, postconditions and invariants.. These specifications are referred to as “contracts”, in accordance with a conceptual metaphor with the conditions and obligations of business contracts. — Wikipedia DbC for statechart models¶ While DbC has gained some amount of acceptance at the programming level, there is hardly any support for it at the modeling level. Sismic aims to change this, by integrating support for Design by Contract for statecharts. The basic idea is that contracts can be defined on statechart componnents (states or transitions), by specifying preconditions, postconditions, and invariants on them. At runtime, Sismic will verify the conditions specified by the contracts. If a condition is not satisfied, a ContractError will be raised. More specifically, one of the following 4 error types wil be raised: PreconditionError, PostconditionError, or InvariantError. Contracts can be specified for any state contained in the statechart, and for any transition contained in the statechart. A state contract can contain preconditions, postconditions, and/or invariants. The semantics for evaluating a contract is as follows: - - For states: - - state preconditions are checked before the state is entered (i.e., before executing on entry), in the order of occurrence of the preconditions. - state postconditions are checked after the state is exited (i.e., after executing on exit), in the order of occurrence of the postconditions. - state invariants are checked at the end of each macro step, in the order of occurrence of the invariants. The state must be in the active configuration. - - For transitions: - - the preconditions are checked before starting the process of the transition (and before executing the optional transition action). - the postconditions are checked after finishing the process of the transition (and after executing the optional transition action). - the invariants are checked twice: one before starting and a second time after finishing the process of the transition. Defining contracts in YAML¶ Contracts can easily be added to the YAML definition of a statechart (see Defining statecharts in YAML) through the use of the contract property. Preconditions, postconditions, and invariants are defined as nested items of the contract property. The name of these optional contractual conditions is respectively before (for preconditions), after (for postconditions), and always (for invariants): contract: - before: ... - after: ... - always: ... Obviously, more than one condition of each type can be specified: contract: - before: ... - before: ... - before: ... - after: ... A condition is an expression that will be evaluated by an Evaluator instance (see Include code in statecharts). 
contract: - before: x > 0 - before: y > 0 - after: x + y == 0 - always: x + y >= 0 Here is an example of a contracts defined at state level: statechart: name: example root state: name: root contract: - always: x >= 0 - always: not active('other state') or x > 0 If the default PythonEvaluator is used, it is possible to refer to the old value of some variable used in the statechart, by prepending __old__. This is particularly useful when specifying postconditions and invariants: contract: always: d > __old__.d after: (x - __old__.x) < d See the documentation of PythonEvaluator for more information. Executing statecharts containing contracts¶ The execution of a statechart that contains contracts does not essentially differ from the execution of a statechart that does not. The only difference is that conditions of each contract are checked at runtime (as explained above) and may raise a subclass of ContractError. from sismic.interpreter import Interpreter, Event from sismic.io import import_from_yaml statechart = import_from_yaml(filepath='examples/elevator/elevator_contract.yaml') # Make the run fails statechart.state_for('movingUp').preconditions[0] = 'current > destination' interpreter = Interpreter(statechart) interpreter.queue('floorSelected', floor=4) interpreter.execute() Here we manually changed one of the preconditions such that it failed at runtime. The exception displays some relevant information to help debug: Traceback (most recent call last): ... sismic.exceptions.PreconditionError: PreconditionError Object: BasicState('movingUp') Assertion: current > destination Configuration: ['active', 'floorListener', 'movingElevator', 'floorSelecting', 'moving'] Step: MicroStep(transition=Transition('doorsClosed', 'movingUp', event=None), entered_states=['moving', 'movingUp'], exited_states=['doorsClosed']) Context: - current = 0 - destination = 4 - doors_open = False If you do not want the execution to be interrupted by such exceptions, you can set the ignore_contract parameter to True when constructing an Interpreter. This way, no contract checking will be done during the execution.
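If you want contract checking to stay enabled but prefer to handle violations yourself rather than let them interrupt execution, a minimal sketch could look like the following; it reuses the elevator example above, and printing the error is just one possible way to report it.

from sismic.exceptions import ContractError
from sismic.interpreter import Interpreter
from sismic.io import import_from_yaml

statechart = import_from_yaml(filepath='examples/elevator/elevator_contract.yaml')
interpreter = Interpreter(statechart)
interpreter.queue('floorSelected', floor=4)

try:
    interpreter.execute()
except ContractError as error:
    # PreconditionError, PostconditionError and InvariantError are all
    # subclasses of ContractError, so this catches any violated condition.
    print('Contract violated:', error)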
https://sismic.readthedocs.io/en/latest/contract.html?utm_source=stately&utm_medium=email&utm_campaign=design-by-contract-or-how-to-accidentally-order
2020-08-03T11:19:50
CC-MAIN-2020-34
1596439735810.18
[]
sismic.readthedocs.io
@Generated(value="OracleSDKGenerator", comments="API Version: 20160918") public final class GenerateAutonomousDatabaseWalletDetails extends Object Details to create and download an Oracle Autonomous Database wallet. Note: Objects should always be created or deserialized using the GenerateAutonomousDatabaseWalletDetails.Builder. This model distinguishes fields that are null because they are unset from fields that are explicitly set to null. This is done in the setter methods of the GenerateAutonomousDatabaseWallet={"generateType","password"}) @Deprecated public GenerateAutonomousDatabaseWalletDetails(GenerateAutonomousDatabaseWalletDetails.GenerateType generateType, String password) public static GenerateAutonomousDatabaseWalletDetails.Builder builder() Create a new builder. public GenerateAutonomousDatabaseWalletDetails.GenerateType getGenerateType() The type of wallet to generate. SINGLE is used to generate a wallet for a single database. ALL is used to generate wallet for all databases in the region. public String getPassword() The password to encrypt the keys inside the wallet. The password must be at least 8 characters long and must include at least 1 letter and either 1 numeric character or 1 special character. public Set<String> get__explicitlySet__() public boolean equals(Object o) equalsin class Object public int hashCode() hashCodein class Object public String toString() toStringin class Object
https://docs.cloud.oracle.com/en-us/iaas/tools/java/1.17.4/com/oracle/bmc/database/model/GenerateAutonomousDatabaseWalletDetails.html
2020-08-03T13:10:55
CC-MAIN-2020-34
1596439735810.18
[]
docs.cloud.oracle.com
Welcome Automation in Mailchimp
- 1 Create a new campaign and choose "Automated Email". On the next page, there is an option to create a "Welcome new subscribers" automation; pick this, give it a name, and select which email list you want to send it to. Once created, you need to switch over to the advanced settings by clicking "Use advanced settings" at the top, just below the automation title. This will allow you to set more advanced segment conditions.
- 2 Sometimes, Mailchimp can delay its notification to Coupon Carrier when a new subscriber is added. This can cause the welcome email to be sent before Coupon Carrier has added the unique code to the subscriber, which would result in an email without a code in it. To prevent this from happening, you should add a segment filter to the automation with the condition to only send it if the Coupon field isn't blank. This will cause the automation to wait until the field isn't blank before the email is sent. You can't edit an automated email that is currently sending.
- 3 Finally, start the automation to finalize the configuration.
https://docs.couponcarrier.io/article/50-how-to-send-unique-codes-to-new-mailchimp-subscribers
2020-08-03T12:38:32
CC-MAIN-2020-34
1596439735810.18
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/595227fa0428637ff8d41632/images/5f15789b2c7d3a10cbaaf93b/file-awjnKTmUrb.gif', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/595227fa0428637ff8d41632/images/5f15800b04286306f80716b1/file-0UlQbtxVar.gif', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/595227fa0428637ff8d41632/images/5f1580882c7d3a10cbaaf98c/file-ZmxHCvdiCa.gif', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/595227fa0428637ff8d41632/images/5bd190982c7d3a01757a624e/file-BH8HhRStGF.png', None], dtype=object) ]
docs.couponcarrier.io
Stitch SOP Summary[edit] The Stitch SOP is used to stretch two curves or surfaces to cover a smooth area. It can also be used to create certain types of upholstered fabrics such as cushions and parachutes. If a second input is given, it must contain one surface that the primitives in the first input can stitch to. The left input can contain either faces or surfaces; in either case, each primitive in the first input is stitched to a parametric area of the surface in the second input in such a way that the parametric area allocated to each primitive is the same and the size of all areas added together equals the parametric range specified in the R Width (see below). Please refer to the Align SOP for a discussion of "left" and "right" primitives as well as the option of an auxiliary input. Parameters - Page Group group - Which primitives to stitch. If blank, it stitches the entire input. Accepts patterns, as described in Pattern Matching. Stitch stitchop - ⊞ - Stitches sub-groups of n primitives or patterns of primitives. - All Primitives all- - Groups of N Primitives group- - Skip Every Nth Primitive skip- N inc - The value entered for N determines the pattern of primitives stitched. Wrap Last to First loop - If enabled, it connects the beginning of the first primitive in the left input to the end of the last primitive in the same input. If only one primitive exists, its ends will be stitched together. Direction dir - ⊞ - Allows stitching along either the U or V parametric direction. - in U ujoin- - in V vjoin- Tolerance tolerance - This parameter minimizes modification to the input sources. A smaller value creates less modification. Bias bias - Determines which primitive remains unaffected. The values go from 0 - 1, where 0 - first, and 1 - last. Left UV leftuv - ⊞ - Point on each left / right primitive at which to begin / end the stitch. leftuv1- leftuv2- Right UV rightuv - ⊞ - Point on each left / right primitive at which to begin / end the stitch. rightuv1- rightuv2- LR Width lrwidth - ⊞ - The first value represents the width of the left stitch. The second value represents the width of the right stitch. lrwidth1- lrwidth2- Stitch dostitch - If selected, move a single row from each primitive to coincide. Tangent dotangent - If selected, modifies neighbouring rows on each primitive to create identical slopes at the given rows. Sharp Partials sharp - If selected, creates sharp corners at the ends of the stitch when the stitch partially spans a primitive. Fixed Intersection fixed - When the tangent option is on, this option allows some flexibility as to which side of each slope is modified. LR Scale lrscale - ⊞ - Use this parameter to control the direction and position of the tangential slopes. lrscale1- lrscale2-.
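As an aside (not part of the original parameter reference), these parameters can also be driven from a Python script using the scripting names listed above; the operator name 'stitch1' is a placeholder for whatever your Stitch SOP is called in the network.

# Hypothetical network: a Stitch SOP named 'stitch1'.
stitch = op('stitch1')

stitch.par.group = ''         # empty group = stitch every input primitive
stitch.par.dostitch = True    # move one row from each primitive to coincide
stitch.par.dotangent = True   # match slopes at the stitched rows
stitch.par.tolerance = 0.01   # keep modification of the inputs small
stitch.par.bias = 0           # 0 leaves the first primitive unaffected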
https://docs.derivative.ca/Stitch_SOP
2020-08-03T12:48:59
CC-MAIN-2020-34
1596439735810.18
[]
docs.derivative.ca
Organize business logic in your app with components Important Some of the functionality described in this release plan has not been released. Delivery timelines may change and projected functionality may not be released (see Microsoft policy). Learn more: What's new and planned Feature details Use the Power Apps formula language to create reusable business logic across apps tied to data events. Reusable logic enables makers to build apps with complex business logic quickly, using the Excel-like language in reusable components within and across apps.
https://docs.microsoft.com/en-us/power-platform-release-plan/2020wave2/power-apps/organize-business-logic-app-components
2020-08-03T13:28:15
CC-MAIN-2020-34
1596439735810.18
[]
docs.microsoft.com
BuddyBeacon: Watching once, or can check their location repeatedly. Locate on web site To locate beacons on the web site just add the buddy. To change the refresh interval when viewing at the web site click on the green clock symbol in the very bottom left of the window. You can also set your Timezone from the dropdown then refresh the page. Beacon Controls Beacons and trackers are controlled from Plus "+" button, BuddyBeacon menu. The menu contains these additional items that relate to watch\track: - Watch/track enters watching mode, where Viewranger will regularly poll for the server for the latest position of the selected buddy. - Locate now requests a one-time update of a buddies position. Locate now To locate an individual buddy. You can also locate an individual buddy by going to the buddy list, highlighting the buddy, then long press and choosing 'Locate now'. Note: Locate Now option does not require Watch/track to be switched on. Watch / track To watch an individual buddy and track it over time. You will be shown a list of buddies and trackers to choose from. It includes an item to add a new buddy or tracker. You will be asked to choose the interval to report on the beacon locations. You can also watch a list of buddies by going to the buddy list, highlighting the buddy, then long press and choosing Watch/track. You can use the Options, BuddyBeacon, Stop watch / track to stop the interval reporting for the location. Note: if the located buddy is outside the bounds of the active coordinate/country system, ViewRanger does not change to show that country - you need to do that yourself using the Organizer's Maps list. Buddy select list Selecting any beacon control option will prompt you to select the buddy you wish to watch, Locate or Navigate to. - If there are no buddies, you will be prompted to add one. - Or, you can select Add buddy or tracker to do this manually. You will need the username (case sensitive) and PIN for the buddy in order to successfully request positions. View Buddy on Map Buddies are displayed on the map using the selected icon. - The selection text shows the buddies location and the time of the most recent position update. - A trail of recent positions may also be shown. If you tap on a buddy from the Map view, you will see the Buddy context menu. - Standard options for viewing or editing details, entering watch mode, polling for a position now, etc. - Edit gives you the option to change the Buddy's icon or PIN - use this if your Buddy changes there PIN to re-establish access to their positions. - Navigate to buddy/tracker will enter Navigation mode for the selected buddy, as well as entering Watch/track mode for that buddy. Buddy list To access the Buddy Beacon list view, tap the menu tab (in green bar), BuddyBeacon.. - If ViewRanger has a position for a buddy, it will show the time of the most recent position. - Buddy positions are not remembered after exiting Viewranger. - Tap on a Buddy to view the Buddy context menu. The options here are similar to the Buddy context menu from the map screen. - The check-boxes indicate which Buddies will be included in watch mode. From this screen, multiple buddies can be selected for watching. - Press New / Plus button (Options button in old software), BuddyBeacon, Watch/track or Locate now to enter watch mode or locate your selected buddies.
https://docs.viewranger.com/article/35-watching-a-beacon-android-and-ios
2020-08-03T12:37:07
CC-MAIN-2020-34
1596439735810.18
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5cb094472c7d3a392f9cfa3c/images/5d271b64042863478674c58e/file-XnL3Cq9Zm1.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5cb094472c7d3a392f9cfa3c/images/5d271b042c7d3a2ec4bebb05/file-nbbqlOYsZH.png', None], dtype=object) ]
docs.viewranger.com
Sonatype Nexus IQ plugin The Release Sonatype Nexus IQ plugin is an Release plugin that enables the evaluation of a binary within the Nexus IQ server. Important Place the latest Release Sonatype Nexus IQ plugin jar file under the plugins directory in XLR and restart the XLR server. Link to download the Sonatype Nexus IQ CLI jar file: NexusIQCLI_jar Features This plugin creates the Evaluate Binary task that enables the evaluation of a binary within the Nexus IQ server. Requirements The Sonatype Nexus IQ plugin requires the following: - A Nexus IQ Server running - A Sonatype Nexus IQ CLI jar file Set up a connection to the Nexus IQ Server To set up a connection to the Nexus IQ server: - In the top navigation bar, click Settings > Shared configuration Under configurations, beside Nexusiq: Server, click - In URL, enter the url where the Nexus Iq server is running - In Username, enter the username of the server - In Password, enter the password of the server - In CLI jar, enter the path of the CLI jar file - To test the connection, click Test - To save the configuration, click Save Start a release for Release Sonatype Nexus IQ plugin - Create a folder for sonatype-nexus-iq - Add a template for the created folder - In the template, add the Evaluate Binary task which comes under Nexusiq In the Evaluate Binary task, provide: - The location of the binary to be evaluated, along with the access username and password for the location if needed - The name of the Application ID (Public ID) - The stage of the release to execute the binary - Create a new release in the template - Start the release by clicking the Start Release button Tile and Dashboard configuration - On the completed release, select the Release Dashboard option from the dropdown list in the Show field which is present on the left side of the screen - Click the Configure Dashboard button on the right side of the screen - Click the Add Tiles button and add the NexusIQ tile from the available list - Click the Configure option present in the NexusIQ tile. - In the Tile Configuration window, select the Nexus Iq server, Application Id, Security level label and click Save - After clicking the Save button, the NexusIQ dashboard appears on the screen showing the details in the form of a dashboard Report creation To create a report: - Click Reports - Go to Release audit report - Click the button Generate new report - Select Time period - Go to Preview results button - Click the Generate report button - Download the generated report - Extract it and verify the extracted reports On Success: In the extracted folder’s root directory, there is an overall report and you can find reports for individual releases in the reports folder. In individual release reports, for plugins without CoC information, the Security and Compliance tab will not appear; but for plugins with CoC, you can see the tab. On Failure: The created report for a failed task should show the Compliance check as failed. Release Notes Release Sonatype Nexus IQ plugin 9.7.0 Bug fixes - [XLINT-1413] - Fixed evaluate binary task’s execution behaviour Release Sonatype Nexus IQ plugin 9.6.0 - Added compatibility with Release 9.6.0
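For context, the same CLI jar that the plugin wraps can also be run by hand outside of Release. The sketch below is an assumption based on Sonatype's standard CLI flags; the jar name, server URL, credentials, application ID, stage, and binary path are all placeholders and should be checked against your CLI version's documentation.

# Placeholders throughout: adjust to your environment.
java -jar nexus-iq-cli.jar \
  -s http://nexus-iq.example.com:8070 \
  -a scan-user:scan-password \
  -i my-application-id \
  -t build \
  target/my-app.jar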
https://docs.xebialabs.com/v.9.7/release/how-to/setup-sonatype-nexus-iq/
2020-08-03T12:38:31
CC-MAIN-2020-34
1596439735810.18
[array(['/static/XLR_NEXUS_IQ_CONNECTION-50211f8b2126e34ab3501fde71785a67.png', 'XLR_NEXUS_IQ_CONNECTION'], dtype=object) array(['/static/Evaluate_Binary_Release-3de635d642f76df4bc3dd9a8ee1a0b07.png', 'Evaluate_Binary_Release'], dtype=object) array(['/static/Evaluate_Binary_Running_Release-c9ed43cc301f17b0581674fb1d1a824d.png', 'Evaluate_Binary_Running_Release'], dtype=object) array(['/static/Tile-8c6e774dc208170b78d4aed5d4e5fcb7.png', 'Tile Configuration'], dtype=object) array(['/static/Dashboard-82a3e3ec14c953ab22104bbd06ffd374.png', 'Dashboard Configuration'], dtype=object) array(['/static/ReportGeneration-1260999586865b90e755f20d75c177c3.png', 'Report_Generation'], dtype=object) array(['/static/Report_xl-0100290528ead72de8343295b4acdad9.png', 'Report_xl'], dtype=object) array(['/static/Failure_Report-1ad185717a421fc568328818c82a4ffc.png', 'Failure_Report'], dtype=object) array(['/static/Failure_Report_xl-580252d0a392e50d7fb057ee350bcb07.png', 'Failure_Report_xl'], dtype=object) ]
docs.xebialabs.com
This chapter describes how to add search operations and lexicon analysis to your Server-Side JavaScript modules and extensions using the JSearch library module. This chapter includes the following sections: This chapter provides background, design patterns, and examples of the JSearch library module. For the function signatures and descriptions, see the JSearch documentation under JavaScript Library Modules in the MarkLogic Server-Side JavaScript Function Reference. You can also use the Node.js Client API to integrate search operations and lexicon analysis into your client-side code. For details, see the Node.js Application Developer's Guide. This section provides a high level overview of the features and design patterns of the JSearch library. This section covers the following topics: You can use the JSearch library to perform most of the query operations available through the cts built-in functions and the Search API, including the following: cts:parse, and cts queries. Libraries can be imported as JavaScript MJS modules. This is the preferred import method. The following table provides an overview of the key top level JSearch methods. All these methods are effectively query builders. You can chain additional methods to them to refine and produce results. For details, see Query Design Pattern. The API also includes helper functions, not listed here, for constructing complex inputs such as lexicon references, facet definitions, and heatmap definitions. For a complete list of functions, see the MarkLogic Server-Side JavaScript Function Reference. The top level JSearch operations, such as document search, lexicon value queries, and lexicon tuple queries use a pipeline pattern for defining the query and customizing results. The pipeline mirrors steps MarkLogic performs when evaluating a query. The pipeline stages vary by operation, but can include steps such as query criteria definition, result ordering, and result transformations. Building and evaluating a query consists of the following steps: If you omit all the pipeline stages in Step 2, then you retrieve the default slice from all selected resources. For example, all the documents in the database or all values or tuples in the selected lexicon(s). Consider the case of a document search. The following example (1) selects documents as the resource; (2) defines the query and customizes the result using the where, orderBy, slice, and map pipeline stages; (3) specifies the returnQueryPlan option using the withOptions method; and then (4) evaluates the assembled query and gets results. import * as jsearch from '/MarkLogic/jsearch.mjs'; jsearch.documents() // 1. resource selection .where(cts.parse('title:california', // 2. query defn pipeline {title: cts.jsonPropertyReference('title')})) .orderBy('price') // . .slice(0,5) // . .map({snippet: true}) // . .withOptions({returnQueryPlan: true}) // 3. additional options .result() // 4. query evaluation The query definition pipeline in this example uses the following stages: For comparsion, below is a JSearch values query. Observe that it follows the same pattern. In this case, the selected resource is the values in a range index on the price JSON property or XML element. import * as jsearch from '/MarkLogic/jsearch.mjs'; jsearch.values('price') // 1. resource selection .where(cts.parse('by:"mark twain"', // 2. query defn pipeline {by: cts.jsonPropertyReference('author')})) .orderBy('item','descending') // . .slice(0,20) // . .withOptions({qualityWeight: 2}) // 3. additional options .result() // 4. 
query evaluation The query definition pipeline in this values query example uses the following stages: The query definition pipeline is realized through a call chain, as shown in the examples. All pipeline stages are optional, but the order is fixed. The table below summarizes the pipeline stages available for querying each resource type. The stage names are also JSearch method names. Note that two pipelines are available for values and tuples queries: one for retrieving values or tuples from lexicons and another for computing aggregates over the values or tuples. Results can be returned as values (typically, an array) or as an Iterable. The default is values. For example, the default output from a document search has the following form: { results: [resultItems], estimate: totalEstimatedMatches } However, if you request an Iterable object by passing 'iterator' to the result method, then you get the following: { results: iterableOverResultItems, estimate: totalEstimatedMatches } When you request iterable results by calling results('iterator') on the various JSearch APIs, you receive a Sequence in some contexts and a Generator in others. For more information on these constructs, see Sequence in the JavaScript Reference Guide and the definition of Generator in the JavaScript standard: The JSearch library module is primarily designed for JavaScript developers writing MarkLogic applications that initiate document searches and lexicon queries on the server. The same capabilities are available through other server-side interfaces, such as the cts built-in functions and the Search API, but JSearch offers the following advantages for a JavaScript developer: In addition, the design patterns, query styles, and configuration options are similar to those used by the Node.js Client API. Thus, developers creating multi-tier JavaScript applications will find it easy to move between client (or middle) and server tiers when using JSearch. To learn more about the Node.js Client API, see the Node.js Application Developer's Guide. You can use the JSearch API in conjunction with the cts built-in functions, in many contexts. For example: ctsquery constructors to create input queries usable with a JSearch-based document search. For details, see Using cts.query Constructors. cts.referenceconstructors. withOptionsmethod. For details, see Using Options to Control a Query. All the examples in this chapter can be run using Query Console. To configure the sample database and load the sample documents, see the instructions in Preparing to Run the Examples. For more information about Query Console, see the Query Console User Guide or the Query Console help. If your application primarily works with documents in one or more collections, you can use the collections method to create a top level jsearch object that implicitly limits operations by collection. For example, suppose your application is operating on documents in a collection with the URI classics. Including a cts.collectionQuery('classics') in all your query operations can be inconvenient. 
Instead, use the collections method to create a scoped search object through which you can perform all JSearch operations, as shown below: import * as jsearch from '/MarkLogic/jsearch.mjs'; const classics = jsearch.collections('classics'); // implicitly limit results to matches in the 'classics' collection classics.documents() .where(cts.parse('california')) .result() You can use the resulting object everywhere you can use the object returned by the require that brings the JSearch library into scope. You can scope to one or many collections. When you specify multiple collections, the implicit collection query matches documents in any of the collections. For example: import * as jsearch from '/MarkLogic/jsearch.mjs'; // Work with documents in either the "novels" or "poems" collection const books = jsearch.collections(['novels','poems']); The collection scope is ignored on operations for which it makes no sense, such as when constructing a lexicon reference using a helper function like jsearch.elementLexicon. On operations where scope matters, such as documents, values, and words, the implicit cts.collectionQuery is added to a top-level cts.andQuery on every where clause. For more details, see jsearch.collections . To perform a document search, use the jsearch.documents method and the design pattern described in Query Design Pattern. This section outlines how to perform a document search. The search features touched on here are discussed in more detail in the remainder of this chapter. Bring the JSearch library module functions into scope by including a import statement similar to the following in your code. import * as jsearch from '/MarkLogic/jsearch.mjs'; A document search begins by selecting documents as the resource you want to work with by calling the top level documents method. You can invoke this method either on the object created by the require statement, or on a collection-scoped instantiation. // Work with all documents jsearch.documents().where(cts.parse('cat')).result() ... // Work with documents in collections 'coll1' and 'coll2' const myColls = jsearch.collections([coll1,coll2]); myColls.documents().where(cts.parse('cat')).result() ... To learn more about working with collections, see Scoping Operations by Collection Build and execute your search following the pattern described in Query Design Pattern. The following table maps the applicable JSearch methods to the steps in the design pattern. Note that all the pipeline stages in Step 2 are optional, but you must use them in the order shown. For an example, see Example: Basic Document Search. The following is the most minimal JSearch document search, but it has the broadest scope in that it returns the default slice of all documents in the database. jsearch.documents().result() More typically, your search will include at least a where clause that defines the desired set of results. The where method accepts one or more cts.query objects as input and defines your search criteria. For example, the following query matches documents where the author property has the value Mark Twain: jsearch.documents() .where(jsearch.byExample({author: 'Mark Twain'})) .result() You can customize the results by adding orderBy, slice, map, and reduce stages to the operation. For example, you can suppress the search metadata, include snippets instead of (or in addition to) the full documents, extract just a portion of each matching document, or apply a custom content transformation. These and other features are covered elsewhere in this chapter. 
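For instance, here is a small sketch of the "snippets instead of full documents" case just mentioned, reusing the map({snippet: true}) stage that appears in the pipeline example earlier in this chapter; the author criterion is only illustrative.

import * as jsearch from '/MarkLogic/jsearch.mjs';

// Return snippets rather than full documents for the first three matches.
jsearch.documents()
  .where(jsearch.byExample({author: 'Mark Twain'}))
  .slice(0, 3)
  .map({snippet: true})
  .result()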
The following example matches documents that contain an author JSON property with the value Mark Twain, price property with a value less than10, and that are in the /books/ directory. Notice that the search criteria are expressed in several ways; for details, see Creating a cts.query. The search results contain at most the first 3 matching documents ( slice), ordered by the value of the title property ( orderBy). import * as jsearch from '/MarkLogic/jsearch.mjs'; jsearch.documents() .where([ jsearch.byExample({author: 'Mark Twain'}), cts.parse('price LT 10', {price: cts.jsonPropertyReference('price')}), cts.directoryQuery('/books/')]) .orderBy('title') .slice(0,3) .result() This query produces output similar to the following when run against the documents and database configuration described in Preparing to Run the Examples. { "results": [ { "index": 0, "uri": "/books/twain3.json", "score": 16384, "confidence": 0.43934014439583, "fitness": 0.69645345211029, "document": { "title": "Adventures of Huckleberry Finn", "author": "Mark Twain", "edition": { "format": "paperback", "price": 8 }, "synopsis": "The adventures of Huck, a boy ..." } }, { "index": 1, "uri": "/books/twain1.json", "score": 16384, "confidence": 0.43934014439583, "fitness": 0.69645345211029, "document": { "title": "Adventures of Tom Sawyer", "author": "Mark Twain", "edition": { "format": "paperback", "price": 9 }, "synopsis": "Tales of mischief and adventure ..." } } ], "estimate": 2 } By default, the results include search metadata ( uri, score, confidence, fitness, etc.) and the full content of each matched document. You can also choose whether to work with the results embedded in the return value as a value or an Iterable. For example, by default the results are returned in an array: import * as jsearch from '/MarkLogic/jsearch.mjs'; const response = jsearch.documents() .where(jsearch.byExample({author: 'Mark Twain'})) .result(); // or .result('value') response.results.forEach(function (result) { // work with the result object }); By passing iterator as the input to the result method, you can work with the results as an Iterable instead: import * as jsearch from '/MarkLogic/jsearch.mjs'; const response = jsearch.documents() .where(jsearch.byExample({author: 'Mark Twain'})) .result('iterator'); for (const result of response.results) { // work with the result object } For more details, see the following topics: DocumentsSearchin the MarkLogic Server-Side JavaScript Function Reference This section describes the most common ways of creating a cts.query for defining query criteria. Most JSearch operations include a where clause that accepts one or more cts.query objects as input. For example, the documents, values, and tuples methods all return an object with a where method for defining query criteria. This section covers the following topics: The jsearch.byExample method enables you to build queries by modeling the structure of the content you want to match. It enables you to express your search in terms of documents that look like this. This section covers the following topics: JSearch.byExample() and search:by-example() take a query represented as an XML element for XML or as a JSON node or map for JSON and return a cts:query that can be used in any API that takes a cts:query including cts:search(), cts:uris() and the Optic where clause: jsearch.byExample({author: 'Mark Twain'}) Search criteria like the one immediately above are implicitly value queries with exact match semantics in QBE. 
The XQuery equivalent to the preceding JavaScript call is: import module namespace q = "" at "/MarkLogic/appservices/search/qbe.xqy"; q:by-example(<author>Mark Twain</author>) cts:element-value-query(fn:QName("","author"), "Mark Twain", ...) Search criteria like the jsearch.byExample() above are implicitly value queries with exact match semantics in QBE, so the query constructed with byExample above is equivalent to the following cts.query constructor call: // equivalent cts.query constructor call: cts.jsonPropertyValueQuery( 'author', 'Mark Twain', ['case-sensitive','diacritic-sensitive', 'punctuation-sensitive','whitespace-sensitive', 'unstemmed','unwildcarded','lang=en'], 1) QBE provides much of the expressive power of cts.query constructors. For example, you can use QBE keywords in your criteria to construct value, word, and range queries, as well as compose compound queries with logical operators. For a more complete example see Example: Building a Query With byExample. For details, see Searching Using Query By Example. The JSearch byExample method does not use the $response portion of a QBE. This and other QBE features, such as result customization, are provided through other JSearch interfaces. For details, see Differences Between byExample and QBE. The input to jsearch.byExample can be a JavaScript object, XML node, or JSON node. In all cases, the object or node can express either a complete QBE, as described in Searching Using Query By Example, or just the contents of the query portion of a QBE (the search criteria). For convenience, you can also pass in a document that encapsulates an XML or JSON node that meets the preceding requirements. You must use the complete QBE form of input if you need to specify the format or validate QBE flags. For example, all the following are valid inputs to jsearch.byExample: By default, a query expressed as JavaScript object or JSON node will match JSON documents and a query expressed as an XML node will match XML documents. You can use the format QBE flag to override this behavior; for details, see Scoping a Search by Document Type. You must use the XML node (or a corresponding document node wrapper) form to search XML documents that use namespaces as there is no way to define namespaces in the JavaScript/JSON QBE format. This example assumes the database contains documents with the following structure: { "title": "Tom Sawyer", "author" : "Mark Twain", "edition": { "format": "paperback", "price" : 9.99 } } To add similar data to your database, see Preparing to Run the Examples. The following query uses most of the expressive power of QBE and matches the above document. The top level properties in the query object passed to byExample are implicitly AND'd together, so all these conditions must be met by matching documents. Since the query includes range queries on a price property, the database configuration must include an element range index with local name price and type float. import * as jsearch from '/MarkLogic/jsearch.mjs'; jsearch.documents() .where(jsearch.byExample({ "title": { "$value": "adventures of tom sawyer", "$exact": false }, "$near": [ { "author": { "$word": "mark" } }, { "author": { "$word": "twain" } } ], "$distance": 2, "edition": { "$or" : [ { "format": "paperback" }, { "format": "hardback" } ] }, "$and": [ {"price": { "$lt": 10.00 }}, {"price": { "$ge": 8.00 }} ] })) .result() If you run this query using the documents created by Preparing to Run the Examples, the above query should match one document. 
The following table explains the requirements expressed by each component of the query. Each of the subquery types used in this example is explored in more detail in Understanding QBE Sub-Query Types. If you examine the output from byExample, you can see that the generated cts.query is complicated and much more difficult to express than the QBE syntax. For more details, see Searching Using Query By Example. The byExample method of JSearch does not use all parts of a QBE. A full QBE encapsulates search criteria, results refinement, and other options. However, JSearch supports some QBE features through other interfaces like filter and map. If you pass a full QBE to byExample, only the $query, $format, and $validate properties are used. Similarly, if you use an XML QBE, only the query, format, and validate elements are used. When reviewing the QBE documentation or converting QBE queries from client-side code, keep the following differences and restrictions in mind: Use the filter method instead of the QBE $filtered flag to enable filtered search; as in QBE, filtered search can be used to avoid or defer index creation. Use the withOptions method instead of the QBE $score flag to select a scoring algorithm. You cannot use $constraint or $datatype in your queries. Use the map method instead of the QBE $response property to customize results. The following table contains a QBE on the left that uses several features affected by the differences listed above, including $filtered, $score, and $response. The JSearch example on the right illustrates how to achieve the same result by combining byExample with other JSearch features. Use cts.parse to create a cts.query from query text such as cat AND dog that a user might enter in a search text box. The cts.parse grammar is similar to the Search API default string query grammar. For grammar details, see Creating a Query From Search Text With cts:parse. For example, the following code matches documents that contain the word steinbeck and the word california, anywhere. import * as jsearch from '/MarkLogic/jsearch.mjs'; jsearch.documents() .where(cts.parse('steinbeck AND california')) .result() You can use the cts.parse grammar to generate complex queries. The following table illustrates some simple query text strings with their equivalent cts.query constructor calls. You can also bind a keyword to a query-generating function that the parser uses to generate a sub-query when the keyword appears in a query expression. This feature is similar to using pre-defined constraint names in Search API string queries. You can use a built-in function, such as cts.jsonPropertyReference, or supply a custom function that returns a cts.query. For example, you can use a binding to cause the query text by:twain to generate a query that matches the word twain only when it appears in the value of the author JSON property. (In the cts.parse grammar, the colon (:) operator signifies a word query by default.) import * as jsearch from '/MarkLogic/jsearch.mjs'; // bind 'by' to the JSON property 'author' const queryBinding = { by: cts.jsonPropertyReference('author') }; // Perform a search using the bound name in a word query expression jsearch.documents() .where(cts.parse('by:twain', queryBinding)) .result(); You can also define a custom binding function rather than using a pre-defined function such as cts.jsonPropertyReference, as shown in the sketch below. For more details and examples, see Creating a Query From Search Text With cts:parse.
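A custom binding function receives the parsed keyword expression and returns a cts.query for the parser to splice into the overall query. The following is only a sketch: the (operator, values) parameter list and the cheap keyword are illustrative assumptions, so check Creating a Query From Search Text With cts:parse for the exact contract a binding function must satisfy.

import * as jsearch from '/MarkLogic/jsearch.mjs';
// Hypothetical binding: the keyword 'cheap' always generates a price range query,
// regardless of the value the user typed after the colon. Requires a range index
// on the price property. The (operator, values) signature is an assumption here.
const bindings = {
  cheap: function (operator, values) {
    return cts.jsonPropertyRangeQuery('price', '<', 10);
  }
};
jsearch.documents()
  .where(cts.parse('cheap:anything AND twain', bindings))
  .result();

Used this way, query text such as cheap:anything is replaced by whatever cts.query the bound function returns, which lets you hide index details behind user-friendly keywords. You can build a cts.query by calling one or more cts.query constructor built-in functions such as cts.andQuery or cts.jsonPropertyRangeQuery.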
The constructors enable you to compose complex and powerful queries. For example, the following code uses a cts.query constructor built-in function to create a word query that matches documents containing the phrase mark twain in the value of the author JSON property. import * as jsearch from '/MarkLogic/jsearch.mjs'; jsearch.documents() .where( cts.jsonPropertyWordQuery('author', 'mark twain')) .result(); Query constructor built-in functions can be either leaf constructors, such as the one in the above example, or composable constructors. A leaf constructor does not accept cts.query's as input, while a composable constructor does. You can use composable constructors to build up powerful, complex queries. For example, the following call creates a query that matches documents in the database directory /books that contain the phrase huck or the phrase tom in the title property and either have a format property with the value paperback or a price property with a value that is less than 10. cts.andQuery([ cts.directoryQuery('/books/', 'infinity'), cts.jsonPropertyWordQuery('title', ['huck','tom']), cts.orQuery([ cts.jsonPropertyValueQuery('format', 'paperback'), cts.jsonPropertyRangeQuery('price', '<', 10)]) ]) You can pass options to most cts.query constructor built-ins for fine-grained control of each portion of your search. For example, you can specify whether or not a particular word query should be case and diacritic insensitive. For details on available options, see the API reference documentation for each constructor. For more details on constructing cts.query objects, see Composing cts:query Expressions. Search facets provide a summary of the values of a given characteristic across a set of search results. For example, you could query an inventory of appliances and facet on the manufacturer names. Facets can also include counts. The jsearch.facets method enables you to generate search result facets quickly and easily. This section includes the following topics: Search facets can enable your application users to narrow a search by drilling down with search criteria presented by the application. For example, suppose you have an application that enables users to search bibliographic data on books. If the user searches for American authors, the application displays the search results, plus filtering controls that enable the user to narrow the results by author and/or media format. The filtering controls may include both a list of values, such as author names, and the number of items matching each selection. The following diagram depicts such an interaction. Search results are not shown; only the filtering controls are included due to space constraints. The greyed out items are just representative of how an application might choose to display unselected facet values. The filtering categories Author and Media Format represent facets. The author names and formats are values from the author and format facets, respectively. The numbers after each value represent the number of items containing that value. MarkLogic generates facet values and counts from range indexes and lexicons. Therefore, your database configuration must include a lexicon or index for any content feature you want to use as a facet source, such as a JSON property or XML element. Use the JSearch facet method to identify an index from which to source facet data; for details, see Creating a Facet Definition. Use the Jsearch facets method to generate facets from such definitions. 
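A facet definition can name its source either with a simple string (a JSON property name) or, if this behavior is supported in your MarkLogic version, with an explicitly constructed index reference. The following is a minimal sketch; the assumption that jsearch.facet accepts a cts.reference directly should be confirmed in the FacetDefinition reference documentation, and it assumes the range indexes on author and format described in Preparing to Run the Examples.

import * as jsearch from '/MarkLogic/jsearch.mjs';
// Sketch: facet definitions built from explicit index references rather than
// simple property-name strings. Assumes range indexes on author and format.
jsearch.facets([
    jsearch.facet('Author', cts.jsonPropertyReference('author')),
    jsearch.facet('MediaFormat', cts.jsonPropertyReference('format'))])
  .where(cts.directoryQuery('/books/'))
  .result()

The explicit form may be useful when the facet must come from a field, path, or namespaced XML element that a bare string cannot identify.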
Only facet data is returned by default, but you can optionally request matching documents as well; for details, see Retrieving Facets and Content in a Single Operation. The remainder of this section describes how to generate and customize facets in more detail. The primary interfaces for generating facets are the jsearch.facets and jsearch.facet methods. Use the facet method to create a FacetDefinition, then pass your facet definitions to the facets method to create a facet generation operation. As with other JSearch operations, facets are not generated until you call the result method. The following procedure outlines the steps for building a faceting operation. For a complete example, see Example: Generating Facets From JSON Properties. First, create a facet definition for each facet using the facet method. For example, the following call defines a facet labeled Author derived from a range index on the JSON property named author. The database must include a range index on author. jsearch.facet('Author', 'author') A facet definition can include additional configuration. For details, see Creating a Facet Definition. Next, pass your facet definitions to the facets method to create the facet generation operation. For example: jsearch.facets([ jsearch.facet('Author', 'author'), jsearch.facet('MediaFormat', 'format')]) Optionally, include a documents clause to return document search results and contents along with the facets. By default, only the facet data is returned. For example: jsearch.facets([ jsearch.facet('Author', 'author'), jsearch.facet('MediaFormat', 'format')], jsearch.documents()) Optionally, define search criteria using the where method, passing one or more cts.query objects, just as for a document search. For example: jsearch.facets([ jsearch.facet('Author', 'author'), jsearch.facet('MediaFormat', 'format')]) .where(jsearch.byExample({price: {$lt: 15}})) Optionally, apply additional refinements such as withOptions. For example: jsearch.facets([ jsearch.facet('Author', 'author'), jsearch.facet('MediaFormat', 'format')]) .where(jsearch.byExample({price: {$lt: 15}})) .withOptions({maxThreads: 15}) Finally, generate the facets by calling the result method. For example: jsearch.facets([ jsearch.facet('Author', 'author'), jsearch.facet('MediaFormat', 'format')]) .where(jsearch.byExample({price: {$lt: 15}})) .result() For a complete example, see Example: Generating Facets From JSON Properties. For more details, see the following topics in the MarkLogic Server-Side JavaScript Function Reference: FacetDefinition FacetsSearch This example is a simple demonstration of generating facets. The example uses the sample documents and database configuration described in Preparing to Run the Examples. The example generates facets for documents that contain a price property with value less than 15 (jsearch.byExample({price: {$lt: 15}})). Since the search criterion is a range query, the database configuration must include a range index on price. Facets are generated for the matched documents from two content features: the author property and the format property. If your database is configured according to the instructions in Preparing to Run the Examples, then it already includes the indexes needed to run this example. The following query builds up and then evaluates a facet request. Facets are not generated until the result method is evaluated. import * as jsearch from '/MarkLogic/jsearch.mjs'; jsearch.facets([ jsearch.facet('Author', 'author'), jsearch.facet('MediaFormat', 'format')]) .where(jsearch.byExample({price: {$lt: 15}})) .result() Running this query in Query Console produces the following output: {"facets":{ "Author": { "Mark Twain": 2, "John Steinbeck": 1 }, "MediaFormat": { "paperback": 3 }}} Notice that the facets property of the results contains a child property corresponding to each facet definition created by jsearch.facet.
In this case, the documents that met the price < 15 criteria include two documents with an author value of Mark Twain and one document with an author value of John Steinbeck. Similarly, based on the format property, a total of 3 paperbacks meet the price criteria. If you add a documents query, you can retrieve facets and matched documents together. For details, see Retrieving Facets and Content in a Single Operation. The facets method accepts one or more facet definitions as input. Use the jsearch.facet method to create each facet definition. The simplest form of facet definition just associates a facet name with a reference to a JSON property, XML element, field or other index or lexicon. For example, the following facet definition associates the name Author with a JSON property named author. jsearch.facet('Author', 'author') However, you can further customize the facet using a pipeline pattern similar to the one described in Query Design Pattern. The table below describes the pipeline stages available for building a facet definition. All pipeline stages are optional, can appear at most once, and must be used in the order shown. Most stages behave as they do when used with a values query; for details, see ValuesSearch in the MarkLogic Server-Side JavaScript Function Reference. By default, only facet data is returned from a facets request, and the data for each facet is an object containing facetValue:count properties. That is, the default output has the following form: {"facets": { "facetName1": { "facetValue1": count, ... "facetValueN": count, }, "facetNameN": { ... }, }} The facet names come from the facet definition. The facet values and counts come from the index or lexicon referenced in the facet definition. The following diagram shows the relationship between a facet definition and the facet data generated from it. For example, the following output was produced by a facets request that included two facet definitions, named Author and MediaFormat. For details on the input facet definitions, see Example: Generating Facets From JSON Properties. {"facets":{ "Author": { "Mark Twain": 2, "John Steinbeck": 1 }, "MediaFormat": { "paperback": 3 }}} The built-in reducer generates the per facet objects, with counts. If you do not require counts, you can use the map method to bypass the reducer and configure the built-in mapper to omit the counts. For example: import * as jsearch from '/MarkLogic/jsearch.mjs'; jsearch.facets( jsearch.facet('Author', 'author').map({frequency: 'none'})) .where(cts.directoryQuery('/books/')) .result() Running this query on a database configured according to the instructions in Preparing to Run the Examples produces the following output: {"facets": { "Author": ["Mark Twain", "Robert Frost", "John Steinbeck"] }} If you include a documents call in your facets operation, then the output includes both facet data and the results of the document search. The output has the following form: { "facets": { property for each facet }, "documents": [ descriptor for each matched document ] } The documents array items are search result descriptors exactly as returned by a document search. They can include the document contents and search match snippets. For an example, see Example: Generating Facets From JSON Properties. You can pass 'iterator' to your result call to return a Sequence as the value of each facet instead of an object.
For example: import * as jsearch from '/MarkLogic/jsearch.mjs'; const results = jsearch.facets(jsearch.facet('Author', 'author')) .where(cts.directoryQuery('/books/')) .result('iterator') const authors = []; for (const author of results.facets.Author) { authors.push(author) } authors ==> [{"Mark Twain":4, "Robert Frost":1, "John Steinbeck":3}] In this case, the returned Iterable contains only a single item: the object containing the value:count properties for the facet that is produced by the built-in reducer. However, if you use a mapper or a custom reducer, you can have more items to iterate over. For example, the following call chain configures the built-in mapper to return only the facet values, without counts, so returning an iterator results in a Sequence over each facet value (author name, here): import * as jsearch from '/MarkLogic/jsearch.mjs'; const results = jsearch.facets( jsearch.facet('Author', 'author').map({frequency: 'none'})) .where(cts.directoryQuery('/books/')) .result('iterator') const authors = []; for (const author of results.facets.Author) { authors.push(author) } authors ==> ["Mark Twain", "Robert Frost", "John Steinbeck"] If you use groupInto to group the values for a facet into buckets representing value ranges, then the value of the facet is either an object or an Iterable over the bucket descriptors. For example, suppose you generate facets on a price property and get the following values: {"facets":{ "Price": {"8":1, "9":1, "10":1, "16":1, "18":2, "20":1, "30":1} }} You could add a groupInto specification to group the facet values into 3 price range buckets instead, as shown in the following query: jsearch.facets( jsearch.facet('Price','price') .groupInto([ jsearch.bucketName('under $10'), 10, jsearch.bucketName('$10 to $19.99'), 20, jsearch.bucketName('over $20') ])) .where(cts.directoryQuery('/books/')) .result(); Now, the generated facets are similar to the following: {"facets": { "Price": { "under $10": { "value": { "minimum": 8, "maximum": 9, "upperBound": 10 }, "frequency": 2 }, "$10 to $19.99": { "value": { "minimum": 10, "maximum": 18, "lowerBound": 10, "upperBound": 20 }, "frequency": 4 }, "over $20": { "value": { "minimum": 20, "maximum": 30, "lowerBound": 20 }, "frequency": 2 } } } } For details, see Grouping Values and Facets Into Buckets. As mentioned in Introduction to Facets, facet results include a count (or frequency) by default. You can use FacetDefinition.orderBy to sort the results for a given facet by this frequency. Including an explicit sort order in your facet definition changes the structure of the results. For example, the following query, which does not use orderBy, produces a set of facet values on author, in the form of a JSON object. This is the default behavior. Since the facet is an object with facetValue:count properties, the facet values are effectively unordered. import * as jsearch from '/MarkLogic/jsearch.mjs'; jsearch.facets([ jsearch.facet('Author', 'author')]) .where(jsearch.byExample({price: {$lt: 50}})) .result(); // Produces the following output: // {"facets":{ // "Author":{ // "John Steinbeck":3, // "Mark Twain":4, // "Robert Frost":1} // }} If you add an orderBy clause to the facet definition, then the value of the facet is an array of arrays, where each inner array is of the form [ item_value , count ]. The array items are ordered by the frequency.
For example: import * as jsearch from '/MarkLogic/jsearch.mjs'; jsearch.facets([ jsearch.facet('Author', 'author').orderBy('frequency') ]) .where(jsearch.byExample({price: {$lt: 50}})) .result(); // Produces the following output: // {"facets":{ // "Author":[ // ["Mark Twain", 4], // ["John Steinbeck", 3], // ["Robert Frost", 1] // ] // }} You can also sort by item (the value of author in our example), and choose whether to list the facet values in ascending or descending order. For example, if you use the orderBy clause orderBy('item', 'descending'), then you get the following output: {"facets":{ "Author":[ ["Robert Frost", 1], ["Mark Twain", 4], ["John Steinbeck", 3] ] }} If the default structure does not meet the needs of your application, you can modify the output using a custom mapper. For more details, see Transforming Results with Map and Reduce. By default, the result of facet generation does not include content from the documents from which the facets are derived. Add snippets, complete documents, or document projections to the results by including a documents query in your facets call. For example, the following query returns both facets and snippets for documents that contain a price property with a value less than 15: import * as jsearch from '/MarkLogic/jsearch.mjs'; jsearch.facets([ jsearch.facet('Author', 'author'), jsearch.facet('MediaFormat', 'format')], jsearch.documents()) .where(jsearch.byExample({price: {$lt: 15}})) .result() Running this query against the database created by Preparing to Run the Examples produces the following output. Notice the output includes facets on author and format, plus the document search results containing snippets (in the matches property). { "facets": { "Author": { "Mark Twain": 2, "John Steinbeck": 1 }, "MediaFormat": { "paperback": 3 } }, "documents": [ { "uri": "/books/twain1.json", "path": "fn:doc(\"/books/twain1.json\")", "index": 0, "matches": [ { "path": "fn:doc(\"/books/twain1.json\")/edition/number-node(\"price\")", "matchText": [ { "highlight": "9" } ] } ] }, ...additional documents... ], "estimate": 3 } The matches property of each documents item contains the snippets. For example, if the above facets results are saved in a variable named results, then you can access the snippets for a given document through results.documents[n].matches. To include the complete documents in your facet results instead of just snippets, configure the built-in mapper on the documents query to extract all. For example: import * as jsearch from '/MarkLogic/jsearch.mjs'; jsearch.facets([ jsearch.facet('Author', 'author'), jsearch.facet('MediaFormat', 'format')], jsearch.documents().map({extract:{selected:'all'}})) .where(jsearch.byExample({price: {$lt: 15}})) .result() In this case, you access the document contents through the extracted property of each document. For example, results.documents[n].extracted. The extracted property value is an array because you can potentially project multiple subsets of content out of the matched document using the map and reduce features. For details, see Extracting Portions of Each Matched Document. The documents query can include where, orderBy, filter, slice, map/reduce, and withOptions qualifiers, just as with a standalone document search. For details, see Document Search Basics. The document search combines the queries in the where qualifier of the facets query, the where qualifier of the documents query, and any othersWhere queries on facet definitions into a single AND query.
For example, the following facets query uses all three query sources. import * as jsearch from '/MarkLogic/jsearch.mjs'; jsearch.facets([ jsearch.facet('Author', 'author'), jsearch.facet('MediaFormat','format') .othersWhere(jsearch.byExample({format: 'paperback'}))], jsearch.documents() .where(jsearch.byExample({author: 'Mark Twain'}))) .where(jsearch.byExample({price: {$lt: 20}})) .result() This query has the following effect on the returned results: All facets and the document search only consider documents where price is less than 20. From this part of the query: jsearch.facets(...).where(jsearch.byExample({price: {$lt: 20}})). For all facets other than the facet on format, only return facet values for documents where format is paperback. From this part of the query: jsearch.facet('MediaFormat','format').othersWhere(jsearch.byExample({format: 'paperback'})) The document search only matches documents where author is Mark Twain. From this part of the query: jsearch.documents().where(jsearch.byExample({author: 'Mark Twain'})) Thus, the query only returns matches where all the following conditions are met: price < 20 and format is paperback and author is Mark Twain. You can use the returnQueryPlan option to explore this relationship. For example, adding a withOptions call to the documents query as shown below returns the following information in the results: ... jsearch.documents() .where(jsearch.byExample({author: 'Mark Twain'})) .withOptions({returnQueryPlan: true}) ... ==> results.queryPlan includes the following information (reformatted for readability) Search query contributed 3 constraints: cts.andQuery([ cts.jsonPropertyRangeQuery("price", "<", xs.float("20"), [], 1), cts.jsonPropertyValueQuery("format", "paperback", ["case-sensitive","diacritic-sensitive","punctuation-sensitive", "whitespace-sensitive","unstemmed","unwildcarded","lang=en"], 1), cts.jsonPropertyValueQuery("author", "Mark Twain", ["case-sensitive","diacritic-sensitive","punctuation-sensitive", "whitespace-sensitive","unstemmed","unwildcarded","lang=en"], 1) ], []) Use the FacetDefinition.othersWhere method to efficiently vary facet values across user interactions and deliver a more intuitive faceted navigation user experience. Imagine an application that enables users to filter a search using facet-based filtering controls. Each time a user interacts with the filtering controls, the application makes a request to MarkLogic to retrieve new search results and facet values that reflect the current search criteria. A naive implementation might apply the selection criteria across all facets and document results. However, this causes values to drop out of the filtering choices, making it more difficult for users to be aware of other choices or to change the filters. The application could generate the values for each facet and for the matching documents independently, but this is inefficient because it requires multiple requests to MarkLogic. A better approach is to use the othersWhere method to apply criteria asymmetrically to the facets and collectively to the document search portion. The following example uses othersWhere to generate facet values for two selection criteria, an author value of Mark Twain and a format value of paperback: import * as jsearch from '/MarkLogic/jsearch.mjs'; jsearch.facets( [jsearch.facet('Author', 'author') .othersWhere(jsearch.byExample({author: 'Mark Twain'})), jsearch.facet('MediaFormat', 'format') .othersWhere(jsearch.byExample({format: 'paperback'}))], jsearch.documents()) .where(cts.directoryQuery('/books/')) .result() When each facet applies othersWhere to selection criteria based on itself, you get multi-facet interactions. For example, the above query returns the following results. Thanks to the use of othersWhere on each facet definition, the author facet values are unaffected by the Mark Twain selection and the format facet values are unaffected by paperback selection. The document search is affected by both.
{"facets":{ "Author":{"John Steinbeck":1, "Mark Twain":2, "Robert Frost":1}, "MediaFormat":{"hardback":2, "paperback":2}}, "documents":[ ...snippets for docs matching both criteria... ] } If you pass the criteria in through the where method instead, some facet values drop out, making it more difficult for users to see the available selections or to change selections. For example, the following query puts the author and format criteria in the where call, resulting in the facet values shown: import * as jsearch from '/MarkLogic/jsearch.mjs'; jsearch.facets( [jsearch.facet('Author', 'author'), jsearch.facet('MediaFormat', 'format')], jsearch.documents()) .where([cts.directoryQuery('/books/'), jsearch.byExample({author: 'Mark Twain'}), jsearch.byExample({format: 'paperback'})]) .result() ==> {"facets":{ "Author":{"Mark Twain":2}, "MediaFormat":{"paperback":2}}, "documents":[ ...snippets for docs matching both criteria... ] The differences in these two approaches are explored in more detail in Example: Multi-Facet Interactions Using othersWhere. The JSearch API also includes a FacetDefinition.thisWhere modifier which has the opposite effect of othersWhere: The selection criteria is applied only to the subject facet, not to any other facets or to the document search. For details, see FacetDefinition.thisWhere in the MarkLogic Server-Side JavaScript Function Reference. This example explores the use of othersWhere to enable search selection criteria to affect related facets asymmetrically, as described in Multi-Facet Interactions Using othersWhere. This example assumes the database configuration and content described in Preparing to Run the Examples. Suppose you have an application that enables users to search for books, and the application displays facets on author and format (hardback, paperback, etc.) that can be used to narrow a search. The following diagram contrasts two possible approaches to implementing such a faceted navigation control. The middle column represents a faceted navigation control when the user's selection criteria are applied symmetrically to all factes through the where method. The rightmost column represents the same control when the user's criteria are applied asymmetrically using othersWhere. Notice that, in the rightmost column, the user can always see and select alternative criteria. The remainder of this example walks through the code that backs the results in both columns. Before the user selects any criteria, the baseline facets are generated with the following request. Facet values are generated for the author and format JSON properties. The documents in the /books/ directory seed the initial search results that the user can drill down on. (Matched documents are not shown.) // baseline - no selection criteria import * as jsearch from '/MarkLogic/jsearch.mjs'; jsearch.facets([ jsearch.facet('Author', 'author'), jsearch.facet('MediaFormat', 'format') ], jsearch.documents()) .where(cts.directoryQuery('/books/')) .result() Consider the case where the user then selects an author, and the application applies the selection criteria unconditionally, resulting in the following filtering control changes: The user can no longer readily see the other available authors. These results were generated by the following query, where the cts.directoryQuery query represents the baseline search, and the jsearch.byExample query represents the user selection. Passing the author query to the where method applies it to all facets and the document search. 
import * as jsearch from '/MarkLogic/jsearch.mjs'; jsearch.facets( [jsearch.facet('Author', 'author'), jsearch.facet('MediaFormat', 'format')], jsearch.documents()) .where([cts.directoryQuery('/books/'), jsearch.byExample({author: 'Mark Twain'})]) .result() By moving the author query to an othersWhere modifier on the author facet, you can apply the selection to other facets, such as format, and to the document search, but leave the author facet unaffected by the selection criteria. For example: import * as jsearch from '/MarkLogic/jsearch.mjs'; jsearch.facets( [jsearch.facet('Author', 'author') .othersWhere(jsearch.byExample({author: 'Mark Twain'})), jsearch.facet('MediaFormat', 'format')], jsearch.documents()) .where(cts.directoryQuery('/books/')) .result() Using othersWhere instead of where to pass the criteria results in the following display. The user can clearly see the alternative author choices and the number of items that match each one. Yet, the user can still see how his author selection affects the available media formats and the matching documents. The diagram below illustrates how the application might display the returned facet values. Snippets are returned for all documents with Mark Twain as the author. If the user chooses to further filter on the paperback media format, you can use othersWhere on the format facet to apply this criterion to the author facet values and the document search, but leave all the format facet values available. For example: import * as jsearch from '/MarkLogic/jsearch.mjs'; jsearch.facets( [jsearch.facet('Author', 'author') .othersWhere(jsearch.byExample({author: 'Mark Twain'})), jsearch.facet('MediaFormat', 'format') .othersWhere(jsearch.byExample({format: 'paperback'}))], jsearch.documents()) .where(cts.directoryQuery('/books/')) .result() The above query results in the following display. The user can easily see and select a different author or format. The matched documents are not shown, but they consist of documents that match both the author and format selections. Use the orderBy function to control the order in which your query results are returned. You can apply an orderBy clause to a document search, word lexicon query, values query, or tuples query. Though you can use orderBy with all these query types, the specifics vary. For example, you can only specify content-based sort keys in a document search, and you can only choose between item order and frequency order on a values or tuples query. This section covers the following topics. By default, search results are returned in relevance order, with most relevant results displayed first. That is, the sort key is the relevance score and the sort order is descending. You can use the DocumentsSearch.orderBy method to change the sort key and ordering (ascending/descending). You can sort the results by features of your content, such as the value of a specified JSON property, and by attributes of the match, such as fitness, confidence, or document order. You must configure a range index for each JSON property, XML element, XML attribute, field, or path on which you sort. For example, the following code sorts results by value of the JSON property named title. A range index for the title property must exist. import * as jsearch from '/MarkLogic/jsearch.mjs'; jsearch.documents() .where(jsearch.byExample({'author': { '$word': 'twain' }})) .orderBy('title') .result(); The use of a simple name in the orderBy call implies a cts.jsonPropertyReference. You can also explicitly construct a cts.reference by calling an index reference constructor such as cts.jsonPropertyReference, cts.elementReference, cts.fieldReference, or cts.pathReference.
For example, the following call specifies ordering on the JSON property price: orderBy(cts.jsonPropertyReference('price')) To sort results based on search metadata such as confidence, fitness, and quality, use the cts.order constructors. For example, the following orderBy specifies sorting by confidence rather than relevance score: orderBy(cts.confidenceOrder()) You can also use the cts.order constructors to control whether results are sorted in ascending or descending order with respect to a sort key. For example, the following call sorts by the JSON property price, in ascending order: orderBy( cts.indexOrder(cts.jsonPropertyReference('price'), 'ascending')) You can specify more than one sort key. When there are multiple keys, they're applied in the order they appear in the array passed to orderBy. For example, the following call says to first order results by the price JSON property values, and then by the title values. orderBy(['price', 'title']) For details, see DocumentsSearch.orderBy in the MarkLogic Server-Side JavaScript Function Reference and Sorting Searches Using Range Indexes in the Query Performance and Tuning Guide. By default, values and tuples query results are returned in ascending item order. You can use the ValuesSearch.orderBy and TuplesSearch.orderBy methods to specify whether to order the results by value (item order) or frequency, and whether to use ascending or descending order. For example, the following query returns all the values of the price JSON property, in ascending order of the values: import * as jsearch from '/MarkLogic/jsearch.mjs'; jsearch.values('price').result() ==> [8, 9, 10, 16, 18, 20, 30] The following code modifies the query to return the results in frequency order. By default, frequency order returns results in descending order (most to least frequent). In this case, the database contained multiple documents with price 18, and only a single document containing each of the other price points, so the 18 value sorted to the front of the result array, and the remaining values that share the same frequency appear in document order. import * as jsearch from '/MarkLogic/jsearch.mjs'; jsearch.values('price').orderBy('frequency').result() ==> [18, 8, 9, 10, 16, 20, 30] To order the results by ascending frequency value, pass 'ascending' as the second parameter of orderBy. For example: orderBy('frequency', 'ascending') You can also include the frequency values in the results using the map or reduce methods. For details, see Querying the Values in a Lexicon or Index. When you query a word lexicon using the jsearch.words resource selector method, results are returned in ascending order. Use the WordsSearch.orderBy method to control whether the results are returned in ascending or descending order. For example, the following query returns the first 10 results in the default (ascending) order: import * as jsearch from '/MarkLogic/jsearch.mjs'; jsearch.words('title').result() ==> ["Adventures", "and", "Collected", "East", "Eden", "Finn", "Grapes", "Huckleberry", "Men", "Mice"] You can use orderBy to change the order of results. For example, the following call returns the 10 results when the words in title are sorted in descending order: import * as jsearch from '/MarkLogic/jsearch.mjs'; jsearch.words('title').orderBy('descending').result() ==> ["Wrath", "Works", "Tom", "The", "Sawyer", "Of", "of", "Mice", "Men", "Huckleberry"] Note that this example assumes the database configuration includes a word lexicon on the title JSON property. 
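If you also want only the words contributed by a subset of documents, a where qualifier can be combined with the word lexicon query. The following sketch assumes that WordsSearch supports the same where qualifier as the values and tuples operations and that the word lexicon on title exists; verify this against the WordsSearch reference documentation.

import * as jsearch from '/MarkLogic/jsearch.mjs';
// Sketch: restrict the word lexicon query to documents in the /books/ directory,
// then return the words in descending order. The availability of where on a
// words query is an assumption here.
jsearch.words('title')
  .where(cts.directoryQuery('/books/'))
  .orderBy('descending')
  .result()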
For more details on querying word lexicons, see Querying Values in a Word Lexicon. When you generate facets with frequencies using jsearch.facets, the values of each facet are expressed as a JSON object, so they are effectively unordered. You can use FacetDefinition.orderBy to control the sort order and change the output to a structure that can be meaningfully ordered (an array of arrays). For more details, see Sorting Facet Values with OrderBy. You can use the slice method to return a subset of the results from a top level documents, values, tuples, or words query, or when generating facets. A slice specification works like Array.slice and has the following form: slice(firstPosToReturn, lastPosToReturn + 1) The positions use a 0-based index. That is, the first item is position 0 in the result list. Thus, the following returns the first 3 documents in the classics collection: import * as jsearch from '/MarkLogic/jsearch.mjs'; const classics = jsearch.collections('classics'); classics.documents() .slice(0,3) .result() You cannot request items past the end of the result set, so it is possible to get fewer than the requested number of items back. When the search results are exhausted, the results property of the return value is null, just as for a search which matches nothing. For example: { results: null, estimate: 4 } Applying slice iteratively to the same query enables you to return successive pages of results. For example, the following code iterates over search results in blocks of three results at a time: import * as jsearch from '/MarkLogic/jsearch.mjs'; const sliceStep = 3; // max results per batch let sliceStart = 0; let sliceEnd = sliceStep; let response = {}; do { response = jsearch.documents().slice(sliceStart, sliceEnd).result(); if (response.results != null) { // do something with the results sliceStart += response.results.length; sliceEnd += sliceStep; } } while (response.results != null); You can set the slice end position to zero to suppress returning results when you're only interested in query metadata, such as the estimate or when using returnQueryPlan:true. For example, the following returns the estimate without results: import * as jsearch from '/MarkLogic/jsearch.mjs'; jsearch.documents() .where(cts.jsonPropertyValueQuery('author', 'Mark Twain')) .slice(0,0) .result() ==> { results: null, estimate: 4 } For details, see the following methods: When you perform a document search using jsearch.documents, the result is an array or Iterable over descriptors of each match. Each descriptor includes the contents of the matching document by default. You can use snippeting to include a portion of the content around the match in each result, instead of (or in addition to) the complete document. This section covers the following topics: You can include snippets in a document query by adding a map clause to your query that sets the built-in mapper configuration property snippet to true or setting snippet to a configuration object, as described in Configuring the Built-In Snippet Generator. (Snippets are generated by default when you include any document query in a jsearch.facets operation.)
For example, the following query matches occurrences of the word california and returns the default snippets instead of the matching document: import * as jsearch from '/MarkLogic/jsearch.mjs'; jsearch.documents() .where(jsearch.byExample({synopsis: {$word: 'california'}})) .map({snippet: true}) .result() ==> {"results":[ {"score":28672, "fitness":0.681636929512024, "uri":"/books/steinbeck1.json", "path":"fn:doc(\"/books/steinbeck1.json\")", "confidence":0.529645204544067, "index":0, "matches":[{ "path":"fn:doc(\"/books/steinbeck1.json\")/text(\"synopsis\")", "matchText":[ "...from their homestead and forced to the promised land of ", {"highlight":"California"}, "." ] }] }, { ... }, ... ], "estimate":3 } If this was a default search (no snippets), there would be a document property instead of the matches property, as shown in Example: Basic Document Search. For more details, see DocumentsSearch.map. You can configure the built-in snippet generator by setting the built-in mapper snippet property to a configuration object instead of a simple boolean vaue. You can set the following snippet configuration properties: For example, the following configuration only returns snippets for matches occurring in the synopsis property and surrounds the highlighted matching text by at most 5 tokens. import * as jsearch from '/MarkLogic/jsearch.mjs'; jsearch.documents() .where(cts.wordQuery('california')) .map({snippet: { preferredMatches: ['synopsis'], perMatchTokens: 5 }}) .result() Thus, if the word query for occurrences of california matched text in both the title and synopsis for some documents, only the matches in synopsis are returned. Also, the snippet match text is shorter, as shown below. // match text in snippet with default perMatchTokens "matchText":[ "...an unlikely pair of drifters who move from job to job as farm laborers in ", {"highlight":"California"}, ", until it all goes horribly awry." ] // match text in snippet with perMatchTokens set to 5 "matchText":[ "...farm laborers in ", {"highlight":"California"}, ", until it..." ] When snippeting over XML documents and using preferredMatches, use a QName rather than a simple string to specify namespace-qualified elements. For example: {snippet: { preferredMatches: [fn.QName('/my/namespace','synopsis')] }} For more details, see DocumentsSearch.map. To return snippets and complete documents or document projections together, set snippet to true and configure the extract property of the built-in mapper to select the desired document contents. For details about extract, see Extracting Portions of Each Matched Document. The following example returns the entire matching document in an extracted property and the snippets in the matches property of the results: import * as jsearch from '/MarkLogic/jsearch.mjs'; jsearch.documents() .where(jsearch.byExample({synopsis: {$word: 'California'}})) .map({snippet: true, extract: {selected: 'all'}}) .result() ==> {"results":[ {"score":28672, "fitness":0.681636929512024, "uri":"/books/steinbeck1.json", "path":"fn:doc(\"/books/steinbeck1.json\")", "extracted":[{ "title":"The Grapes of Wrath", "author":"John Steinbeck", "edition":{"format":"paperback", "price":9.99}, "synopsis":"Chronicles the 1930s Dust Bowl migration of one Oklahoma farm family, from their homestead and forced to the promised land of California." 
}] "confidence":0.529645204544067, "index":0, "matches":[{ "path":"fn:doc(\"/books/steinbeck1.json\")/text(\"synopsis\")", "matchText":[ "...from their homestead and forced to the promised land of ", {"highlight":"California"}, "." ] }] }, { ... }, ... ], "estimate":3 } For more details, see DocumentsSearch.map. If the snippets and projections generated by the built-in mapper do not meet the needs of your application, you can use a custom mapper to generate customized results. For details, see Transforming Results with Map and Reduce. You can use the jsearch.documentSelect method to generate snippets from an arbitrary set of documents, such as the output from cts.search or fn.doc. The output is a Sequence of results. If the input is the result of a search that matches text, then the results include search result metadata such as score, along with your snippets. Search metadata is not included if the input is an arbitrary set of documents or the result of a search that doesn't match text, such as a collection or directory query. You must include a query in the snippet configuration when using documentSelect so the snippeter has search matches against which to generate snippets. You can also include the other properties described in Configuring the Built-In Snippet Generator. The following example uses documentSelect to generate snippets from the result of calling cts.search (instead of jsearch.documents). import * as jsearch from '/MarkLogic/jsearch.mjs'; const myQuery = cts.andQuery([ cts.directoryQuery('/books/'), cts.jsonPropertyWordQuery('synopsis', 'california')]) jsearch.documentSelect( cts.search(myQuery), {snippet: {query: myQuery}}) You can use the built-in mapper of document search to return selected portions of each document that matches a search. You can use the extraction feature with jsearch.documents and jsearch.documentSelect. This section includes the following topics: By default, a document search returns the complete document for each search match. You can use extract feature of the built-in documents mapper to extract only selected parts of each matching document instead. Such a subset of the content in a document is sometimes called a sparse document projection. This feature is similar to the query option extract-document-data. available to the XQuery Search API and the Client APIs. You use XPath expressions to identify the portions of the document to include or exclude. XPath is a standard expression language for addressing XML content. MarkLogic has extended XPath so you can also use it to address JSON. For details, see Traversing JSON Documents Using XPath in the Application Developer's Guide and XPath Quick Reference in the XQuery and XSLT Reference Guide. To generate sparse projections, configure the extract property of the built-in mapper of a document search. The property has the following form: extract: { paths: xPathExpr | [xPathExprs], selected: 'include' | 'include-with-ancestors' | 'exclude' | 'all' } Specify one or more XPath expressions in the paths value; use an array for specifying multiple expressions. The selected property controls how the content selected by the paths affects the document projection. The selected property is optional and defaults to 'include' if not present; for details, see How selected Affects Extraction. For example, the following code extracts just the title and author properties of documents containing the word California in the synopsis property. 
import * as jsearch from '/MarkLogic/jsearch.mjs'; jsearch.documents() .where(jsearch.byExample({synopsis: {$word: 'California'}})) .map({extract: {paths: ['/title', '/author']}}) .result() The table below displays the default output of the query (without a mapper) on the left and the result of using the example extraction on the right. Notice that the document property that contains the complete document contents has been replaced with an extracted property that contains just the requested subset of content. When extracting XML content that uses namespaces, you can use namespace prefixes in your extract paths. Define the prefix bindings in the namespaces property of the mapper configuration object. For example, the following configuration binds the prefix my to the namespace URI /my/namespace, and then uses the my prefix in an extract path. import * as jsearch from '/MarkLogic/jsearch.mjs'; jsearch.documentSelect(fn.doc('/books/some.xml'), { namespaces: {my: '/my/namespace'}, extract: {paths: ['/my:book/my:title']} }) Since the extraction feature is a capability of the built-in mapper for a document search, you cannot use it when using a custom mapper. If you want to return document subsets when using a custom mapper, you must construct the projections yourself. For more details on using and configuring mappers, see Transforming Results with Map and Reduce. The selected property of the extract configuration for DocumentsSearch.map determines what to include in the extracted content. By default, the extracted content includes only the content selected by the path expressions. However, you can use the select property to configure these alternatives: For example, the documents loaded by Preparing to Run the Examples have the following form: { "title": string, "author": string, "edition": { "format": string, "price": number }, "synopsis": string} The table below illustrates how various selected settings affect the extraction of the title and price properties. The first row ( 'include') also represents the default behavior when selected is not explicitly set. If the combination of paths and select selects no content for a given document, then the results contain an extractedNone property instead of an extracted property. For example: import * as jsearch from '/MarkLogic/jsearch.mjs'; jsearch.documents() .where(jsearch.byExample({synopsis: {$word: 'California'}})) .map({extract: {paths: ['/no/matches'], selected: 'include'}}) .result() ==> {"results":[ { ..., "extractedNone":true, ... }]} By default, snippets are not generated when you use extraction, but you can configure your search to return both snippets and extracted content by setting snippet to true in the mapper configuration. 
For example, the following search: import * as jsearch from '/MarkLogic/jsearch.mjs'; jsearch.documents() .where(jsearch.byExample({synopsis: {$word: 'California'}})) .map({snippet: true, extract: {paths: ['/title', '/author']}}) .result() Produces output similar to the following, with the document projections in the extracted property and the snippets in the matches property: { "results": [ { "score": 18432, "fitness": 0.71398365497589, "uri": "/books/steinbeck1.json", "path": "fn:doc(\"/books/steinbeck1.json\")", "extracted": [ { "title": "The Grapes of Wrath" }, { "author": "John Steinbeck" } ], "confidence": 0.4903561770916, "index": 0, "matches": [{ "path": "fn:doc(\"/books/steinbeck1.json\")/text(\"synopsis\")", "matchText": [ "...from their homestead and forced to the promised land of ", { "highlight": "California" }, "." ] }] }, ...] } Similarly, you can include both snippet and extract specifications in the configuration for jsearch.documentSelect. For example: import * as jsearch from '/MarkLogic/jsearch.mjs'; jsearch.documentSelect( cts.search(cts.jsonPropertyWordQuery('synopsis', 'California')), {snippet: { query: cts.jsonPropertyWordQuery('synopsis', 'California') }, extract: {paths: ['/title', '/author'], selected: 'include'} } ) For more details on snippeting, see Including Snippets of Matching Content in Search Results. You can control a document search with options in two ways: options passed to an individual query constructor, and options applied to the entire search through the DocumentsSearch.withOptions method. Other JSearch operations, such as lexicon searches, use a similar convention for passing options to a specific query or applying them to the entire operation. For example, the following query uses the query-specific $exact option of QBE to disable exact match semantics on the value query constructed with jsearch.byExample. However, this setting has no effect on the query constructed by cts.jsonPropertyValueQuery or on the top level cts.orQuery. import * as jsearch from '/MarkLogic/jsearch.mjs'; jsearch.documents() .where(cts.orQuery([ jsearch.byExample({author: {$value: 'mark twain', $exact: false}}), cts.jsonPropertyValueQuery('author', 'john steinbeck') ])) .result() The available per-query options depend on the type of query. The mechanism for specifying per-query options depends on the construction method you choose. For details, consult the appropriate API reference. For example, cts.jsonPropertyValueQuery accepts a set of options as a parameter. Through these options you can control attributes such as case sensitivity or whether to enable stemming: cts.jsonPropertyValueQuery( 'author', 'mark twain', ['case-insensitive', 'lang=en']) Options that can apply to the entire search are specified using the withOptions method. For example, you can use withOptions to pass options to the underlying cts.search operation of a documents search: import * as jsearch from '/MarkLogic/jsearch.mjs'; jsearch.documents() .where(cts.jsonPropertyValueQuery('author','mark twain')) .withOptions({search: ['format-xml','score-logtf']}) .result() For more details, see the following methods: Note that, specifically in the case of passing options through to cts.search, some commonly used options are surfaced directly through JSearch methods, such as the DocumentsSearch.filter method. You should use the JSearch mechanism when this overlap is present.
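For example, filtered search is requested by calling the filter method on the search rather than by passing a pass-through option. The following is a minimal sketch; it assumes that calling filter with no arguments turns filtering on, so confirm the exact parameters in the DocumentsSearch.filter reference documentation.

import * as jsearch from '/MarkLogic/jsearch.mjs';
// Sketch: request filtered (more accurate, but slower) search through the
// JSearch filter method instead of a cts.search pass-through option. The
// no-argument call is an assumption; check DocumentsSearch.filter for details.
jsearch.documents()
  .where(cts.jsonPropertyValueQuery('author', 'mark twain'))
  .filter()
  .result()

The top level JSearch query options such as documents, values, tuples, and words include map and reduce methods you can use to tailor the final results in a variety of ways, such as including snippets in a document search or applying a content transformation.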
This section includes the following topics: The top level JSearch operations for document search ( documents) and lexicon queries ( values, tuples, and words) include map and reduce methods for customizing your query results. You can choose to use either map or reduce, but not both. A mapper takes in a single value and produces zero results or one result. The mapper is invoked once for each item (search result, value, or tuple) processed by the query operation. The output from the mapper is pushed on to the results array. A mapper is well suited for applying transformations to results. In broad strokes, a reducer takes in a previous result and a single value and returns either an item to pass to next invocation of the reducer, or a final result. The output from the final invocation becomes the result. Reducers are well suited for computing aggregates over a set of results. You can supply a custom mapper or reducer by passing a function reference to the map or reduce method. Some operations also have a built-in mapper and/or reducer that you can invoke by passing a configuration object in to the map or reduce method. For example, the built-in mapper for document search can be used to generate snippets. Thus, your map or reduce call can have one of the following forms: // configure the built-in mapper, if supported .map({configProperties...}) // use a custom mapper .map(function (currentItem) {...}) // configure the built-in reducer, if supported .reduce({configProperties...}) // use a custom reducer .reduce(function (prevResult, currentItem, index, state) {...}) The available configuration properties and behavior of the built-in mapper and reducer depend on the operation you apply map or reduce to. For details, see Configuring the Built-In Mapper. The following methods support map and reduce operations. For configuration details, see the MarkLogic Server-Side JavaScript Function Reference. The capabilities of the built-in mapper vary, depending on the type of query operation ( documents, values, or tuples). For example, the built-in mapper for a document search can be configured to generate snippets and document projections, while the built-in mapper on a values query can be configured to include frequency values in the results. Configure the built-in mapper by passing a configuration object to the map method instead of a function reference. For example, the following call chain configures the built-in mapper for document search to return snippets: jsearch.documents().map({snippet:true}).result() The table below outlines the capabilities of the built-in mapper for each JSearch query operation. You can supply a custom mapper to the map method of the documents, values, tuples, and words queries. To use a custom mapper, pass a function reference to the map method in your query call chain: ... .map(funcRef) The mapper function must have the following signature and should produce either no result or a single result. If the function returns a value, it is pushed on to the final results array or iterator. function (currentItem) The currentItem parameter can be a search result, tuple, value, or word, depending on the calling context. For example, the mapper on a document search (the documents method) takes a single search result descriptor as input. Any value returned by your mapper is pushed on to the results array. The following example uses a custom mapper on a document search to add a property named iWasHere to each search result. The input in this case is the search result for one document. 
import * as jsearch from '/MarkLogic/jsearch.mjs'; jsearch.documents() .where(cts.jsonPropertyValueQuery('author','Mark Twain')) .map(function (value) {value.iWasHere = true; return value;}) .result() ==> {"results":[ {"index":0, "uri":"/books/twain4.json", "score":14336, "confidence":0.3745157122612, "fitness":0.7490314245224, "document":{...}, "iWasHere":true }, {"index":1, ...}, ... ], "estimate":4 } Your mapper is not required to return a value. If you return nothing or explicitly return undefined, then the final results will contain no value corresponding to the current input item. For example, the following mapper eliminates every other search result: import * as jsearch from '/MarkLogic/jsearch.mjs'; jsearch.documents() .map(function (value) { if (value.index % 2 > 0) { return value; } }) .result().results.length If your database contains only the documents from Preparing to Run the Examples, then the script should produce the answer 4 when run in Query Console. For an additional example, see Example: Using a Custom Mapper for Content Transformation. The capabilities of the built-in reducer vary, depending on the type of query operation. Currently, only values offers a built-in reducer. Configure the built-in reducer by passing a configuration object to the reduce method instead of a function reference. For example, the following configures the built-in reducer for a values query to return item frequency data along with the values: jsearch.values('price').reduce({frequency: 'item'}).result() The table below outlines the capabilities of the built-in reducer for each JSearch query operation. To use a custom reducer, pass a function reference and optional initial seed value to the reduce method of your query call chain: ... .reduce(funcRef, seedValue) The reducer function must have the following signature: function (prevResult, currentItem, index, state) If you pass a seed value, it becomes the value of prevResult on the first invocation of your function. For example, the following reduce call seeds an accumulator object with initial values. On the first call to myFunc, prevResult contains {count: 0, value: 0, matches: []}. ... .reduce(myFunc, {count: 0, value: 0, matches: []}) ... For example, the following call chain uses a custom reducer with an initial seed value as part of a document search. import * as jsearch from '/MarkLogic/jsearch.mjs'; jsearch.documents() .where(cts.jsonPropertyValueQuery('author','Mark Twain')) .reduce(function (prev, match, index, state) { // do something }, {count: 0, value: 0, matches: []}) .result() The value returned by the last invocation of your reducer becomes the final result of the query. You can detect or signal the last invocation through state.isLast. The following table describes the inputs to the reducer function: Note that the map and reduce methods are exclusive of one another. If your query uses reduce, it cannot use map. For more examples, see the following: The following example uses a custom mapper to strip everything out of the results of a document search except the matched document. For more details, see Using a Custom Mapper and DocumentsSearch.map. By default, a document search returns a structure that includes metadata about each match, such as uri and score, as well as the matched document. For example: { "results":[ { "index":0, "uri":"/books/frost1.json", "score":22528, "confidence":0.560400724411011, "fitness":0.698434412479401, "document": ...document node... }, ...
]} The following code uses a custom mapper (expressed as a lambda function) to eliminate everything except the value of the document property. That is, it eliminates everything but the matched document node. import * as jsearch from '/MarkLogic/jsearch.mjs'; jsearch.documents() .where(cts.jsonPropertyValueQuery('format','paperback')) .slice(0,2) .map(match => match.document) .result() The result is output similar to the following: { "results": [ { "title": "Collected Works", "author": "Robert Frost", "edition": { "format": "paperback", "price": 29.99 }, "synopsis": "The complete works of the American Poet Robert Frost." }, { "title": "The Grapes of Wrath", "author": "John Steinbeck", "edition": { "format": "paperback", "price": 9.99 }, "synopsis": "Chronicles the 1930s Dust Bowl migration of one Oklahoma farm family, from their homestead and forced to the promised land of California." }], "estimate": 4 } The custom mapper lambda function (.map(match => match.document)) is equivalent to the following: .map(function(match) { return match.document; }) The following example demonstrates using a custom mapper to transform document content returned by a search. For more details, see Using a Custom Mapper and DocumentsSearch.map. The following example code uses a custom mapper to redact the value of the JSON property author in each document matched by the search. import * as jsearch from '/MarkLogic/jsearch.mjs'; jsearch.documents() .where(cts.jsonPropertyValueQuery('format','paperback')) .slice(0,2) .map(function (match) { match.document = match.document.toObject(); match.document.author = 'REDACTED'; return match; }) .result() Each time the mapper is invoked, the author property value is changed to REDACTED in the document embedded in the search result. Notice the application of toObject to the document: match.document = match.document.toObject(); This is necessary because match.document is initially a read-only document node. Applying toObject to the document node creates an in-memory, mutable copy of the contents. If your database contains the documents created by Preparing to Run the Examples, then running the script produces output similar to the following. The part of each result affected by the mapper is shown in bold. Only two results are returned because of the slice(0,2) clause on the search. { "results": [ { "index": 0, "uri": "/books/frost1.json", "score": 14336, "confidence": 0.43245348334312, "fitness": 0.7490314245224, "document": { "title": "Collected Works", "author": "REDACTED", "edition": { "format": "paperback", "price": 29.99 }, "synopsis": "The complete works of the American Poet Robert Frost." } }, { "index": 1, "uri": "/books/steinbeck1.json", "score": 14336, "confidence": 0.43245348334312, "fitness": 0.7490314245224, "document": { "title": "The Grapes of Wrath", "author": "REDACTED", "edition": { "format": "paperback", "price": 9.99 }, "synopsis": "Chronicles the 1930s Dust Bowl migration of one Oklahoma farm family, from their homestead and forced to the promised land of California." } } ], "estimate": 4 } The following example demonstrates using DocumentsSearch.reduce to apply a custom reducer as part of a document search. The search selects a random sample of 1000 documents by setting the search scoring algorithm to score-random in withOptions, and the slice size to 1000 with slice. Notice that there is no where clause, so the search matches all documents in the database.
The following code snippet is the core search that drives the reduction: jsearch.documents() .slice(0, 1000) .reduce(...) .withOptions({search: 'score-random'}) .result(); The reducer iterates over the node names (JSON property names or XML element names) in each document, adding each name to a map, along with a corresponding counter. function nameExtractor(previous, match, index, state) { let nameCount = 0; for (const name of match.document.xpath('//*/fn:node-name(.)')) { nameCount = previous[name]; previous[name] = (nameCount > 0) ? nameCount + 1 : 1; } return previous; } Each time the reducer is invoked, the match parameter contains the search result for a single document. That is, input of the following form. The precise properties in the input object can vary somewhat, depending on the search options. { index: 0, uri: '/my/document/uri', score: 14336, confidence: 0.3745157122612, fitness: 0.7490314245224, document: { documentContents } } The following code puts all of the above together in a complete script. Notice that an empty object ( { } ) is passed to reduce as a seed value for the initial value of the previous input parameter. import * as jsearch from '/MarkLogic/jsearch.mjs'; jsearch.documents() .slice(0, 1000) .reduce(function nameExtractor(previous, match, index, state) { let nameCount = 0; for (const name of match.document.xpath('//*/fn:node-name(.)')) { nameCount = previous[name]; previous[name] = (nameCount > 0) ? nameCount + 1 : 1; } return previous; }, {}) .withOptions({search: 'score-random'}) .result(); Running this script with the documents created by Preparing to Run the Examples produces output similar to the following. {"results":{ "title":8, "author":8, "edition":8, "format":8, "price":8, "synopsis":8 }, "estimate":8} The property names are the JSON property names found in the sample documents. The property values are the number of occurrences of each name in the sampled documents. The values in this case are all the same because all the sample documents contain exactly the same properties. However, if you run the query on a less homogeneous set of documents you might get results such as the following: {"results":{ "Placemark":52, "name":53, "Style":52, "ExtendedData":52, "SimpleData":208, "Polygon":574, "coordinates":610, "MultiGeometry":24 }, "estimate":58 } If you want to retain the search results along with whatever computation is performed by your reducer, you must accumulate them yourself. For example, the reducer in the following script accumulates the results in an array in the result object: import * as jsearch from '/MarkLogic/jsearch.mjs'; jsearch.documents() .where(cts.jsonPropertyValueQuery('author','Mark Twain')) .reduce(function (prev, match, index, state) { prev.count++; prev.value += match.document.edition.price; prev.matches.push(match); if (state.isLast) { return {avgCost: prev.value / prev.count, matches: prev.matches}; } else { return prev; } }, {count: 0, value: 0, matches: []}) .result() When run against the sample data from Preparing to Run the Examples, the output is similar to the following: {"results":{ "avgCost": 13.25, "matches": [{"index":0, "uri": ...}, ...more matches...] }, estimate: 4 } This example demonstrates using ValuesSearch.reduce to apply a custom reducer that computes an aggregate value from the results of a values query. The example relies on the sample data from Preparing to Run the Examples. The query that produces the inputs to the reduction is a values query over the price JSON property.
The database configuration should include a range index over price with scalar type float. The scalar type of the index determines the datatype of the value passed into the second parameter of the reducer. The following code computes an average of the values of the price JSON property. Each call to the reducer accumulates the count and sum contributing to the final answer. When state.isLast becomes true, the final aggregate value is computed and returned. The reduction is seeded with an initial accumulator value of {count: 0, sum: 0}, through the second parameter passed to reduce. import * as jsearch from '/MarkLogic/jsearch.mjs'; jsearch.values('price') .where(cts.directoryQuery('/books/')) .reduce(function (accum, value, index, state) { const freq = cts.frequency(value); accum.count += freq; accum.sum += value * freq; return state.isLast ? (accum.sum / accum.count) : accum; }, {count: 0, sum: 0}) .result(); If you run the query in Query Console using the data from Preparing to Run the Examples, you should see output similar to the following: 16.125 Notice the use of cts.frequency in the example. The reducer is called once for each unique value in the index. If you're doing a reduction that depends on frequency, use cts.frequency on the input value to get this information. Average and sum are only used here as a convenient simple example. In practice, if you needed to compute the average or sum, you would use built-in aggregate functions. For details, see Computing Aggregates Over Range Indexes. Use jsearch.values to begin building a query over the values in a values lexicon or range index, and then use result to execute the query and return results. You can also use the values method to compute aggregates lexicon and index values; for details, see Computing Aggregates Over Range Indexes. For example, the following code creates a values query over a range index on the title JSON property. The returned values are limited to those found in documents matching a directory query ( where) and those that match the pattern *adventure* ( match). The results are returned in frequency order ( orderBy). Only the first 3 results are returned ( slice). import * as jsearch from '/MarkLogic/jsearch.mjs'; jsearch.values('title') .where(cts.directoryQuery('/books/')) .match('*adventure*') .orderBy('frequency') .slice(0,3) .result() This query produces the following output when run against the sample data from Preparing to Run the Examples. ["Adventures of Huckleberry Finn", "Adventures of Tom Sawyer"] Your database configuration must include an index or range index on each JSON property, XML element, XML element attribute, field, or path used in a values query. For general information on lexicon queries, see Browsing With Lexicons. Build and execute your values query following the pattern described in Query Design Pattern. The following table maps the applicable JSearch methods to the steps in the design pattern. Note that all the pipeline stages in Step 2 are optional, but you must use them in the order shown. For more details, see ValuesSearch in the MarkLogic Server-Side JavaScript Function Reference. Use the jsearch.tuples method to find co-occurrences of values in lexicons and range indexes. Use tuples to begin building your query, and then use result to execute the query and return results. You can also use the tuples method to compute aggregates over tuples; for details, see Computing Aggregates Over Range Indexes. 
For example, the following code creates a tuples query for 2-way co-occurences of the values in the author and format JSON properties. Only tuples in documents matching the directory query are considered ( where). The results are returned in item order ( orderBy). import * as jsearch from '/MarkLogic/jsearch.mjs'; jsearch.tuples(['author','format']) .where(cts.directoryQuery('/books/')) .orderBy('item') .result() This query produces the following output when applied to the data from Preparing to Run the Examples. [["John Steinbeck", "audiobook"], ["John Steinbeck", "hardback"], ["John Steinbeck", "paperback"], ["Mark Twain", "hardback"], ["Mark Twain", "paperback"], ["Robert Frost", "paperback"]] Your database configuration must include an index or range index on each JSON property, XML element, XML element attribute, field, or path used in a tuples query. Build and execute your tuples query following the pattern described in Query Design Pattern. The following table maps the applicable JSearch methods to the steps in the design pattern. Note that all the pipeline stages in Step 2 are optional, but you must use them in the order shown. For more details, see TuplesSearch in the MarkLogic Server-Side JavaScript Function Reference. Use the jsearch.words method to create a word lexicon query, and then use result to execute the query and return results. For example, the following code performs a word lexicon query for all words in the synopsis JSON property that begin with 'c' ( match). Only occurrences in documents where the author property contains steinbeck ( where) are returned. At most the first 5 words are returned ( slice). import * as jsearch from '/MarkLogic/jsearch.mjs'; jsearch.words('synopsis') .where(cts.jsonPropertyWordQuery('author', 'steinbeck')) .match('c*') .slice(0,5) .result(); When run against the data from Preparing to Run the Examples, this query produces the following output: ["Cain", "California", "Chronicles"] Your database configuration must either enable the database-wide word lexicon or include a word lexicon on each JSON property, XML element, XML element attribute, or field used in a words query. For details on lexicon configuration, see Range Indexes and Lexicons in the Administrator's Guide. For general information on lexicon queries, see Browsing With Lexicons. Build and execute your word query following the pattern described in Query Design Pattern. The following table maps the applicable JSearch methods to the steps in the design pattern. Note that all the pipeline stages in Step 2 are optional, but you must use them in the order shown. For more details, see WordsSearch in the MarkLogic Server-Side JavaScript Function Reference. You can compute aggregate values over range indexes and lexicons using built-in or user-defined aggregate functions using ValuesSearch.aggregate or TuplesSearch.aggregate. This section covers the following topics: An aggregate function performs an operation over values or tuples in lexicons and range indexes. For example, you can use an aggregate function to compute the sum of values in a range index. You can apply an aggregate computation to the results of a values or tuples query using ValuesSearch.aggregate or TuplesSearch.aggregate. MarkLogic Server provides built-in aggregate functions for many common analytical functions; for a list of functions, see Using Built-In Aggregate Functions. For a more detailed description of each built-in, see Using Builtin Aggregate Functions in the Search Developer's Guide. 
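As a minimal sketch of that call pattern (assuming the price range index and sample documents from Preparing to Run the Examples, and following the documented array form of aggregate), a single built-in aggregate can be applied like this:

import * as jsearch from '/MarkLogic/jsearch.mjs';

// Apply the built-in 'sum' aggregate to the 'price' range index,
// restricted to the sample documents under /books/.
jsearch.values('price')
  .where(cts.directoryQuery('/books/'))
  .aggregate(['sum'])
  .result();

With the eight sample prices (30, 9, 18, 8, 18, 10, 20, 16), the sum should come out to 129, which is consistent with the documented average of 16.125 over those values.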
You can also implement aggregate user-defined functions (UDFs) in C++ and deploy them as native plugins. Aggregate UDFs must be installed before you can use them. For details, see Implementing an Aggregate User-Defined Function in the Application Developer's Guide. You must install the native plugin that implements your UDF according to the instructions in Using Native Plugins in the Application Developer's Guide. You cannot use the JSearch API to apply aggregate UDFs that require additional parameters. Build and execute your aggregate computation following the pattern described in Query Design Pattern. The following table maps the applicable JSearch methods to the steps in the design pattern. Note that you must use the pipeline stages in Step 2 in the order shown. For more details, see ValuesSearch or TuplesSearch in the MarkLogic Server-Side JavaScript Function Reference. To use a builtin aggregate function, pass the name of the function to the aggregate method of a values or tuples query. The built-in aggregate functions only support tuples queries on 2-way co-occurrences. That is, you cannot use them on tuples queries involving more than 2 lexicons or indexes. The following example uses built-in aggregate functions to compute the minimum, maximum, and average of the values in the price JSON property and produces the results shown. As with all values queries, the database must include a range index over the target property or XML element. import * as jsearch from '/MarkLogic/jsearch.mjs'; jsearch.values('price') .aggregate(['min','max','avg']) .result(); ==> {"min":8, "max":30, "avg":16.125} The following built-in aggregate functions are supported on values queries: The following built-in aggregate functions are supported on tuples queries: An aggregate UDF is identified by the function name and a relative path to the plugin that implements the aggregate, as described in Using Aggregate User-Defined Functions. You must install your UDF plugin on MarkLogic Server before you can use it in a query. For details on creating and installing aggregate UDFs, see Aggregate User-Defined Functions in the Application Developer's Guide. Once you install your plugin, use jsearch.udf to create a reference to your UDF, and pass the reference to the aggregate clause of a values or tuples query. For example, the following script uses a native UDF called count provided by a plugin installed in the modules database under native/sampleplugin: import * as jsearch from '/MarkLogic/jsearch.mjs'; jsearch.values('price') .aggregate(jsearch.udf('native/sampleplugin', 'count')) .result(); For more details, see ValuesSearch.aggregate and TuplesSearch.aggregate. This section provides a brief overview of the functions available for constructing the index and lexicon reference you may need for values queries, tuples queries, and facet generation. Most JSearch interfaces that accept index or lexicon references also accept a simple JSON property name string. In most contexts, this is interpreted as a cts.jsonPropertyReference for a string property. If the referenced property (and associated index) have a type other than string, you can create a properly typed index reference as shown in these examples: cts.jsonPropertyReference('price', ['type=float']) cts.jsonPropertyReference('start', ['type=date']) Similar reference constructors are available for XML element indexes, XML element attribute index, path indexes, field indexes, and geospatial property, element, and path indexes. 
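As a rough sketch (assuming the matching range indexes have been configured on the database, and using placeholder names rather than anything from the sample data), typed references for element, field, and path indexes can be constructed the same way:

// Hypothetical index references; each call assumes a corresponding range index exists.
cts.elementReference(xs.QName('price'), ['type=float']);   // XML element range index
cts.fieldReference('bookPrice', ['type=float']);           // field range index (placeholder field name)
cts.pathReference('/edition/price', ['type=float']);       // path range index (placeholder path)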
The following is a small sample of the available constructors: Use the following reference constructors for the database-wide URI and collection lexicons. (These lexicons must be enabled on the database before you can use them.) JSearch also provides the following word lexicon reference constructors for constructing references to word lexicons specifically for use with jsearch.words. Using these constructors ensures you only create word lexicon queries on lexicon types that support them. For more details, see the MarkLogic Server-Side JavaScript Function Reference and Browsing With Lexicons. This section covers the following topics related to using ValuesSearch.groupInto and FacetDefinition.groupInto to group values by range: You can use the groupInto method to group values into ranges when performing a values query or generating facets. Such grouping is sometimes called bucketed search. The groupInto method of values and facets has the following form: groupInto(bucketDefinition) You can apply groupInto to a values query or a facet definition. For example: // using groupInto with a values query jsearch.values(...).groupInto(bucketDefinition).result() // using groupInto for facet generation jsearch.facets( jsearch.facet(...).groupInto(bucketDefinition), ...more facet definitions... ).result() A bucket definition can be an array of boundary values or an array of alternating bucket names and boundary value pairs. For geospatial buckets, a boundary value can be an object with lat and lon properties ( {lat: latVal, lon: lonVal}). The JSearch API includes helper functions for creating bucket names (jsearch.bucketName), generating a set of buckets from a value range and step (jsearch.makeBuckets), and generating buckets corresponding to a geospatial heatmap ( jsearch.makeHeatmap). Buckets can be unnamed, use names generated from the boundary values, or use custom names. For example: // Unnamed buckets with boundaries X < 10, 10 <= X < 20, and 20 <= X groupInto([10,20]) // The same set of buckets with generated default bucket names groupInto([ jsearch.bucketName(),10, jsearch.bucketName(),20, jsearch.bucketName()]) // The same set of buckets with custom bucket names groupInto([ jsearch.bucketName('under $10'), 10, jsearch.bucketName('$10 to $19.99'), 20, jsearch.bucketName('over $20')]) // Explicitly specify geospatial bucket boundaries groupInto([ jsearch.bucketName(), {lat: lat1, lon: lon1}, jsearch.bucketName(), {lat: lat2, lon: lon2}, jsearch.bucketName(), {lat: lat3, lon: lon3}]) You can create a bucket definition in the following ways: As an array of boundary values; for example, [10,20] defines 3 buckets with boundaries X < 10, 10 <= X < 20, and 20 <= X. With the bucketName helper function to generate the name of each bucket; you can specify custom bucket names or let groupInto generate bucket names from the boundary values. With the makeBuckets helper function to create a set of buckets over a range of values (min and max) and a step or number of divisions; for example, create a series of buckets that each correspond to a decade over a 100 year time span. With the makeHeatmap helper function to generate buckets from a geospatial lexicon based on a heatmap box with latitude and longitude divisions. The bounds of a bucket for a scalar value or date/time range are determined by an explicit upper bound and the position of the bucket in a set of bucket definitions. For example, in the following custom bucket definition, each line represents one bucket as a name and upper bound.
import * as jsearch from '/MarkLogic/jsearch.mjs'; jsearch.values('price') .groupInto([ jsearch.bucketName(), 10, jsearch.bucketName(), 20, jsearch.bucketName()]) .result() The first bucket has no lower bound because it occurs first. The lower bound of the second bucket is the upper bound of the previous bucket (10), inclusive. The upper bound of the second bucket is 20, exclusive. The last bucket has no upper bound. When plugged into a values or facets query, the results are grouped into the following ranges: x < 10 10 <= x < 20 20 <= x For geospatial data, you can use makeHeatmap to sub-divide a region into boxes. For example, the following constraint includes a heat map that corresponds very roughly to the continental United States, and divides the region into a set of 20 boxes (5 latitude divisions and 4 longitude divisions). import * as jsearch from '/MarkLogic/jsearch.mjs'; jsearch.values('incidents') .groupInto(jsearch.makeHeatmap({ north: 49.0, east: -67.0, south: 24.0, west: -125.0, lonDivs: 4, latDivs: 5 })) .result() When combined with a reducer that returns frequency, you can use the resulting set of boxes and frequencies to illustrate the concentration of points in each box, similar to a grid-based heat map. You can create more customized geospatial buckets by specifying a series of latitude bounds and longitude bounds that define a grid in an object of the form {lat:[...], lon:[...]}. The points defined by the latitude bounds and longitude bounds are divided into box-shaped buckets. The lat and lon values must be in ascending order. For example: import * as jsearch from '/MarkLogic/jsearch.mjs'; jsearch.values('incidents') .groupInto({lat: [15, 30, 45, 60, 75], lon: [0, 30, 60, 90, 120]}) .result() For more details, see jsearch.makeHeatmap, cts:geospatial-boxes, and Creating Geospatial Facets. The examples in this section demonstrate the following features: The example uses makeBuckets to group date information by month, leveraging MarkLogic's built-in support for date, time and duration data. The example assumes the following conditions exist in the database: event documents with a start property that represents the start date of the event, such as { title: 'San Francisco Ocean Film Festival', venue: 'Fort Mason, San Francisco', start: '2015-02-27', end: '2015-03-01' }, and a date range index on the start property. The following query groups the values in the lexicon for the year 2015 by month, using jsearch.makeBuckets and ValuesSearch.groupInto. The results include frequency data in each bucket. import * as jsearch from '/MarkLogic/jsearch.mjs'; jsearch.values('start') .groupInto(jsearch.makeBuckets({ min: xs.date('2015-01-01'), max: xs.date('2015-12-31'), step: xs.yearMonthDuration('P1M')})) .map({frequency: 'item', names: ['bucket', 'count']}) .result() Notice the use of a 1 month duration ('P1M') for the step between buckets. You can use many MarkLogic date, dateTime, and duration operations from Server-side JavaScript. For details, see JavaScript Duration and Date Arithmetic and Comparison Methods in the JavaScript Reference Guide. The query generates results similar to the following: [ { "bucket": { "minimum": "2015-02-27", "maximum": "2015-02-27", "lowerBound": "2015-02-01", "upperBound": "2015-03-01" }, "count": 1 }, { "bucket": { "minimum": "2015-03-07", "maximum": "2015-03-14", "lowerBound": "2015-03-01", "upperBound": "2015-04-01" }, "count": 2 }, ... ] You can use a custom mapper to name each bucket after the month it covers. Note that plugging in a custom mapper also eliminates the frequency data, so you must add it back in explicitly.
The following example mapper adds a month name and count property to each bucket: // For mapping month number to user-friendly bucket name const months = [ 'January', 'February', 'March', 'April', 'May', 'June', 'July', 'August', 'September', 'October', 'November', 'December' ]; // Add a name and count field to each bucket. Use month for name. function supplementBucket(bucket) { // get a mutable copy of the input const result = bucket.toObject(); // Compute index into month names. January == month 1 == index 0. const monthNum = fn.monthFromDate(xs.date(bucket.lowerBound)) - 1; result.name = months[monthNum]; result.count = cts.frequency(bucket); return result; }; // Generate buckets and counts jsearch.values('start') .groupInto(jsearch.makeBuckets({ min: xs.date('2015-01-01'), max: xs.date('2015-12-31'), step: xs.yearMonthDuration('P1M')})) .map(supplementBucket) .result() The output generated is similar to the following: [ { "minimum": "2015-02-27", "maximum": "2015-02-27", "lowerBound": "2015-02-01", "upperBound": "2015-03-01", "name": "February", "count": 1 }, { "minimum": "2015-03-07", "maximum": "2015-03-14", "lowerBound": "2015-03-01", "upperBound": "2015-04-01", "name": "March", "count": 2 }, ... ] Similarly, you can use the FacetDefinition.groupInto and FacetDefinition.map when generating facets for a document search with jsearch.facets. For example, the following query generates facets based on the same set of buckets: import * as jsearch from '/MarkLogic/jsearch.mjs'; const events = jsearch.collections('events'); events.facets( events.facet('events', cts.jsonPropertyReference('start', ['type=date'])) .groupInto(jsearch.makeBuckets({ min: xs.date('2015-01-01'), max: xs.date('2015-12-31'), step: xs.yearMonthDuration('P1M')})) .map(supplementBucket), events.documents() ).result() The output from this query is similar to the following: {"facets": { "events": [ { "minimum": "2015-02-27", "maximum": "2015-02-27", "lowerBound": "2015-02-01", "upperBound": "2015-03-01", "name": "February", "count": 1 }, { "minimum": "2015-03-07", "maximum": "2015-03-14", "lowerBound": "2015-03-01", "upperBound": "2015-04-01", "name": "March", "count": 2 }, ... ]}, "documents": [ ...] } For more details on faceting, see Including Facets in Search Results. This example demonstrates how to use custom buckets for grouping. The example applies the grouping to facet generation, but you can use the same technique with a values query. The following code defines custom buckets that group the values of the 'price' JSON property into 3 price range buckets. import * as jsearch from '/MarkLogic/jsearch.mjs'; jsearch.facets( jsearch.facet('Price','price') .groupInto([ jsearch.bucketName('under $10'), 10, jsearch.bucketName('$10 to $19.99'), 20, jsearch.bucketName('over $20') ])) .where(cts.directoryQuery('/books/')) .result(); If the lexicon contains the values [8, 9, 10, 16, 18, 20, 30], then the query results in the following output. (Comments were added for clarity and are not part of the actual output.) {"facets": { "price": { "under $10": { // bucket label (for display purposes) "value": { "minimum": 8, // min value found in bucket range "maximum": 9, // max value found in bucket range "upperBound": 10 // bucket upper bound }, "frequency": 2 }, "$10 to $19.99": { "value": { "minimum": 10, "maximum": 18, "lowerBound": 10, "upperBound": 20 }, "frequency": 4 }, "over $20": { "value": { "minimum": 20, "maximum": 30, "lowerBound": 20 }, "frequency": 2 } } } } The results tell you, for example, that the price lexicon contains values under 10, with the maximum value in that range being 9 and the minimum being 8.
Similarly, the lexicon contains values greater than or equal to 10, but less than 20. The minimum value found in that range is 10 and the maximum value is 18. If you use the same grouping specification with ValuesSearch.groupInto, you get the same information, but it is arranged slightly differently. For example, the following output was produced using the values operation with the same groupInto clause. [ { "minimum": 8, "maximum": 9, "upperBound": 10, "name": "under $10" }, { "minimum": 10, // min value found in bucket range "maximum": 18, // max value found in bucket range "lowerBound": 10, // bucket lower bound "upperBound": 20, // bucket upper bound "name": "$10 to $19.99" // bucket label (for display purposes) }, { "minimum": 20, "maximum": 30, "lowerBound": 20, "name": "over $20" } ] If you specify an empty bucket name, a default name is generated from the bucket bounds. For example, the following code applies a similar set of buckets to a values query, using generated bucket names: import * as jsearch from '/MarkLogic/jsearch.mjs'; jsearch.values('price') .where(cts.directoryQuery('/books/')) .groupInto([ jsearch.bucketName(), 10, jsearch.bucketName(), 20, jsearch.bucketName() ]) .result(); This code produces the following output. The bucket min, max, and bounds are the same as before, but the bucket names are the default generated ones: [ { "minimum": 8, "maximum": 9, "upperBound": 10, "name": "x < 10" }, { "minimum": 10, "maximum": 18, "lowerBound": 10, "upperBound": 20, "name": "10 <= x < 20" }, { "minimum": 20, "maximum": 30, "lowerBound": 20, "name": "20 <= x" } ] Use the instructions and scripts in this section to set up your MarkLogic environment to run the examples in this chapter. This includes loading the sample documents and configuring your database to have the required indexes and lexicons. This section guides you through creation of a database configured to run the examples in this chapter. Many examples do not require the indexes, and only the word lexicon query examples require a word lexicon. However, this setup will ensure you have the configuration needed for all the examples. Running the setup scripts below will do the following: create the jsearch-ex database and forest, create range indexes on the title, author, format, and price JSON properties found in the sample documents, and create a word lexicon on the title property. The configuration details are summarized in a table at the end of the section. The instructions below use Query Console and XQuery to create and configure the database. You do not need to know XQuery to use these instructions. However, if you prefer to do the setup manually using the Admin Interface, see the table at the end of this section for configuration details. Follow this procedure to create and configure the example database.
xquery version "1.0-ml"; (: Create the database and forest :) import module namespace admin = "" at "/MarkLogic/admin.xqy"; let $config := admin:get-configuration() let $config := admin:database-create( $config, "jsearch-ex", xdmp:database("Security"), xdmp:database("Schemas")) let $config := admin:forest-create( $config, "jsearch-ex-1", xdmp:host(), (), (), ()) return admin:save-configuration($config); (: Attach the forest to the database :) import module namespace admin = "" at "/MarkLogic/admin.xqy"; let $config := admin:get-configuration() let $config := admin:database-attach-forest( $config, xdmp:database("jsearch-ex"), xdmp:forest("jsearch-ex-1")) return admin:save-configuration($config); xquery version "1.0-ml"; import module namespace admin = "" at "/MarkLogic/admin.xqy"; let $title-index := admin:database-range-element-index( "string", "", "title", "", fn:false()) let $author-index := admin:database-range-element-index( "string", "", "author", "", fn:false()) let $format-index := admin:database-range-element-index( "string", "", "format", "", fn:false()) let $price-index := admin:database-range-element-index( "float", "", "price", "", fn:false()) let $config := admin:get-configuration() let $config := admin:database-add-range-element-index( $config, xdmp:database("jsearch-ex"), ($title-index, $author-index, $format-index, $price-index)) return admin:save-configuration($config); import module namespace admin = "" at "/MarkLogic/admin.xqy"; let $title-lexicon := admin:database-element-word-lexicon( "", "title", "") let $config := admin:get-configuration() let $config := admin:database-add-element-word-lexicon( $config, xdmp:database("jsearch-ex"), ($title-lexicon)) return admin:save-configuration($config); You should now proceed to Loading the Sample Documents. If you choose to create the example environment manually with the Admin Interface, use the configuration summary below. After you create and configure the sample database, follow the instructions in this section to load the sample documents. jsearch-exdatabase. For example, navigate to the following URL is MarkLogic is installed on localhost: JavaScriptin the Query Type dropdown. jsearch-exin the Content Source dropdown. You will not see it if you have just finished creating and configuring the database and are still using the same Query Console session. If this happen, reload Query Console in your browser to refresh the Content Source list. const directory = '/books/'; const books = [ {uri: 'frost1.json', data: { title: 'Collected Works', author: 'Robert Frost', edition: {format: 'paperback', price: 30 }, synopsis: 'The complete works of the American Poet Robert Frost.' }}, {uri: 'twain1.json', data: { title: 'Adventures of Tom Sawyer', author: 'Mark Twain', edition: {format: 'paperback', price: 9 }, synopsis: 'Tales of mischief and adventure along the Mississippi River with Tom Sawyer, Huck Finn, and Becky Thatcher.' }}, {uri: 'twain2.json', data: { title: 'Adventures of Tom Sawyer', author: 'Mark Twain', edition: {format: 'hardback', price: 18 }, synopsis: 'Tales of mischief and adventure along the Mississippi River with Tom Sawyer, Huck Finn, and Becky Thatcher.' }}, {uri: 'twain3.json', data: { title: 'Adventures of Huckleberry Finn', author: 'Mark Twain', edition: {format: 'paperback', price: 8 }, synopsis: 'The adventures of Huck, a boy of 13, and Jim, an escaped slave, rafting down the Mississippi River in pre-Civil War America.' 
}}, {uri: 'twain4.json', data: { title: 'Adventures of Huckleberry Finn', author: 'Mark Twain', edition: {format: 'hardback', price: 18 }, synopsis: 'The adventures of Huck, a boy of 13, and Jim, an escaped slave, rafting down the Mississippi River in pre-Civil War America.' }}, {uri: 'steinbeck1.json', data: { title: 'The Grapes of Wrath', author: 'John Steinbeck', edition: {format: 'paperback', price: 10 }, synopsis: 'Chronicles the 1930s Dust Bowl migration of one Oklahoma farm family, from their homestead and forced to the promised land of California.' }}, {uri: 'steinbeck2.json', data: { title: 'Of Mice and Men', author: 'John Steinbeck', edition: {format: 'hardback', price: 20 }, synopsis: 'A tale of an unlikely pair of drifters who move from job to job as farm laborers in California, until it all goes horribly awry.' }}, {uri: 'steinbeck3.json', data: { title: 'East of Eden', author: 'John Steinbeck', edition: {format: 'audiobook', price: 16 }, synopsis: 'Follows the intertwined destinies of two California families whose generations reenact the fall of Adam and Eve and the rivalry of Cain and Abel.' }} ]; books.forEach( function(book) { xdmp.eval( 'declareUpdate(); xdmp.documentInsert(uri, data, xdmp.defaultPermissions(), ["classics"]);', {uri: directory + book.uri, data: book.data} ); }); The jsearch-ex database is now fully configured to support all the samples in this chapter in Query Console. When running the examples, set the Content Source to jsearch-ex and the Query Type to JavaScript.
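As a quick sanity check (a sketch, not part of the documented setup), you can confirm the load by counting the documents in the /books/ directory; with all eight sample books inserted, the estimate should be 8:

import * as jsearch from '/MarkLogic/jsearch.mjs';

// Count the sample documents loaded under /books/.
jsearch.documents()
  .where(cts.directoryQuery('/books/'))
  .result()
  .estimate;   // expect 8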
http://docs.marklogic.com/guide/search-dev/javascript
2020-08-03T11:51:23
CC-MAIN-2020-34
1596439735810.18
[]
docs.marklogic.com
sets the path to the server script file for flash mode // on init state var myForm = new dhtmlXForm("parentId", [ { type: "upload", name: "my_uploader", ..., swfUrl: "../../samples/07_uploader/php/dhtmlxform_item_upload.php" } ]); // using method var myUploader = myForm.getUploader("my_uploader"); myUploader.setSWFURL("../../samples/07_uploader/php/dhtmlxform_item_upload.php"); Read the details on the swfUrl param: the path should be relative to the client swf file. However, in some recent versions of Flash in Internet Explorer this path is resolved relative to the index page, so we recommend keeping the index html page and the swf file in the same folder; in this case swfUrl will be relative both to the swf file and to the index page, i.e.: // tree struct /* /script/my_page.html /script/uploader.swf /script/server/dhtmlxform_item_upload.php /script/server/uploaded_files/ */ var myForm = new dhtmlXForm("parentId", [ { type: "upload", name: "my_uploader", url: "php/dhtmlxform_item_upload.php", swfPath: "../../codebase/ext/uploader.swf", swfUrl: "../../samples/07_uploader/php/dhtmlxform_item_upload.php" } ]);
https://docs.dhtmlx.com/api__dhtmlxfileuploader_setswfurl.html
2020-08-03T11:39:50
CC-MAIN-2020-34
1596439735810.18
[]
docs.dhtmlx.com
This update was released on May 16th, 2019. Launcher General - Added the -uppercasetext and -lowercasetext command line arguments. These make all the localisable text in the Mod Launcher and messages it shows upper or lower case respectively. - Made it so file/folder browse dialogues fall back to Windows XP style versions in the event the Mod Launcher or .NET fail to create Windows Vista style dialogues. - This resolves an issue where this happened on Windows 10 1809 when using a High Contrast theme. - This was also able to happen on Windows 7 with "Disable visual themes" ticked in the executable's compatibility settings. Mods List - Added a "Non-enabled Favourites" category. - Added a "Descriptionless Hacks" category when using the -testing command line argument. - Added the existing shortcut keys to the right click menu items for "Search...", "Pages" and "Reload". Launcher Settings Fixed an issue where "Copy Require Line" was disabled in the right click menu of "Hack Support" on the "Non-mod Hacks" page (which is only visible with "Show Advanced Hacks" ticked). Account Integration Added the -nodtcomms command line argument. This disables Donut Team Account Integration. Localisation There are some fixes and improvements to localisation in this version as well as new strings. - Fixed an issue where the "Jump List" tickbox on the "Manage Configurations..." window did not correctly handle being repositioned/resized with language localisation. - Made non-ingame messages from the Custom Files, Custom Save Data, Debug Test and Screenshots hacks support language localisation. A new template language (Template_1.23.xml) was published on this page including all the new strings. Hacks General Added descriptions to the following hacks: - Anti-aliasing - Cancellable Initial Walk - Dynamic Tree Node Entity Limits - Force Mission Select Level Reload - Lens Flare - Letterbox - Mirror Mode - No Cursor Until Mouse Move - No Go To Objective Camera Focus - No Mute on Defocus - No Pause on Defocus - No Suppressed Drivers - One Tap Player Car Death - Skip Main Menu - Skippable Start Cameras - Starting Coins Updated the descriptions of the following hacks: - Discord Rich Presence - Frame Limiter - HUD Map Ignore Player Height - NVIDIA Highlights - No Cheats - No Fast Car Reset - No Introduction Movies - No Jump Limit - No Mission Start Cameras - Replayable Bonus Missions Made it so the following hacks can no longer be required by mods: - Ambient Car Support - Aspect Ratio Support - Custom Audio Format Support - Custom Main Menu Items - Custom Save Data - Interprocess Communication - Lua Support - Road Names Attempting to require these hacks in a mod that doesn't have a RequiredLauncher specified or has RequiredLauncher set to a version lower than 1.23 will be ignored for backwards compatibility reasons. Hack Support - Added the -hookd3d command line argument. - Added the -nohandlefilenotfound command line argument. - Added the -nohardwareskinning and -hardwareskinning command line arguments. - Added the -noreloadcarcameradatacommand command line argument to opt out of this. - Made the game terminate abruptly with no message or crash dump in the case that an exception occurs in the hack window procedure event (instead of letting Windows swallow the exception and likely cause corruption). Custom Controller Support Added this new advanced hack that other hacks can use to extend the game's controller support. Direct3D 9 Added this new hack that makes the game use Direct3D 9 instead of Direct3D 8.
This may be helpful when trying to use 3rd party tools such as ReShade. Lens Flare - Made this hack work differently (using an occlusion query instead of a lockable render target) when using Direct3D 9. - Also added a -noocclusion command line argument to opt out of this. - Also added a -occlusionsleep command line argument. No Cursor Until Mouse Move Made this hack only show the cursor if it moves more than 5 pixels on either axis. This tolerance is customisable in the hack's settings (0 meaning any movement and -1 meaning the old behaviour where it will show up any time the window receives a mouse move event even if the cursor's position doesn't change). Resizable Window -. Screenshots Improved error handling. XInput Added this new hack that.
https://docs.donutteam.com/docs/lucasmodlauncher/versions/version_1.23
2020-08-03T12:26:43
CC-MAIN-2020-34
1596439735810.18
[]
docs.donutteam.com
Method: payments.sendPaymentForm Sends a compiled payment form. Parameters: Return type: payments.PaymentResult.
https://docs.madelineproto.xyz/API_docs/methods/payments.sendPaymentForm.html
2020-08-03T12:23:32
CC-MAIN-2020-34
1596439735810.18
[]
docs.madelineproto.xyz
Very often you will need to upload user images such as a profile image. Masonite let's you handle this very elegantly and allows you to upload to both the disk, and Amazon S3 out of the box. The UploadProvider Service Provider is what adds this functionality. Out of the box Masonite supports the disk driver which uploads directly to your file system and the s3 driver which uploads directly to your Amazon S3 bucket. You may build more drivers if you wish to expand Masonite's capabilities. If you do create your driver, consider making it available on PyPi so others may install it into their project. Read the "Creating an Email Driver" for more information on how to create drivers. Also look at the drivers directory inside the MasoniteFramework/core repository. All uploading configuration settings are inside config/storage.py. The settings that pertain to file uploading are just the DRIVER and the DRIVERS settings. This setting looks like: DRIVER = os.getenv('STORAGE_DRIVER', 'disk') This defaults to the disk driver. The disk driver will upload directly onto the file system. This driver simply needs one setting which is the location setting which we can put in the DRIVERS dictionary: DRIVERS = {'disk': {'location': 'storage/uploads'}} This will upload all images to the storage/uploads directory. If you change this directory, make sure the directory exists as Masonite will not create one for you before uploading. Know that the dictionary inside the DRIVERS dictionary should pertain to the DRIVER you set. For example, to set the DRIVER to s3 it will look like this: DRIVER = 's3'DRIVERS = {'disk': {'location': 'storage/uploads'},'s3': {'client': os.getenv('S3_CLIENT', 'AxJz...'),'secret': os.getenv('S3_SECRET', 'HkZj...'),'bucket': os.getenv('S3_BUCKET', 's3bucket'),}} Some deployment platforms are Ephemeral. This means that either hourly or daily, they will completely clean their file systems which will lead to the deleting of anything you put on the file system after you deployed it. In other words, any user uploads will be wiped. To get around this, you'll need to upload your images to Amazon S3 or other asset hosting services which is why Masonite comes with Amazon S3 capability out of the box. You can also explicitly declare the driver as a class: from masonite.drivers import UploadS3DriverDRIVER = UploadS3Driver Uploading with masonite is extremely simple. We can use the Upload class which is loaded into the container via the UploadProvider Service Provider. Whenever a file is uploaded, we can retrieve it using the normal request.input() method. This will look something like: <html><body><form action="/upload" method="POST" enctype="multipart/form-data"><input type="file" name="file_upload"></form></body></html> And inside our controller we can do: from masonite import Uploaddef upload(self, upload: Upload):upload.driver('disk').store(request.input('file_upload')) That's it! We specified the driver we want to use and just uploaded an image to our file system. This action will return the file name. We could use that to input into our database if we want. All file uploads will convert the filename into a random 25 character string. 
upload.driver('disk').store(request.input('file_upload'))#== '838nd92920sjsn928snaj92gj.png' Lastly, we may can specify a filename directly using the filename keyword argument: upload.driver('disk').store(request.input('file_upload'), filename="username.profile")#== username.profile.png By default, Masonite only allows uploads to accept images for security reasons but you can specify any file type you want to accept by specifying the filetype in the accept method before calling the store method. upload.accept('yml', 'zip').store('some.yml') You can also just accept all file types as well: upload.accept('*').store('some.yml') You can upload files directly by passing in a open() file: from masonite import Uploaddef upload(self, upload: Upload):upload.driver('disk').store(open('some/file.txt')) This will upload a file directly from the file system to wherever it needs to upload to. You can also specify the location you want to upload to. This will default to location specified in the config file but we can change it on the fly: upload.driver('disk').store(request.input('file_upload'), location='storage/profiles') You can use dot notation to search your driver locations. Take this configuration for example: DRIVERS = {'disk': {'location': {'uploads': 'storage/uploads','profiles': 'storage/users/profiles',}} and you can use dot notation: upload.driver('disk').store(request.input('file_upload'), location='disk.profiles') Before you get started with uploading to Amazon S3, you will need the boto3 library: terminal$ pip install boto3 Uploading to S3 is exactly the same. Simply add your username, secret key and bucket to the S3 setting: DRIVER = 's3'DRIVERS = {'disk': {'location': 'storage/uploads'},'s3': {'client': os.getenv('S3_CLIENT', 'AxJz...'),'secret': os.getenv('S3_SECRET', 'HkZj...'),'bucket': os.getenv('S3_BUCKET', 's3bucket'),}} Make sure that your user has the permission for uploading to your S3 bucket. Then in our controller: from masonite import Uploaddef upload(self, upload: Upload):upload.store(request.input('file_upload')) How the S3 driver currently works is it uploads to your file system using the disk driver, and then uploads that file to your Amazon S3 bucket. So do not get rid of the disk setting in the DRIVERS dictionary. You can also swap drivers on the fly: from masonite import Uploaddef upload(self, upload: Upload):upload.driver('s3').store(request.input('file_upload')) or you can explicitly specify the class: from masonite.drivers import UploadS3Driverfrom masonite import Uploaddef upload(self, upload: Upload):upload.driver(UploadS3Driver).store(request.input('file_upload'))
https://docs.masoniteproject.com/useful-features/uploading
2020-08-03T12:07:52
CC-MAIN-2020-34
1596439735810.18
[]
docs.masoniteproject.com
Shareware Industry Conference - Final Update So... (as almost every sentence at Microsoft starts) I'm done with the conference and back in Seattle! It was an amazing three days and I want to thank not only the organizers of the conference (Sharon, Gary, Paris, Doc, Becky, Harold, and Dan), but all the attendees as well who welcomed us with open arms and open minds. I was overwhelmed with the positive feedback we got and want to thank everyone!! The presentation part of the conference ended on Saturday afternoon with a session given by Dan Fernandez the C# Product Manager, on Visual Studio 2005 which ran double its planned length! The whole conference ended on Saturday night with the Shareware Industry Awards Foundation Awards Dinner where Microsoft won two awards ("People's Choice for Best Business Application or Utility": Office 2003, and "Best Sound Program or Utility": Windows Media Player). And finally, I got quoted in the business section of this morning's Denver Post in Kimberly Johnson's article entitled Shareware "geeks" learn marketing! I'm looking forward to next years event and moving on to start planning for the European shareware conference to be held in Brussels, Belgium the November 5th and 6th.
https://docs.microsoft.com/en-us/archive/blogs/mglehman/shareware-industry-conference-final-update
2020-08-03T11:43:26
CC-MAIN-2020-34
1596439735810.18
[]
docs.microsoft.com
Configuring syslog protocol by using protocol extensions The following steps add a user SYSLOG protocol to the Citrix ADC appliance. Import the extension file to the Citrix ADC appliance, from either a web server (using HTTP) or your local workstation. For details about importing the extension file, see Importing Extensions. import ns extension local:syslog_parser.lua syslog_parser_code Add a new user TCP-based protocol to the system by using the extension. add user protocol USER_SYSLOG -transport TCP -extension syslog_parser_syslog USER_SYSLOG 10.217.24.28 80 -defaultlb mysv
https://docs.citrix.com/en-us/citrix-adc/12-1/citrix-adc-extensions/citrix-adc-protocol-extensions/loadbalancing-syslog-messages-using-protocol-extensions/configuring-syslog-protocol-by-using-protocol-extensions.html
2020-08-03T12:52:55
CC-MAIN-2020-34
1596439735810.18
[]
docs.citrix.com
BaseDocument.ControlName Property Gets or sets the name of a control that will be passed to the current Document as content. Namespace: DevExpress.XtraBars.Docking2010.Views Assembly: DevExpress.XtraBars.v20.1.dll Declaration [DefaultValue(null)] public string ControlName { get; set; } <DefaultValue(Nothing)> Public Property ControlName As String Property Value Remarks Your may want to set your own User Controls as the Document content. The image below illustrates an application example with 3 UserControls included in the solution: Now we can run the Document Manager Designer, switch to its 'Documents' tab and click the 'Populate' button. A Document for each of the UserControls is generated automatically: Notice that each Document has its ControlName and BaseDocument()); } } This won't create any new Documents. Instead, required UserControls will be passed to corresponding Documents as their content. You can also manually create a Document that refers to a UserControl by name via the BaseView.AddDocument overload method that takes 2 string parameters.
https://docs.devexpress.com/WindowsForms/DevExpress.XtraBars.Docking2010.Views.BaseDocument.ControlName
2020-08-03T12:16:02
CC-MAIN-2020-34
1596439735810.18
[array(['/WindowsForms/images/documentmanager-controlname-hierarchy17300.png', 'DocumentManager - ControlName Hierarchy'], dtype=object) array(['/WindowsForms/images/documentmanager-controlname-populate-button17301.png', 'DocumentManager - ControlName Populate Button'], dtype=object) ]
docs.devexpress.com
What is Storage Settings? Every time you explore or query a SQL model, the whole query is run again. These operations can have low performance, put unnecessary strain on your resources, or become costly (in case of pay-per-use data warehouses like BigQuery) if the query is unoptimized. To avoid this situation, Storage Settings helps you periodically run the model's query and write the result to a physical table in your database. With Storage Settings enabled, when users explore or query from a SQL model, they will interact with the data table instead. This will give you more power to optimize query performance and cost. Storage Settings can be coupled with dependencies to create a flow-based storage schedule to ensure data freshness and consistency. Setting up Storage At any SQL model or third-party model, you can toggle on Storage Settings: The following window will pop up: - Schema Name, Table Name: Choose the destination schema and name for the physical table. Note that you must have write access to the schema you chose. - Schedule: The schedule on which to run the storage process. - Mode: - Full: Whenever the storage process is run, the whole result set of the SQL will be used to replace the previously created table. This should be used if your data is small, the records change regularly and you do not need to retain the history of your data. - Append: This should be used if you want to retain the history of your data. When the storage is run in Append Mode, all records from the source table at run time will be appended to the destination table, and old records are left untouched. - Incremental: This should be used if your data is large, but past records do not change. New records from the model's query will be appended to the destination table. For this to work, you need to specify an increment column so Holistics can decide on the correct data to extract. - Upsert: In case your data is large and past records change, this should be used. You will need to specify a Primary Key and an increment column. How it works: - If the new records from the model's SQL carry new Primary Key values, those records will be appended to the table. - If the new records' Primary Key values already exist, and the increment column's values are new, then the old records will be replaced. - Advanced Settings (Column Format, Indexing, Flow-based Schedule...): coming soon
https://docs.holistics.io/docs/storage-settings
2020-08-03T12:07:25
CC-MAIN-2020-34
1596439735810.18
[array(['https://files.readme.io/670b09c-Selection_355.png', 'Selection_355.png'], dtype=object) array(['https://files.readme.io/670b09c-Selection_355.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/6dd2877-Selection_353.png', 'Selection_353.png'], dtype=object) array(['https://files.readme.io/6dd2877-Selection_353.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/e8ef8b6-Selection_354.png', 'Selection_354.png'], dtype=object) array(['https://files.readme.io/e8ef8b6-Selection_354.png', 'Click to close...'], dtype=object) ]
docs.holistics.io
Billing and subscriptions
In this article you’ll learn the billing details that apply to every plan. You can use this information to create your own custom subscription.

Subscription plans
When you register for a Sandbox account, you get a free Sandbox plan. Later on, when you’ve tested the signNow API, you can select a Professional plan with 500/1000/2000 invites or contact sales for Custom Pricing and subscribe to any number of invites you’d like. To check how many invites have been sent and get your monthly invoice, go to API Dashboard > Plan Usage.

Sandbox plan:
- Free of charge
- No sales pitches
- Unlimited test keys
- No credit card signup
The Sandbox plan allows you to test out every request and every endpoint in the Sandbox environment to make an informed decision. All the endpoints are available, and users can spend as much time as they need, with an unlimited number of apps and keys for testing, SDKs, a Postman collection, and an OpenAPI specification. The signatures you collect from your Sandbox account are not legally binding.

Professional plan:
- $84-250 per month (annual commitment)
- 500, 1000, or 2000 signature invites per year
- Unlimited number of apps
- Legally binding e-signatures
With a Sandbox account you get a ballpark figure of how many signature invites would cover your tasks. The Professional plan gives you the opportunity to collect legally binding signatures on every document sent. You can choose between 500, 1000, or 2000 signature invites per year under this subscription.

Custom Pricing
If 2000 invites per year is not enough (or maybe 500 invites is too much), contact sales managers to get a Custom plan with just the right number of invites for your workflow.

Why does the number of invites matter?
signNow offers API subscription plans based on the number of signature invites that you send. Billing varies by the type of signature invite.

Freeform invites
Freeform invite - an invitation to sign a document which doesn’t contain any fillable fields. When a user sends one document in a freeform invite via API, charging is performed for each sent invite. If you send a freeform invite to several signers, it counts as several freeform invites and charging is performed for each sent freeform invite.

Field invite (Role-based invites)
Field invite or Role-based invite - an invitation to sign a document which contains at least one fillable field assigned to one role. Just like with freeform invites, signNow charges for each sent field invite. However, when the user sends one document to several signers in a field invite, it still counts as one field invite. Using roles allows you to include several Signers in one Invite object.

Signing links
Signing link - a single-use link to a document that requires a signature. When the document is signed (or the signer declines to sign it), the link is no longer valid. When the API user generates a link to the document and shares it with a signer, charging starts only when the signer opens the document. If the signer clicks on the link and the document doesn’t open, there’s no fee. When an API user generates a signing link, there are two optional parameters to consider:
- Allow signer to send invite - signers who received the link have the right to send this link to other recipients on behalf of the document owner (the API user who created the link). If this option is enabled, charging is performed for each invite sent, and how many times the document has been opened becomes irrelevant.
- Inviter will guest sign signing step 1 - in case there are several roles in the document and the inviter should also sign it. If this option is enabled along with Allow signer to send invite, charging is performed for each sent invite.

Bulk Invite
Bulk invite - the invite that you can send to a list of signers. In this scenario, charging counts each sent invite. If you send one document to multiple recipients and assign a separate field to every signer, it’s still one invite.

Document group invite
Document group invite - used when you want to send multiple documents to one or multiple signers. Charging is performed as usual - per invite sent. If you add multiple signers to your document group invite, it still counts as one invite.

There is no fee for Replace/Cancel/Resend invite.
https://docs.signnow.com/billing
2020-08-03T12:12:32
CC-MAIN-2020-34
1596439735810.18
[]
docs.signnow.com
Welcome to WordLift Documentation¶ The main documentation for getting started with WordLift is organized in the following sections: - User Documentation - WordLift Theme Development - WordLift Cloud - Semantics Analytics - Mappings - Export & Import of the Vocabulary - Troubleshooting - Releases - About User Documentation¶ - Overview - Getting Started - Analysis - Edit Entity - Publish - Discover - Shortcodes - Frequently Asked Questions - Who is WordLift for? - Why shall I use WordLift? - How does it work? - What are the languages supported by WordLift? - Is there a free trial? - Who owns the structured metadata created with WordLift? - What happens if I stop using WordLift? - Is WordLift Secure? - Why and how should I customize the url of the entity pages created in my vocabulary? - Why is it important to organize my content and publish it as Linked Data? - Why is WordLift innovative? - What is content enrichment? - What entity types are supported and how they map to Schema.org? - When should I create a new entity? - What are the guidelines for creating new entities to annotate a blog post or a page? - How can I search for the equivalent entity in the web of data? - Can I prevent the analysis to run? - Can I prevent WordLift from loading Wikimedia images? - What factors determine Wordlift’s rating of an entity? - I have a vocabulary term appearing several times in a page, should I link all of the occurrences to the term, or just once per page? - When should I link one entity to another? - Why do I get 404 error on pages linked by WordLift? - What are the datasets WordLift uses for named entity recognition? - What is a triple? - Are there any integrations with Neo4j? - Key Concepts
https://docs.wordlift.io/en/latest/
2020-08-03T11:19:58
CC-MAIN-2020-34
1596439735810.18
[]
docs.wordlift.io
The following sections describe how to replace the default H2 database with MS SQL:
Setting up the database and users
Follow the steps below to set up the Microsoft SQL database and users.
Enable TCP/IP
- In the start menu, click Programs and launch Microsoft SQL Server 2012.
- Click Configuration Tools, and then click SQL Server Configuration Manager.
- Enable TCP/IP and disable Named Pipes in the protocols of your Microsoft SQL server.
- Double-click TCP/IP to open the TCP/IP properties window and set Listen All to Yes on the Protocol tab. On the IP Address tab, disable TCP Dynamic Ports by leaving it blank and give a valid TCP port, so that Microsoft SQL server will listen on that port. The best practice is to use port 1433, because you can use it in order processing services.
- Similarly, enable TCP/IP from SQL Native Client Configuration and disable Named Pipes. Also, check whether the port is set correctly to 1433.
- Restart Microsoft SQL server.
Create the database and user
- Open Microsoft SQL Management Studio to create a database and user.
- Click New Database from the Database menu and specify all the options to create a new database.
- Click New Login from the Logins menu, and specify all the necessary options.
Grant permissions
Assign the newly created users the required grants/permissions to log in, create tables, and insert, index, select, update and delete data in tables in the newly created database. This is the minimum set of SQL server permissions.
Setting up the JDBC driver
Download and copy the sqljdbc4 Microsoft SQL JDBC driver file to the WSO2 product's <PRODUCT_HOME>/repository/components/lib/ directory. Use com.microsoft.sqlserver.jdbc.SQLServerDriver as the <driverClassName> in your datasource configuration in the <PRODUCT_HOME>/repository/conf/datasources/master-datasources.xml file, as explained in the next section. In WSO2 IoT Server, copy the driver file to the <IOTS_HOME>/lib directory.
What's next
By default, all WSO2 products are configured to use the embedded H2 database. To configure your product with MSSQL, see Changing to MSSQL.
https://docs.wso2.com/display/IoTS310/Setting+up+Microsoft+SQL
2020-08-03T12:04:49
CC-MAIN-2020-34
1596439735810.18
[]
docs.wso2.com
Want to work for Microsoft UK?
Microsoft Services in the UK are actively recruiting.
“Here at Microsoft U.K., we help change the way the world lives, works and plays. Explore our job opportunities and, if you like what you see, consider joining us—our people are some of the most extraordinary people you’ll ever meet.”
I’ve been at Microsoft for about 6 years now and can’t recommend it highly enough. Please let me know if you are interested (or take a look at Careers at Microsoft).
https://docs.microsoft.com/en-us/archive/blogs/douggowans/want-to-work-for-microsoft-uk
2020-08-03T12:57:14
CC-MAIN-2020-34
1596439735810.18
[]
docs.microsoft.com
Create forms in Splunk Light
A form is a dashboard that provides user inputs to the search. User inputs include components such as a list, a button, or a text box. A form has all the properties and behaviors of a dashboard. The image shows a dropdown box, multiselect, and checkbox inputs added to a dashboard panel to make a form.
Convert a dashboard panel to a form
You can create and edit a form with the Dashboard Editor. To create a form, create a dashboard panel and then add a user input component to convert it to a form.
- Open the dashboard panel that you want to convert to a form.
- Select Edit.
- From the Add Input menu, select one or more inputs.
- For each input that you add, edit the input behavior.
  - Click the pencil icon to open the edit window.
  - Click Apply to save.
- (Optional) Drag the inputs to rearrange them.
- (Optional) Drag an input into a panel to specify an input applicable only to that panel.
For more information about inputs and forms, see Create and edit forms in the Splunk Enterprise Dashboards and Visualizations manual.
https://docs.splunk.com/Documentation/SplunkLight/7.3.6/GettingStarted/Creatingforms
2020-08-03T11:55:38
CC-MAIN-2020-34
1596439735810.18
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
When a database has outlived its use, terminate it to reduce costs and streamline operations. When you terminate a database, all data is permanently deleted and cannot be recovered.
Note: If you have administrator privileges in DataStax Astra, you can terminate databases created by any member of your organization.
Procedure
- Open a browser, navigate to DataStax Astra, and log in.
- From the Databases page, under Actions, select the ellipsis (...) for the database you want to terminate and select Terminate. The Terminate Database window displays and indicates the estimated time required to terminate the database. WARNING: All data in the database will be permanently deleted. You cannot restart the database. Proceed with caution.
- To terminate the database, enter the name of the database and select Terminate Database.
Results
The database status changes to Terminating. The database will be terminated and removed from the list of available databases.
What's Next
To continue development, create a new database or connect to another database.
https://docs.astra.datastax.com/docs/terminating-databases
2020-08-03T13:00:54
CC-MAIN-2020-34
1596439735810.18
[]
docs.astra.datastax.com
Your Bamboo Account Manager may send you a *.lic file that you can use to extend your Bamboo product trial. The *.lic file needs to be placed in the folder on each WFE server where the product activation DLL resides. For SharePoint 2010 or SharePoint 2013, this is typically the Global Assembly Cache (GAC). If you receive a *.lic file, follow these instructions to install it.
https://docs.bamboosolutions.com/document/use_the_provided_-lic_file_to_an_extend_your_products_trial_period/
2020-08-03T11:45:03
CC-MAIN-2020-34
1596439735810.18
[]
docs.bamboosolutions.com
This hack must be required by a mod to be enabled. This hack allows mods to customise the headlight and taillight brightness for each level of the game.
Requiring This Hack
To require this hack, add this line to your mod's Meta.ini:
RequiredHack=CustomHeadlights
Your mod must provide a configuration file when requiring this hack.
Configuring This Hack
To configure this hack, create a file named CustomHeadlights.ini and add the parameters necessary for your mod inside it.
; [CustomHeadlights] Section: Set Headlight Brightness Per Level
; 0 to 6: Regular Levels
; 8 to 14: Bonus Levels
; [CustomTaillights] Section: Set Taillight Brightness Per Level
; 0 to 6: Regular Levels
; 8 to 14: Bonus Levels
[CustomHeadlights]
; Default Headlight Brightness for Main Levels 1-7
0=0
1=0
2=0.2
3=0.6
4=0.4
5=0.4
6=0.6
; Default Headlight Brightness for Bonus Game Levels 1-7
8=1
9=1
10=1
11=1
12=1
13=1
14=1
[CustomTaillights]
; Default Taillight Brightness for Main Levels 1-7
0=0
1=0
2=0.3
3=0.4
4=0.4
5=0.4
6=0.4
; Default Taillight Brightness for Bonus Game Levels 1-7
8=0.4
9=0.4
10=0.4
11=0.4
12=0.4
13=0.4
14=0.4
Version History
1.2 or earlier
Added this hack.
https://docs.donutteam.com/docs/lucasmodlauncher/hacks/custom-headlights
2020-08-03T12:22:34
CC-MAIN-2020-34
1596439735810.18
[]
docs.donutteam.com
Migration Guide This will guide you through the changes coming from AR Foundation 3.x to 4.x. Camera and Tracking Mode Selection In previous versions of ARFoundation, it was not possible to explicitly select which camera provided the camera texture used for pass-through video, nor was it possible to indicate the desired type of tracking (e.g., 3 or 6 degrees of freedom). With ARFoundation 4, you can now - Explicitly request a particular camera. ARFoundation distinguishes between "world facing" and "user facing" cameras. On a phone, this corresponds to the rear facing and selfie cameras, respectively. - Explicitly request the type of tracking. In previous versions of ARFoundation, this was an implicit choice. For example, if you enabled face detection, you would likely get a 3dof (rotation only) session which used the user-facing camera. ARKit 3 added modes of operation that were not possible to express with previous versions of ARFoundation. In particular, ARKit 3 supports both 3dof and 6dof tracking when using the user-facing camera to perform face tracking, as well as a mode that uses the world-facing camera for the video feed but also uses the user-facing camera to perform face tracking. This table represents the face tracking support ARFoundation 3 could provide: * This was only available by enabling another feature which required the world-facing camera, such as plane detection. This table represents what ARKit3 supports: As you can see, there was no way to say "I want the user-facing camera with 6 dof tracking", and it was awkward to say "I want face tracking with the world-facing camera". Not all features are available simultaneously. For instance, on ARCore, face and plane detection are mutually exclusive features; you cannot have both at the same time. On ARKit, it depends on the hardware. However, there was no way to know what would happen if you enabled a particular combination of features. "Face tracking with the world-facing camera" requires particular hardware only available on newer iOS devices, and there was no way to query for support in a generic way. Configuration Chooser To solve this problem, we've added methods to enumerate each discrete mode of operation (called a "configuration descriptor") and list its capabilities. All capabilities in a configuration descriptor are simultaneously available on the current device. ARFoundation 4 also introduces the concept of a "configuration chooser", which uses the current set of requested features (e.g., "auto focus", "plane detection", and "face detection") and selects the best available configuration descriptor. This also allows you to query what would happen if you were to enable a particular set of features before enabling them. The default configuration chooser simply chooses the configuration that supports the most number of requested features, but you can also implement a custom configuration chooser to perform your own logic. See XRSessionSubsystem.configurationChooser in the ARSubsystems package. Adapting an existing project The breaking change here is mostly behavioral. In previous versions, face tracking tended to trump other features, so enabling face tracking would give you a 3dof user-facing camera experience. Now you must be explicit, and the default mode is 6 dof, world-facing. This means previous apps which used face tracking may not work the same as they did before. In addition to enabling face tracking, you may also need to specify the camera and tracking mode to achieve the same result. 
Note there is a "don't care" mode, which means tracking will not be a consideration when choosing the configuration. In ARFoundation 4, the ARCameraManager (a component on a camera in your scene) controls which hardware camera is requested, and the ARSession component controls which tracking mode is requested. There is also a new scripting API for these modes. Requested vs Current In previous versions of ARFoundation, several features had simple getters and setters. For instance, the ARCameraManager had an autoFocusMode property that you could set. This, however, did not distinguish between what had been requested and what the actual mode was. Note that it is possible that auto focus mode is not supported in the current state (ARKit's ARFaceTrackingConfiguration, for instance, does not support auto focus). ARFoundation 3 was inconsistent in its handling of such properties. Some properties only had setters, and, of those that had getters, some getters returned what had been requested, while others returned the actual value by querying the underlying platform SDK. ARFoundation 4 makes this explicit. Parameters now have "requested" and "current" properties. "Requested" properties are both gettable and settable while the "current" property is readonly. The "current" properties tell you what is actually enabled in the underlying SDK, not just what you have requested. XRCameraImage is now XRCpuImage The Texture2D(s) that represent the pass-through video are typically GPU textures. Computer vision or other CPU-side processing applications require access to the raw pixels on the CPU; however, the normal Texture2D APIs for reading pixels do not work unless the data is first transferred back from the GPU, which can be costly. Fortunately, AR frameworks like ARCore and ARKit provide a way to access the raw pixel data on the CPU without the costly GPU readback, and ARFoundation provided this data as an XRCameraImage. This "camera image" API still exists, but it has been generalized and renamed from XRCameraImage to XRCpuImage. The API is very similar, but it can now be used to read other types of image data, such as the human depth and human stencil buffers provided by the AROcclusionManager. Changes XRCameraImageis now XRCpuImage XRCameraImage.FormatSupportedwas a static method which determined whether the camera image could be converted to a given TextureFormat. Since XRCpuImagecan handle different types of images, that method is now an instance method on the XRCpuImage. ARCameraManager.TryGetLatestImage(out XRCameraImage)is now ARCameraManager.TryAcquireLatestCpuImage(out XRCpuImage). Note the parameter change from XRCameraImageto XRCpuImageand the method name more accurately describes the lifetime and caller responsibility of the XRCpuImageresource (though its behavior remains unchanged). - The AROcclusionManagerhas two new methods: TryAcquireHumanStencilCpuImage(out XRCpuImage) TryAcquireHumanDepthCpuImage(out XRCpuImage) Sample The "CpuImages" sample from the ARFoundation Samples GitHub repo shows how to use the cpu image API and has been updated to include the color camera image and the human depth and human stencil images. XR Plug-in Management ARFoundation now depends on XR Plug-in Management. This affects both edit and runtime: Edit time setup Provider plugins must be enabled before AR Foundation can use them. 
XR Plug-in Management provides a UI (Project Settings > XR Plug-in Management) to enable specific plug-in providers for each target platform: Runtime In previous versions of ARFoundation, each manager-component (e.g., ARSession, ARPlaneManager, ARPointCloudManager, etc.) fully controlled the lifecycle of each subsystem (a subsystem is the platform agnostic interface implemented by each provider package). In ARFoundation 4, XR Plug-in Management controls the creation and destruction of the subsystems. ARFoundation's components still start and stop the subsystems, but do not create or destroy them: This means, for instance, destroying an ARSession component pauses but does not destroy the session. This can be desirable when, for example, switching between two AR scenes. However, it is a change from the previous behavior. Destroying an ARSession and recreating it will pause and then resume the same session. If you want to completely destroy the session, you need to destroy the subsystem. This means calling Deinitialize on an XRLoader from XR Plug-in Management. There is a new utility in ARFoundation to facilitate this: LoaderUtility. Simply call LoaderUtility.Initialize(); or LoaderUtility.Deinitialize(); to create or destroy subsystems, respectively.
https://docs.unity3d.com/Packages/[email protected]/manual/migration-guide-3.html
2020-08-03T11:45:56
CC-MAIN-2020-34
1596439735810.18
[array(['images/enable-arcore-plugin.png', 'XR Plug-in Management XR Plug-in Management'], dtype=object)]
docs.unity3d.com
Declare custom REST endpoints
You can extend Release by creating new endpoints backed by Jython scripts. You can use this feature, for example, to integrate with other systems. To declare new endpoints, add a file called xl-rest-endpoints.xml in the classpath of your Release server. This file can be in the JAR file of a custom plugin or in the XL_RELEASE_SERVER_HOME/ext directory. For example:
<?xml version="1.0" encoding="UTF-8"?>
<endpoints xmlns:
<endpoint path="/test/demo" method="GET" script="demo.py" />
<!-- ... more endpoints can be declared in the same way ... -->
</endpoints>
After processing this file, Release exposes the declared endpoints; when one of them is called, the referenced Jython script is executed to handle the request and produce a response.
Objects available in the context
In a script, you have access to Release services and to the following objects:
- Request: JythonRequest
- Response: JythonResponse
For more information, see.
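To illustrate what such a script can look like, below is a minimal sketch of a demo.py that could back the /test/demo endpoint declared above. Release makes the request and response objects (JythonRequest and JythonResponse) available in the script context; the attribute names used here (entity and statusCode) are assumptions for illustration, so verify them against the Release scripting reference before relying on them.

# demo.py - hypothetical handler for GET /test/demo
# 'request' and 'response' are injected by Release into the script context.

# Build a simple payload and hand it back to the caller.
# The attribute names below (entity, statusCode) are assumed, not confirmed.
response.entity = {"message": "Hello from a custom endpoint"}
response.statusCode = 200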
https://docs.xebialabs.com/v.9.7/release/how-to/declare-custom-rest-endpoints/
2020-08-03T12:56:55
CC-MAIN-2020-34
1596439735810.18
[]
docs.xebialabs.com
Display Name on a Per-campaign Basis
This functionality enables you to specify a Display Name (in addition to CPN Digits) when dialing calls in an outbound campaign. The Display Name can be set at the level of the OCS application, Campaign Group, or individual record or chain of records.
Starting in release 8.1.1, OCS can use the CPNDisplayName configuration option to specify the name to be displayed. When dialing with SIP Server, this value is passed to SIP Server as the DisplayName parameter in the AttributeExtensions. When dialing with CPD Server in HMP transfer mode, this option is supported only if the CPD Server option tscall is set to true/yes.
For an individual record, or for a chain of records, this option can be set using the set_flex_attr custom action of the SCXML treatment. See Setting Options for Individual Records or Chain of Records for more information about custom actions.
https://docs.genesys.com/Documentation/OU/latest/Dep/DisplayNamePercampaign
2019-08-17T14:53:38
CC-MAIN-2019-35
1566027313428.28
[]
docs.genesys.com
Before following these steps: Please make sure you have migrated your account to SOS via the following link.
In order to complete the migration process, you will need to do the following:
- Uninstall Malwarebytes Secure Backup
- Download and install OBRM
Uninstalling Malwarebytes Secure Backup:
Windows Vista/7
- Open Programs and Features by clicking the Start button in the lower left, clicking Control Panel, clicking Programs, and then clicking Programs and Features.
- Select Malwarebytes Secure Backup, and then click Uninstall. If you are prompted for an administrator password or confirmation, type the password or provide confirmation.
- Follow the instructions on the screen.
Windows 8/8.1
- Press and hold (or right-click) on Malwarebytes Secure Backup, and then tap or click Uninstall.
- Follow the instructions on the screen.
Windows 10
- On the Start menu select Settings.
- In Settings, select System > Apps & features.
- Select Malwarebytes Secure Backup, and then select Uninstall.
- Follow the directions on the screen.
Installing OBRM
- Download and install OBRM.
- The software may ask you to sign up for a free trial. If it does, click on Log in as User in the lower left.
- Log in with the same information you used to log in to Malwarebytes Secure Backup.
https://docs.infrascale.com/changing-over-from-malwarebytes-secure-backup-to-cloud-backup
2019-08-17T15:12:36
CC-MAIN-2019-35
1566027313428.28
[array(['https://dtv98j71ej9a1.cloudfront.net/Sites/Infrascale/imagebsn/sos1.png', None], dtype=object) array(['https://dtv98j71ej9a1.cloudfront.net/Sites/Infrascale/imagebsn/sos2.png', None], dtype=object) array(['https://dtv98j71ej9a1.cloudfront.net/Sites/Infrascale/imagebsn/sos3.png', None], dtype=object) array(['https://dtv98j71ej9a1.cloudfront.net/Sites/Infrascale/imagebsn/sos4.png', None], dtype=object) ]
docs.infrascale.com
Parameters
Remarks
Examples
See Also
Reference: .NET Framework Tools, Type Library Exporter (Tlbexp.exe), MSIL Disassembler (Ildasm.exe), Strong Name Tool (Sn.exe), SDK Command Prompt
Concepts: Importing a Type Library as an Assembly, Strong-Named Assemblies, Attributes for Importing Type Libraries into Interop Assemblies
Other Resources: Type Library to Assembly Conversion Summary
https://docs.microsoft.com/en-us/previous-versions/dotnet/netframework-2.0/tt0cf3sx(v%3Dvs.80)
2019-08-17T15:41:16
CC-MAIN-2019-35
1566027313428.28
[]
docs.microsoft.com
Using the application
Via the user interface
To launch the application, use the Screenshot item in the Accessories category of Xfce's main menu.
Take a screenshot
Region to capture
The Region to capture section allows you to set what the screenshot will be taken of:
- Entire screen - Takes a screenshot of the whole screen as you see it.
- Active window - Takes a screenshot of the active window. This will be the one that was active before this dialog appeared, or if you set a delay, the one that is active after the delay.
- Select a region - Allows you to select a region to be captured by clicking and dragging a rectangle over the area of screen that you wish to capture, before releasing the mouse button. You can also press Ctrl while dragging to move the rectangle.
- Capturing the pointer - The Capture the mouse pointer option allows you to select whether or not the screenshot will include the mouse pointer.
Delay before capturing
The Delay before capturing section allows you to set the delay that will elapse between pressing the OK button and the screenshot being taken. This delay will allow you to open menus or to perform whatever action you require to see displayed in the screenshot.
After capturing
After pressing the OK button a second window will be displayed:
Preview
The Preview section displays a thumbnail of the screenshot.
Action
The Action section allows you to choose what should be performed on the screenshot.
- Save - The Save option will save the screenshot to a PNG file. A save dialog will be displayed. You will be able to set the save location, and the name of the file. Xfce4 Screenshooter is also able to save the screenshot to any remote file system supported by GVfs, such as FTP, SAMBA, SFTP, remote computers accessible via SSH… You just need to connect this remote file system using gvfs-connect or Gigolo and it will be available in the left column of the save dialog.
- Copy to the clipboard - The Copy to the clipboard option allows you to paste the screenshot in another application, such as a word processor. This option is only available when a clipboard manager is running.
- Open with - The Open with option saves the screenshot to the system's temporary directory and opens it with the application chosen from the drop-down list. Applications which support images are automatically detected and added to the drop-down list.
- Host on Imgur - The Host on Imgur option allows you to host your screenshot on this free online hosting service, so that you can share it easily with other people. Imgur automatically generates a tiny, medium, and full-size image of your screenshot, which can be used to create thumbnails pointing to the full size screenshot.
Host on Imgur
After selecting Host on Imgur and pressing the OK button, you will be shown this dialog: The dialog below will give you the links to the full size screenshot, the large thumbnail, the small thumbnails, as well as examples of HTML, Markdown and BBcode to create a thumbnail pointing to the full size screenshot: The dialog below will give you the links to delete the image from the Imgur website. The link will only be shown once. Make sure to save it if you think you might want to delete this image later. Linking images to Imgur accounts is not currently supported.
To do so, configure the key-bindings of your desktop environment so that it launches xfce4-screenshooter with one or several of the following options when the Prt Scrn key is pressed. The command line options - The -w option - The -w option allows you to take a screenshot of the active window. - The -f option - The -f option allows you to take a screenshot of the entire screen. - The -r option - The -r option allows you to select a region to be captured by clicking and dragging a rectangle over the area of screen that you wish to capture, before releasing the mouse button. - The -d option - The -d option followed by a positive integer allows you to set the delay before taking the screenshot when the -w, the -f or the -r option is given. - The -s option - The -s option followed by the path to an existing folder allows you to set where the screenshots are saved. This option only has an effect if the -w, the -f or the -r option is given. - The -o option - If the -o option is given, followed by an application name, the screenshot will be saved to the system's temporary directory and opened with the application whose name is to be given after -o. This option only has an effect if the -w, the -f or the -r option is given. - The -u option - If the -u option is given, the screenshot will be hosted on Imgur. See above for more details. This option only has an effect if the -w, the -f or the -r option is given.
https://docs.xfce.org/apps/screenshooter/usage
2019-08-17T15:37:18
CC-MAIN-2019-35
1566027313428.28
[]
docs.xfce.org
May 28, 2016
If you experience difficulties when searching for recordings using the Session Recording Player, the following error messages may appear on the screen:
Search for recorded session files failed. The remote server name could not be resolved: servername. where servername is the name of the server to which the Session Recording Player is attempting to connect. The Session Recording Player cannot contact the Session Recording Server. Two possible reasons for this are an incorrectly typed server name or a server name that DNS cannot resolve. Resolution: From the Player menu bar, choose Tools > Options > Connections and verify that the server name in the Session Recording Servers list is correct. If it is correct, from a command prompt, run the ping command to see if the name can be resolved.
Unable to contact the remote server. This error occurs when the Session Recording Server is down or offline. Resolution: Verify that the Session Recording Server is connected.
Access denied error. An access denied error can occur if the user was not given permission to search for and download recorded session files. Resolution: Assign the user to the Player role using the Session Recording Authorization Console.
Search for recorded session files failed. The underlying connection was closed. Could not establish a trust relationship for the SSL/TLS secure channel. This exception is caused by the Session Recording Server using a certificate that is signed by a CA that the client device does not trust or have a CA certificate for. Resolution: Install the correct or trusted CA certificate on the workstation where the Session Recording Player is installed.
The remote server returned an error: (403) forbidden. This error is a standard HTTPS error that occurs when you attempt to connect using HTTP (nonsecure protocol). The server rejects the connection because, by default, it is configured to accept only secure connections. Resolution: From the Session Recording Player menu bar, choose Tools > Options > Connections. Select the server from the Session Recording Servers list, then click Modify. Change the protocol from HTTP to HTTPS.
Troubleshoot MSMQ
If your users see the notification message but the viewer cannot find the recordings after performing a search in the Session Recording Player, there could be a problem with MSMQ. Verify that the queue is connected to the Session Recording Server (Storage Manager) and use a Web browser to test for connection errors (if you are using HTTP or HTTPS as your MSMQ communication protocol).
To verify that the queue is connected:
Log on to the server hosting the Session Recording Agent and view the outgoing queues. Verify that the queue to the computer hosting the Session Recording Server has a connected state.
- If the state is “waiting to connect,” there are a number of messages in the queue, and the protocol is HTTP or HTTPS (corresponding to the protocol selected in the Connections tab in the Session Recording Agent Properties dialog box), perform Step 3.
- If the state is “connected” and there are no messages in the queue, there may be a problem with the server hosting the Session Recording Server. Skip Step 3 and perform Step 4.
If there are a number of messages in the queue, launch a Web browser and type the following address: - For HTTPS:, where servername is the name of the computer hosting the Session Recording Server - For HTTP:, where servername is the name of the computer hosting the Session Recording Server If the page returns an error such as The server only accepts secure connections, change the MSMQ protocol listed in the Session Recording Agent Properties dialog box to HTTPS. Otherwise, if the page reports a problem with the Web site’s security certificate, there may be a problem with a trust relationship for the SSL/TLS secure channel. In that case, install the correct CA certificate or use a CA that is trusted. If there are no messages in the queue, log on to the computer hosting the Session Recording Server and view private queues. Select citrixsmauddata. If there are a number of messages in the queue (Number of Messages Column), verify that the Session Recording StorageManager service is started. If it is not, restart the service.
https://docs.citrix.com/en-us/xenapp-and-xendesktop/7-6-long-term-service-release/xad-monitor-article/xad-session-recording/xad-sr-trouble/xad-sr-trouble-play-search.html
2019-08-17T16:03:25
CC-MAIN-2019-35
1566027313428.28
[]
docs.citrix.com
Schedule and manage a followup call
Agents using Agent Voice Portal may have the option to schedule a followup call with a consumer at a later date. This functionality, when enabled, is accessed from the Manual Dial screen.
To schedule a followup call while in the Talking state:
- While in a talking state, click the Manual icon. The Manual screen appears.
- In the Device field, enter or select the device to be used for the followup call.
- From the Time Zone drop down list, select the appropriate time zone. Note that time zones default to the time zone of the individual agent.
- From the Contact Time drop downs, select the hour (military time, 0-23) and the minutes (in five minute increments).
- From the Date drop down, select the date on which the call is to be made (default is "Today"). Note that the agent can schedule the call only as far into the future as is defined in the Agent Group (default is 14 days, maximum is 30 days).
- Click Schedule Call. A confirmation dialog appears, and the agent is returned to the Manual screen where they can schedule another call, or return to the Call Information screen or Disposition Code screen to end the call.
To schedule a followup call while in the After Call Work state:
- While in the After Call Work state, click the Submit and Followup button. The Manual screen appears in the Followup state.
- Schedule a followup call to the client record appearing in the Device field or enter the device for the call.
- Continue, following the steps outlined above.
Reschedule or Delete Scheduled Followups
Agents can reschedule or delete (cancel) followups assigned to them using the Scheduled Followups link. When the agent searches for scheduled followups, only the followups assigned to that agent will display. The agent can then edit the date, time, or time zone of the followup or cancel the followup. In some cases, it may not be possible to edit or delete a scheduled followup, especially one that is due very soon. In this case, the agent will see the error message “This followup cannot be rescheduled.”
More about Scheduled Followups
Learn more about Scheduled Followups in the following sections of the Help:
https://docs.genesys.com/Documentation/EGAG/latest/EGAGhelp/Followups
2019-08-17T15:42:54
CC-MAIN-2019-35
1566027313428.28
[]
docs.genesys.com
ODBC contains release-independent directories which are denoted by ODBC_32 and ODBC_64. Their sub-directories are symbolic links to the respective 32-bit and 64-bit Teradata Tools and Utilities release directories installed during the installation of the current ODBC. ODBC_32 and ODBC_64 contain their respective odbc.ini and odbcinst.ini files which are free of any use of a Teradata Tools and Utilities release path. It is required that these templates be used when setting up the odbc.ini with the desired DSN settings. Otherwise, the user will be continuously updating the .ini files when different Teradata Tools and Utilities releases are installed. In particular, the path values in the odbc.ini template for InstallDir and Driver must be the ones defined in the template; they cannot be modified. Whenever a new Teradata Tools and Utilities release is installed, the sub-directories are updated with the new symbolic links. This release will be the Active TTU of the system. See MultiVersion Support for more information. The following table shows the release-independent directories. On UNIX or Linux systems, the Teradata Tools and Utilities installation includes an option to uninstall previous versions of existing Teradata Tools and Utilities software. Any Teradata Tools and Utilities release version prior to 15.10.01 must be uninstalled before installing ODBC Driver for Teradata. This does not apply to efixes, because efixes are installed as an upgrade. For information about installing operating system-specific utilities, see the documentation listed on. Depending on the Teradata Tools and Utilities version installed, the actual value represented by <ttu version> might display as the version number.
https://docs.teradata.com/reader/pk_W2JQRhJlg28QI7p0n8Q/7M~sCJ1enKO~NUbN77mv2w
2019-08-17T14:49:08
CC-MAIN-2019-35
1566027313428.28
[]
docs.teradata.com
Miscellaneous
Apache HttpClient
Apache HttpClient doesn't respect JVM system properties for things such as the proxy and truststore settings. Therefore when you build one you would need to:
HttpClient httpClient = HttpClients.createSystem();
// or
HttpClient httpClient = HttpClientBuilder.create().useSystemProperties().build();
Or on older versions you may need to:
HttpClient httpClient = new SystemDefaultHttpClient();
In addition, Hoverfly should be initialized before Apache HttpClient to ensure that the relevant JVM system properties are set before they are used by the Apache library to configure the HttpClient. There are several options to achieve this:
- Use @ClassRule, which guarantees that HoverflyRule is executed at the very start and end of the test case
- If using @Rule is unavoidable, initialize the HttpClient inside your @Before setUp method, which is executed after @Rule
- As a last resort, you may want to manually configure Apache HttpClient to use a custom proxy or SSL context; please check out the HttpClient examples
Legacy Schema Migration
If you have recorded data in the legacy schema generated before hoverfly-junit v0.1.9, you will need to run the following commands using Hoverfly to migrate to the new schema:
$ hoverctl start
$ hoverctl delete simulations
$ hoverctl import --v1 path-to-my-json/file.json
$ hoverctl export path-to-my-json/file.json
$ hoverctl stop
https://hoverfly-java.readthedocs.io/en/0.3.8/pages/misc/misc.html
2019-08-17T15:38:36
CC-MAIN-2019-35
1566027313428.28
[]
hoverfly-java.readthedocs.io
How to configure Django settings
It is important to understand that in Divio Cloud projects, some settings need to be inspected and manipulated programmatically, to allow the addons system to handle configuration automatically. See the Settings that are configured by addons section for more on this. This can entail a little extra work when you need to change settings yourself, but the huge convenience it offers is more than worth the effort.
The correct way to manage settings such as INSTALLED_APPS is to manipulate the existing value, after having loaded the settings from the addons with aldryn_addons.settings.load(locals()). For example, in the default settings.py you will find:
import aldryn_addons.settings
aldryn_addons.settings.load(locals())
INSTALLED_APPS.extend([
    # add your project specific apps here
])
This allows you to add items to INSTALLED_APPS without overwriting existing items, by manipulating the list. You will need to do the same for other configured settings, which will include:
- MIDDLEWARE (or the older MIDDLEWARE_CLASSES) - a MIDDLEWARE example is sketched at the end of this page
- TEMPLATES (or the older TEMPLATE_CONTEXT_PROCESSORS, TEMPLATE_DEBUG and other template settings)
- application-specific settings, for example those that belong to django CMS or Wagtail. See each application's Addon configuration with aldryn_config.py for the settings it will configure.
Inserting an item at a particular position
Sometimes it's not enough just to add an application or class to a list. It may need to be added before another item. Say you need to add your application security just before cms. In this case you can target cms in the list like this:
INSTALLED_APPS.insert(
    INSTALLED_APPS.index("cms") + 0,
    "security"
)
( + 0 will insert the new item "security" immediately before "cms" in the list). Of course you can use Python to manipulate the collections in any way you require.
Manipulating more complex settings
Note that in the case of more complex settings, like TEMPLATES, which is no longer a simple list, you can't just extend them directly with new items; you'll need to dive into them to target the right list in the right dictionary, for example:
TEMPLATES[0]["OPTIONS"]["context_processors"].append('my_application.some_context_processor')
Listing applied settings
The Django diffsettings management command will show the differences between your settings and Django's defaults, for example with:
docker-compose run --rm web python manage.py diffsettings
In some projects (with addons that manipulate settings late in the start-up process), you may get an error: RuntimeError: dictionary changed size during iteration. In this case you can run a script to print out your settings:
from django.conf import settings
settings.configure()
django_settings = {}
for attr in dir(settings):
    value = getattr(settings, attr)
    django_settings[attr] = value
for key, value in django_settings.items():
    if not key.startswith('_'):
        print('%s = %r' % (key, value))
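As referenced in the list of configured settings above, the same pattern used for INSTALLED_APPS also applies to MIDDLEWARE. The sketch below is illustrative only: my_application.middleware.TimingMiddleware is a hypothetical dotted path, and it assumes the addons have already populated MIDDLEWARE as a list.
# settings.py, after aldryn_addons.settings.load(locals()) has run

# Append a (hypothetical) middleware class without overwriting what the addons configured.
MIDDLEWARE.append('my_application.middleware.TimingMiddleware')
If ordering matters, use the same MIDDLEWARE.insert(MIDDLEWARE.index(...) + 1, ...) idiom shown above for INSTALLED_APPS to place the class relative to an entry you know is present.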
http://docs.divio.com/en/latest/how-to/configure-settings.html
2019-08-17T15:10:13
CC-MAIN-2019-35
1566027313428.28
[]
docs.divio.com
Constructor
new Font2dDesc(advanceDir, flowDir, tileSize, quality [optional], minAlpha [optional])
Parameters:
- advanceDirection : lumin.glyph.AdvanceDirection - Direction to the next glyph along the baseline.
- flowDirection : lumin.text.FlowDirection - Direction from one line of text to the next.
- minAlphaToDiscard : number - Control the minimum alpha value rendered in each glyph of the font. Values above 0.2 will cause aliasing to display around the edges of some fonts. Depending on the use case, however, this may be acceptable in return for improved blending of overlapping letters (e.g., as is the norm in a cursive font). Values much below 0.1, however, can lead to visible edges from overlapping glyphs in the rendered text.
- quality : lumin.glyph.Quality - Control the quality of the rendered text. Note: kStd does not require glyph pre-processing of fonts. The other quality levels require the font file to be pre-processed (e.g., with the "prefont" tool; see /tools/prefont/). In that case, the tileSize field above must match that of the pre-processed resources, or the Font2dResource will not load correctly.
- tileSize : number - The size of all glyph images is given by this parameter; e.g., 32, which implies each glyph will occupy a 32x32 region of a glyph sheet.
Static members:
- DEFAULT : lumin.glyph.Font2dDesc (static, constant)
https://docs.magicscript.org/api_1.4.0/lumin.glyph.Font2dDesc.html
2019-08-17T14:44:21
CC-MAIN-2019-35
1566027313428.28
[]
docs.magicscript.org
Class humhub\widgets\ModalConfirm
ModalConfirmWidget shows a confirm modal before calling an action. After successful confirmation this widget returns the response of the called action, so be sure to write a workflow for that inside your controller action (for example: close the modal, reload the page, etc.).
Public Properties
Public Methods
Property Details
- Button name for canceling
- Button name for confirming
- Contains optional JavaScript code to execute after the user clicked the TrueButton
- By default (when it remains empty), the modal content will be replaced with the content from $linkHref
- Classes for the displaying link
- Content for the displaying link
- Original path to view
- Define the output element
- Tooltip text
- Message to show
- Contains optional JavaScript code to execute after the modal has been made visible to the user
- Style for the displaying link
- Title to show
- Message to show
http://docs.humhub.org/humhub-widgets-modalconfirm.html
2019-08-17T15:31:23
CC-MAIN-2019-35
1566027313428.28
[]
docs.humhub.org
Welcome to the AWS Directory Service API Reference
AWS Directory Service is a web service that makes it easy for you to set up and run directories in the AWS cloud, or connect your AWS resources with an existing on-premises Microsoft Active Directory. This guide provides detailed information about AWS Directory Service operations, data types, parameters, and errors. For information about AWS Directory Service features, see AWS Directory Service and the AWS Directory Service Administration Guide.
Note: AWS provides SDKs that consist of libraries and sample code for various programming languages and platforms (Java, Ruby, .Net, iOS, Android, etc.). The SDKs provide a convenient way to create programmatic access to AWS Directory Service and other AWS services. For more information about the AWS SDKs, including how to download and install them, see Tools for Amazon Web Services.
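As a small illustration of the programmatic access mentioned in the note, here is a sketch that uses the AWS SDK for Python (boto3), one of the SDKs referred to above, to list the directories in an account. It assumes boto3 is installed and that AWS credentials and a default region are already configured in your environment; the printed fields are a minimal selection from the DescribeDirectories response.

import boto3

# Create an AWS Directory Service client ("ds" is the boto3 service name).
client = boto3.client("ds")

# DescribeDirectories returns descriptions of the directories owned by this account.
response = client.describe_directories()

for directory in response.get("DirectoryDescriptions", []):
    print(directory["DirectoryId"], directory.get("Name"))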
https://docs.aws.amazon.com/directoryservice/latest/devguide/welcome.html
2019-07-16T02:53:24
CC-MAIN-2019-30
1563195524475.48
[]
docs.aws.amazon.com
Disclaimer
- This wiki is based on the Ansible-playbook repository. Please review The Requirements before installing.
- The installer is meant to be installed on a clean OS. I do not take any responsibility for installations on a system with already existing software and/or mission critical services.
- You are responsible for the security of your server. This includes using strong passwords and storing them safely. This wiki includes information on security hardening; please make sure you follow the steps in the Security Hardening section to improve your full node's security.
- Refer to the Appendix for extra configuration options for your full node.
- You are responsible for add-on software such as Nelson and Field; support for these can be found on the respective channels on IOTA's Discord.
- I am not associated with the IOTA Foundation. I am simply an enthusiastic community member.
https://iri-playbook.readthedocs.io/en/master/disclaimer.html
2019-07-16T03:00:46
CC-MAIN-2019-30
1563195524475.48
[]
iri-playbook.readthedocs.io
The In-House Report displays guest, booking and folio details for all guest bookings that are "in-house" on a specific date. This means that you can choose a date in the past to see what the in-house guest balances were on that particular date. The report also gives the option to display guests that are active and currently in-house regardless of the date chosen.
The In-House Report is useful for seeing a summary of balances due and can be used in conjunction with Managers and Accounting Reports. For example, use the In-House Report with the Transactions Report to help account for the difference between amounts charged and payments received on a specific date in the past. Such differences are typically due to things like advance deposits, payment of a booking at check-in, and advance charges made to a future booking. These transactions often occur on dates other than the date the room rent for the booking is charged.
To see a report of guest, booking and folio details for all in-house (active) guests, see the In-House Guest Ledger.
The In-House Report displays the following information: Advance Room Payment / Charge
https://docs.bookingcenter.com/display/MYPMS/In-House
2019-07-16T02:12:50
CC-MAIN-2019-30
1563195524475.48
[]
docs.bookingcenter.com
Using jconsole (JMX) with SSL encryption
Using jconsole with SSL requires the same JMX changes to cassandra-env.sh as with nodetool. See using nodetool (JMX) with SSL encryption. You do not need to create nodetool-ssl.properties, but you must use the same JVM keystore and truststore options on the Jconsole command line.
Prerequisites
Prepare SSL certificates with a self-signed CA for production, or prepare SSL certificates for development. Additionally, configure client-to-node encryption.
Procedure
- Copy the keystore and truststore files created in the prerequisite to the system where jconsole is launched.
- Launch jconsole with the same JVM keystore and truststore options on the command line; jconsole launches a session with the node. If connecting to a remote node, enter the hostname and JMX port in Remote Process. If using authentication, enter the username and password.
https://docs.datastax.com/en/ddacsecurity/doc/ddacsecurity/secureDDACJconsoleSSL.html
2019-07-16T02:28:20
CC-MAIN-2019-30
1563195524475.48
[]
docs.datastax.com
Offer Additional Services
In addition to the hosting services and features provided by your plan, you can expand the offering by using the following means:
- Install third-party applications packaged as Plesk extensions and include the services they provide in your hosting plans. When such an extension is installed, the service provided by it is registered in Plesk and is made available for inclusion in hosting plans by the server administrator and resellers: the option corresponding to the new service is listed in hosting plan properties, on the Additional Services tab.
- Add custom options to plans. If you, for example, run an online support service at, and want to include the support option in a service plan, you should set up a custom plan option:
  - Go to Service Plans > Additional Services > Add Service.
  - Specify the service name (Premium support), the service description, and select the option to place a button in the Control Panel with the link to the online service ().
After this is done, a new tab called Additional Services appears in hosting plan settings. It shows your Premium support option, which you or your resellers can select for provisioning to customers.
To add a service provided by an application packaged as an extension:
Install the extension according to the instructions provided in the Deployment Guide, chapter Installing Plesk Extensions, or use the instructions provided by the extension packager.
To add a service as a custom plan option:
- Go to Service Plans > Additional Services tab.
- Click Add Service.
- Specify the following:
  - Service name.
  - Service description.
  - Use custom button for the service. Select this checkbox to place a hyperlink to your online service or a web application in the subscriber's Control Panel.
  - URL attached to the button. Specify the Internet address where the user should be directed after clicking the button. For example:.
  - Background image for the button. If you do not select an image, Plesk will use the default image.
  - Open URL in Plesk. Leave this checkbox cleared if you want the external web resource to open in a new browser window or tab.
  - If you want Plesk to send the customer and subscription information with the HTTP request, specify what information should be included:
    - Subscription ID.
    - Primary domain name associated with a subscription.
    - FTP account username and password.
    - Customer's account ID, name, e-mail, and company name.
- Click OK.
If you do not want to let your resellers use an additional service and provision it to their customers:
- Go to Service Plans > Additional Services tab.
- Select a checkbox corresponding to the service and click Make Unavailable.
To let resellers use an additional service and provision it to their customers:
- Go to Service Plans > Additional Services tab.
- Select a checkbox corresponding to the service and click Make Available.
To remove a custom plan option from service plan properties:
- Go to Service Plans > Additional Services tab.
- Select a checkbox corresponding to the service and click Remove Service.
To remove an additional service provided by an extension:
Remove the extension from Plesk.
https://docs.plesk.com/en-US/onyx/administrator-guide/customers-and-resellers/hosting-plans-and-subscriptions/setting-up-hosting-plans/offer-additional-services.70726/
2019-07-16T02:32:06
CC-MAIN-2019-30
1563195524475.48
[]
docs.plesk.com
Planned Maintenance widget
Describes any planned system maintenance. The widget gathers information from the cmdb_ci_outage table. Any planned maintenance within the following five days appears in the Planned Maintenance widget.
Figure 1. Planned Maintenance widget
Instance options
The Planned Maintenance widget does not have any included instance options.
https://docs.servicenow.com/bundle/istanbul-servicenow-platform/page/build/service-portal/concept/planned-maintenance-widget.html
2019-07-16T02:51:56
CC-MAIN-2019-30
1563195524475.48
[]
docs.servicenow.com
An Act to renumber 46.536; to amend 20.435 (5) (cf); and to create 46.536 (2) of the statutes; Relating to: dementia training grants for mobile crisis teams and making an appropriation. (FE)
Bill Text (PDF)
Fiscal Estimates
Wisconsin Ethics Commission information
2015 Assembly Bill 790 - Enacted into Law
http://docs-preview.legis.wisconsin.gov/2015/proposals/sb694
2019-07-16T02:09:22
CC-MAIN-2019-30
1563195524475.48
[]
docs-preview.legis.wisconsin.gov
https://docs.huihoo.com/java/ee/javaeetutorial5/doc/Servlets10.html
2019-07-16T02:25:08
CC-MAIN-2019-30
1563195524475.48
[]
docs.huihoo.com
Edit your community profile display settings
Edit your display settings to determine who can view each profile section.
Before you begin
Role required: sn_communities.community_user, sn_communities.admin or sn_communities.moderation_admin
About this task
You can edit your own display settings for your community profile. If a community administrator finds inappropriate content on your profile, they can edit your display settings to hide the content from the community until it is corrected.
Procedure
- On the community homepage, click the Community menu and then click Community Profile.
- Click the ... icon and then click Display Settings.
- Open the choice lists and select display settings for any section. Choose the privacy level to apply.
Table 1. Privacy levels
- Everyone: Visible to all users, including non-logged in users.
- Only me: Visible to yourself and administrators. If an administrator finds inappropriate content on your profile, the administrator can change the display settings to Only me until the content is modified. You receive an email notification detailing the display settings that were modified for which sections. Update the content and adjust the display settings accordingly.
- Everyone after logging in: Visible to all users who access the community after logging in.
- Followers: Visible to users who are following your profile.
- Click Save Settings.
https://docs.servicenow.com/bundle/kingston-customer-service-management/page/product/customer-communities/task/manage-your-display-settings.html
2019-07-16T02:47:47
CC-MAIN-2019-30
1563195524475.48
[]
docs.servicenow.com
tcPDA
Contents
- tcPDA
- Data loading
- Fitting data
- Defining the correction factors
- The fit parameter table
- Performing the fit
- Options and settings
- Using prior information
- Estimating confidence intervals
- Global analysis using two-color data sets
- Considerations for fit performance
tcPDA (three-color photon distribution analysis) is the module for quantitative analysis of three-color FRET data to extract the underlying distance distributions, similar to the PDAFit module for two-color FRET data. It implements the algorithms described in [Barth2018].
Data loading
From BurstBrowser
To process and export burst data from the BurstBrowser module to tcPDA, use the PDA export functionality in BurstBrowser. Load data (.tcpda files) by clicking on the folder icon in the top left.
From text file
As an alternative to the MATLAB-based .tcpda file format, it is also possible to load text-based data into the tcPDA program. The text-based file format is a comma-separated file containing the time bin size in milliseconds and a list of the photon counts per time bin for all detection channels \(N_{BB}, N_{BG}, N_{BR}, N_{GG}, N_{GR}\). The basic structure of the text-based file format with ending .txt is given by the following template.
# place for comments
timebin = 1; # timebin in ms
NBB, NBG, NBR, NGG, NGR
46,54,16,95,28
35,60,17,113,46
38,58,14,82,33
67,51,10,94,23
20,32,6,36,13
32,35,9,46,23
7,37,20,50,14
...
...
As before, load it using the folder icon in the top left, and select "Text-based tcPDA file" in the file dialog.
Fitting data
Left: The Fit Table tab contains options for the fit routine and model functions. Right: The Corrections tab allows you to set the correction factors.
Defining the correction factors
The correction factors can be set in the Corrections tab. If data has been exported from the BurstBrowser module, the values for crosstalk (ct), gamma-factors (gamma), Förster radii (R0) and background counts in kHz (BG) will be automatically transferred to tcPDA and need not be set. However, the value for the direct excitation probabilities (de) is different in tcPDA compared to the intensity-based corrections to photon counts in BurstBrowser (see also the respective section in the manual for PDAFit). The direct excitation probability of dye \(X\) after excitation by laser \(Y\), where \(X,Y \in \{B,G,R\}\), is denoted by \(p_{de}^{XY}\) (called de in the GUI); it is defined based on the extinction coefficients \(\varepsilon_X^{\lambda_Y}\). The other parameters in the Corrections tab are explained in the following sections.
The fit parameter table
The fit parameter table allows you to set the initial values for the parameters. It adapts to the selected number of species, whereby the color of the Amplitude parameter indicates the color of the species in the plots. Check the F option to fix parameters. Lower and upper boundaries (LB and UB) are by default set to 0 and infinity, but can be specified if needed. The other columns are related to the usage of prior information and are described in the respective section.
Note: The amplitudes are normalized by the fit routine, i.e. \(\sum_i A_i = 1\). To reduce the number of free fit parameters, the amplitude of the first species is fixed by default, but can be unfixed by the user.
Performing the fit
Start the fit by clicking the Fit button in the Fit Table tab. The fit options are described in detail here. To simply view the theoretical histograms of the current model, click the View Curve button.
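The fitting steps described below operate on histograms of proximity ratios computed from the per-bin photon counts introduced in the text-file format above. The exact definitions used by tcPDA are those of [Barth2018]; purely for orientation, a common convention for three-color data with alternating excitation is

\[
PR_{GR} = \frac{N_{GR}}{N_{GG} + N_{GR}}, \qquad
PR_{BG} = \frac{N_{BG}}{N_{BB} + N_{BG} + N_{BR}}, \qquad
PR_{BR} = \frac{N_{BR}}{N_{BB} + N_{BG} + N_{BR}}.
\]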
It is usually best to perform the fit of the three-color FRET histogram in three steps of increasing complexity. Since alternating excitation of the blue, green and red lasers is employed, the direct excitation of the green dye is essentially equivalent to performing a two-color FRET experiment, which can be fit first (1D GR tab). Then, one can fit the distribution of the proximity ratios BG and BR while keeping the center value and standard deviation for the distance \(R_{GR}\) fixed (2D BG BR tab). Finally, the full three-dimensional distribution of the proximity ratio GR, BG and BR can be fit by setting all parameters free (3D tab). The following sections describe the individual steps. The Fit Table tab applies to all three tabs (1D GR, 2D BG BR and 3D) in the main window and adapts functionality based on which main window is selected. Fitting the histogram of proximity ratio GR¶ The 1D GR tab allows fitting of the one-dimensional distribution of the proximity ratio GR, disregarding the available three-color information. It is useful to pre-fit the values for \(R_{GR}\) and \(\sigma_{GR}\) and get a first idea of the number of populations required to describe the data. Only the amplitudes and parameters \(R_{GR}\) and \(\sigma_{GR}\) are fit. Fitting of the one-dimensional histogram of proximity ratio GR is always performed using Monte Carlo simulation of the histogram with the Simplex fit algorithm, regardless of the fit options set in the Fit Table tab. The settings Use MLE, Stochastic Labeling and Fit Method thus have no effect. Fitting the two-dimensional histogram of proximity ratios BG and BR¶ Fitting the two-dimensional histogram of the proximity ratios BG and BR. Left: Colormap-representation of the weighted residuals. Right: Mesh-plot of the fit result. Similarly to the 1D GR tab, the 2D BG BR tab is used to fit the two-dimensional distribution of proximity ratios BG and BR. This distribution is sensitive to all three center distances and associated distribution widths, including, albeit indirectly, \(R_{GR}\) and \(\sigma_{GR}\). Leaving \(R_{GR}\) and \(\sigma_{GR}\) as free fit parameters, however, will usually lead to values inconsistent with the related distribution of proximity ratio GR, since this information is excluded from this fit. It is thus required to fix \(R_{GR}\) and \(\sigma_{GR}\), which can be easily done by right-clicking the fit parameter table and selecting Fix GR Parameters. The amplitudes as estimated from the one-dimensional fit may be left free or fixed in this step. Covariances should be fixed in this step and only be refined after center distances and distribution widths have been reasonably fit. As for the 1D GR tab, the fit is always performed using Monte Carlo simulation of the histogram with the Simplex fit algorithm, regardless of the fit options set in the Fit Table tab. Fitting the full three-dimensional histogram of photon counts¶ Fitting the full three-dimensional histogram of the proximity ratios GR, BG and BR. Left: Colormap-representation of the weighted residuals. Right: Mesh-plot of the fit result. The determined parameters should now reasonably describe the data and the refinement of the fit can be performed first using Monte Carlo simulation of histograms. This step is necessary since the maximum likelihood estimator (MLE) will return zero likelihood when the parameter values are too far from their optimum value. Once a region of high likelihood has been reached, one can switch from the Monte Carlo estimation to the MLE. 
To check if an area of high posterior density has been reached, enable the Use MLE checkbox and confirm that the reported log-likelihood logL is not -Inf. For the fitting of the full three-dimensional histogram of photon counts, it is advised to iteratively fix and un-fix parameters. Start e.g. by leaving only the center distances as free parameters, after which the distribution width can be fit. Shortcuts for the fixing and un-fixing of groups of parameters are available by right-clicking the fit parameter table. As a final step, un-fix the covariances. Using the Simplex algorithm, covariances will usually not diverge from their initial zero values. Instead, try using the gradient-based or pattern search fit algorithms to fit the covariances. Finally, all parameters should be left free and refined simultaneously. This step can be time consuming, but is essential to ensure that the obtained result is the correct minimum. Options and settings¶ Left: Options related to the fit procedure. Right: Display of the photon count distribution found in the Corrections tab. Fit options¶ A number of options are available regarding the fit routine and model function. To switch between using the likelihood expression or \(\chi^2_{red.}\) estimator, use the Use MLE checkbox. If checked, the likelihood expression is evaluated and used as the objective function of the fit routine. If unchecked, the \(\chi^2_{red.}\) estimator is used based on Monte Carlo simulation of the shot-noise limited histograms. In this case, the Monte Carlo Oversampling parameter in the Corrections tab determines the used oversampling factor. The Fit Method dropdown menu allows to choose between different optimization algorithms. If the 1D GR or 2D BG BR tabs are selected, the Simplex method is always used, regardless of the choice in the Fit Method dropdown menu. The following fit methods can be chosen: - Simplex: - Based on the fminsearch function of MATLAB. To allow to set lower and upper parameter bounds, the fminsearchbnd function from the MATLAB File Exchange is used instead. - Pattern Search: - Based on the patternsearch function of the Global Optimization Toolbox of MATLAB. This function is less likely to be trapped in a local minimum than the Simplex option. - Gradient-based: - Based on the fmincon function of MATLAB. Uses gradient-based optimization algorithm and is thus generally faster to converge than the Simplex algorithm, but poses a higher risk to end in a local minimum. - Gradient-based (global): - Based on the GlobalSearch algorithm of the Global Optimization Toolbox of MATLAB. Similar to the Gradient-based fit algorithm, but performs multiple optimization runs using the fmincon routine using randomized starting points. The options related to the model function are: - # Species: - Specify the number of species of the model function. - Stochastic labeling: - Toggle the use of the stochastic labeling correction. This option includes a second population for every species with permuted distances \(R_{BG}\) and \(R_{BR}\). Use this option if not all fluorophores are attached site-specifically. The current implementation assumes that the blue fluorophore is attached site-specifically, but that the two acceptor fluorophores are stochastically distributed over the two possible label positions. Specify the value for the fraction F of the species matching the distance assignment in the fit table using the edit box. The permuted species then has contribution (1-F). To fix the stochastic labeling fraction, toggle the F? 
checkbox. If unchecked, the stochastic labeling fraction is optimized by the fit routine as a free parameter, taking the specified value as the initial value. Options that affect the computation time are: - Min N: - Exclude time bins with less photons than the specified value. Inspect the photon count distribution shown at the bottom of the Corrections tab to judge the effect of this selection. - Max N: - Exclude time bins with more photons than the specified value. Inspect the photon count distribution shown at the bottom of the Corrections tab to judge the effect of this selection. - Monte Carlo Oversampling: - Increase this value to reduce statistical noise in the simulated histograms due to the random number generation. The Monte Carlo sampling step will be performed N times, and the final histogram is divided by N. Display options¶ - Live Plot: - If checked, the plots and fit parameter table will update at every fit iteration. If Use MLE is checked, only the fit parameter table is updated since the theoretical histogram is not calculated during the fit process. The Live Plot option will slow down the fit routine due to the time it takes to update the plots. - Plot type: - This option is found in the Corrections tab. Specify the way that two-dimensional histograms are plotted. The Mesh option will plot the data as a surface and show the fit function as a mesh. For the 2D BG BR tab. the weighted residuals are plotted above (see here). The Colormap option will plot only the data in the two-dimensional plots. Weighted residuals are shown using a colormap on the data in a range between -5 and 5, color-coded from red over white to blue. See here to see the effect of this option on the 3D tab. - # Bins for histogram: - Specify the number of bins per dimension used to histogram the data. Since a higher number of bins will result in less counts per bin overall, one should also increase the Monte Carlo Oversampling* parameter when increasing the number of bins to counteract the increased noise due to the reduced sampling. Saving the fit state¶ Saving the current settings and fit parameters using the Save Fit State button will store all information in the .tcpda file. To save different fit states for the same file, right-click the Save Fit State button and select Save Fit State in separate file to store the parameters in a .fitstate file. To load a fit state from an external file, right-click the Save Fit State button and select Load Fit State from separate file. Loaded two-color PDA data sets are also saved upon saving the fit state. Exporting figures¶ To export the fit result to a figure, click the Save Figure button. This will open a new figure window and plot the currently displayed photon count distribution using the colormap display option. Additionally, a second plot will be generated showing the extracted distance distributions and correlation coefficients between the three distance dimensions. Note Before exporting a figure, re-generate the simulated histogram using a high value for the Monte Carlo oversampling parameter to reduce stochastic noise. 
Using prior information¶ In the Bayesian framework, prior information about the parameters can be included into the analysis by the prior distribution: Here, \(P(\theta|D,I)\) is the posterior probability of the parameter vector \(\theta\) given the data \(D\) and the background information \(I\), which is proportional to the probability of the data given the parameters, \(P(D|\theta)\), multiplied by the prior distribution of the parameters given the background information, \(P(\theta|I)\). In tcPDA, one can assign normal prior distribution for each parameter separately using the fit parameter table. Checking the P checkbox for a parameter enables the prior for it. Specify the mean (Prior \(\mu\)) and width (Prior \(\sigma\)) for the parameter as estimated from the available information. Only normally distributed prior are available, and no correlated information about different parameters is accounted for at the moment. Estimating confidence intervals¶ The Bayesian tab allows to estimate confidence intervals of the fitted parameters using Markov chain Monte Carlo (MCMC) sampling. A random walk is performed over the parameter space. At every step, new parameter values are drawn based on the proposal probability distribution given by a Normal distribution with specified width. The change of the parameter vector is accepted with probability given by the ratio of the likelihoods: By tuning the width of the proposal distribution, one can adjust the acceptance probability of the random walk, which should fall into the range of 0.2-0.5. This method to sample the posterior likelihood is called Metropolis-Hasting sampling (called MH in the Sampling method dropdown menu). Additionally, tcPDA implements Metropolis-within-Gibbs sampling (called MWG in the Sampling method dropdown menu). This sampling method performs a Metropolis-Hasting sampling step for all parameters consecutively. The simultaneous re-sampling of all parameters in MH sampling can lead to low acceptance probabilities. By performing the sampling one parameter at a time, the algorithm is less likely to step into regions of low likelihood. MWG is thus to be preferred to MH when many parameters are estimated simultaneously. However, since the likelihood has to be evaluated at every step for every parameter, it is computationally more expensive. Based on the sampled parameter values, the 95% confidence intervals are determined from the standard deviation \(\sigma\) by \(\mathrm{CI}=1.96\sigma\). To obtain independent samples from the posterior distribution, one can define a spacing between the samples used to determine the confidence intervals. Click the Draw Samples button in the Bayesian tab to begin drawing the number of samples from the posterior distribution as specified in the Number of samples input field. Drawing samples takes the same parameters and options defined in the Fit table and Corrections tabs as performing a normal fit, with the difference that the MLE is always used. Choose the sampling method between MH and MWG using the Sampling method dropdown menu. When the Append samples checkbox is selected, the newly drawn samples will be added to the previous run of the random walk. Otherwise, previous results will be overwritten. The display will update every 100 steps and show the evolution of the log-likelihood and all free parameters. The acceptance ratio is shown in the bottom right and periodically updated. 
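To make the sampling scheme described above concrete, here is a generic random-walk Metropolis-Hastings sketch in Python. This is an illustration only: tcPDA performs the sampling internally (in MATLAB/CUDA), and the toy Gaussian log-likelihood below is an assumption, not the tcPDA likelihood. New values are proposed from a normal distribution of specified width and accepted with probability given by the likelihood ratio; the proposal width is what you tune to reach an acceptance ratio of roughly 0.2-0.5.

import math, random

def metropolis_hastings(log_likelihood, theta0, proposal_sigma, n_samples=10000):
    theta = list(theta0)
    logL = log_likelihood(theta)
    samples, accepted = [], 0
    for _ in range(n_samples):
        # propose new parameter values from a normal proposal distribution
        proposal = [t + random.gauss(0.0, s) for t, s in zip(theta, proposal_sigma)]
        logL_new = log_likelihood(proposal)
        # accept with probability min(1, L_new / L_old), evaluated in log space
        if random.random() < math.exp(min(0.0, logL_new - logL)):
            theta, logL = proposal, logL_new
            accepted += 1
        samples.append(list(theta))
    return samples, accepted / float(n_samples)

# toy example: sample a 1D standard-normal posterior; a proposal width of ~2.4 sigma
# is a common rule of thumb that puts the acceptance ratio near 0.4
samples, acc = metropolis_hastings(lambda t: -0.5 * t[0] ** 2, [0.0], [2.4])
values = [s[0] for s in samples]
mean = sum(values) / len(values)
sigma = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
print("acceptance ratio %.2f, 95%% CI = +/- %.2f (1.96*sigma)" % (acc, 1.96 * sigma))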
The sampling width can be set separately for the amplitudes, center distances and distribution width, and can be adjusted to tune the acceptance ratio. Upon completion of the sampling, mean parameter values and 95% confidence intervals will be shown to the right of the parameter plots. Confidence intervals are determined from samples with specified spacing in the Spacing between samples field. This value can be adjusted after the sampling has completed, which will update the displayed confidence intervals. Finally, click the Save samples button to store the result of the MCMC sampling in a text file in the folder of the data. Global analysis using two-color data sets¶ tcPDA also implements a global analysis of two- and three-color FRET data sets. This function is useful if the three-color system has previously been studied using two-color FRET experiments with the same labeling positions. Hereby, the fit functions optimizes a global likelihood over all loaded data sets: The underlying distances are the same between the two- and three-color FRET data sets. However, the correction factors and size of the time bin can be different between the experiments. It is even possible to include two-color FRET information measured using different dyes, assuming that the fluorophores are not influencing the properties of the studied molecule. Currently, only two-color PDA data exported from the BurstBrowser module is supported (.pda files). The correction factors and time bin are read from the .pda file, but can be modified in the table as well. Every two-color data set needs to be assigned the corresponding distance in the three-color FRET system (GR, BG or BR). Toggle the use of a specific data set by unchecking the Use checkbox in the table. Loaded two-color data sets are stored when the fit state is saved. The two-color FRET data sets are shown in the Two-color Data Sets tab. Check the Use 2C PDA data (global analysis) to include the two-color information in the fit. The Normalize likelihood for global analysis checkbox will use the geometric mean of the individual likelihoods to assign equal weight to each data set: Consider that using many two-color FRET data sets with this option will reduce the overall contribution of the three-color data set. Note Note that, while information on the center distances, population sizes and distribution widths is shared between two- and three-color data sets, the correlated information is unique to the three-color data set. Thus, the use of the global analysis is expected to increase the robustness of the extracted correlated information. Considerations for fit performance¶ Monte Carlo simulation of histograms¶ The number of bins for the histograms affects this fitting procedure. The Monte Carlo oversampling parameter, found in the Corrections GUI, will reduce noise and thus improve the convergence of the fitting procedure, however the computation time will scale linearly with this parameter. For initial fitting, a value of 1 may be chosen to quickly find the optimal region. Upon refinement, the value should be increased to 10 or higher. The simulated histograms are used to represent the fit result also when the MLE procedure is used. To export a plot of the final histograms, showing the result of the analysis, consider using a high oversampling factor of 100 to obtain theoretical histograms that are not affected by noise. Maximum likelihood estimator¶ The MLE is unaffected by the number of bins used for this histograms or the Monte Carlo oversampling parameter. 
However, the performance of the likelihood calculation critically depends on the background count numbers. Try to keep the background signal low to improve the performance of the algorithm. Performance decreases in the presence of high background count rates because the total likelihood is given by a sum over all likelihoods for the possible combinations of background and fluorescence photon counts. Since three channels are present in three-color FRET, the number of likelihood evaluations is given by the product of the number of background counts considered per channel, which quickly adds a significant number of extra calculations. Consider, for example, a background count rate of 1 kHz per channel and a time bin size of 1 ms, corresponding to an average background count number of 1. By accounting for the probabilities of 0, 1 and 2 background counts, given by a Poissonian distribution, one covers more than 90% of the possible background count occurrences. Still, the trinomial distribution would have to be evaluated \(3*3*3=27\) times more often (and the binomial distribution 9 times more often) compared to when no background signal is present (a short numerical check of these numbers follows below). CUDA support¶ The likelihood estimator is available for use with NVIDIA GPUs. By default, the program will look for available CUDA-compatible graphics cards and fall back to the CPU if none are found. For use in MATLAB, the CUDA code is compiled into a MATLAB MEX file and is available for the Windows and Linux operating systems. If the supplied MEX file does not work, follow the instructions in the help file shipped with the source code to re-compile the MEX file on your computer.
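The numbers in the background-count example above are easy to verify. A minimal Python check (illustration only) of the Poisson coverage and the growth of the number of likelihood terms:

from math import exp, factorial

def poisson_pmf(k, mean):
    return mean ** k * exp(-mean) / factorial(k)

mean_bg = 1.0  # 1 kHz background rate and a 1 ms time bin -> on average 1 background photon
coverage = sum(poisson_pmf(k, mean_bg) for k in range(3))  # k = 0, 1, 2
print("P(0-2 background counts) = %.3f" % coverage)        # ~0.920, i.e. more than 90%
print("trinomial evaluations per bin: %d" % (3 ** 3))      # 3*3*3 = 27 combinations over three channels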
https://pam.readthedocs.io/en/latest/tcPDA.html
2019-07-16T02:17:45
CC-MAIN-2019-30
1563195524475.48
[array(['_images/fit_gui.png', '_images/fit_gui.png'], dtype=object) array(['_images/gui_2d.png', '_images/gui_2d.png'], dtype=object) array(['_images/gui_3d.png', '_images/gui_3d.png'], dtype=object) array(['_images/fit_gui.png', '_images/fit_gui.png'], dtype=object)]
pam.readthedocs.io
pywinauto.tests.comboboxdroppedheight¶ ComboBox dropped height Test What is checked It is ensured that the height of the list displayed when the combobox is dropped down is not less than the height of the reference. How is it checked The value for the dropped rectangle can be retrieved from windows. The height of this rectangle is calculated and compared against the reference height. When is a bug reported If the height of the dropped rectangle for the combobox being checked is less than the height of the reference one then a bug is reported. Bug Extra Information There is no extra information associated with this bug type Is Reference dialog needed The reference dialog is necessary for this test. False positive bug reports No false bugs should be reported. If the font of the localised control has a smaller height than the reference then it is possible that the dropped rectangle could be of a different size. Test Identifier The identifier for this test/bug is “ComboBoxDroppedHeight”
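A hedged sketch of the comparison this test performs (not pywinauto's actual implementation; the Rect and Ctrl containers below are hypothetical stand-ins for the dropped-rectangle information retrieved from Windows):

from collections import namedtuple

Rect = namedtuple("Rect", "top bottom")
Ctrl = namedtuple("Ctrl", "DroppedRect")

def combobox_dropped_height_bug(localised_ctrl, reference_ctrl):
    # report a bug if the localised combobox's dropped list is shorter than the reference's
    loc, ref = localised_ctrl.DroppedRect, reference_ctrl.DroppedRect
    return (loc.bottom - loc.top) < (ref.bottom - ref.top)

# toy usage with made-up rectangles: 80 px vs. 100 px -> bug reported
print(combobox_dropped_height_bug(Ctrl(Rect(0, 80)), Ctrl(Rect(0, 100))))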
https://pywinauto.readthedocs.io/en/latest/code/pywinauto.tests.comboboxdroppedheight.html
2017-04-23T13:43:00
CC-MAIN-2017-17
1492917118707.23
[]
pywinauto.readthedocs.io
Click the corresponding button on the toolbar until the virtual record is displayed in the centre panel. Hover over the virtual record with the mouse. Click on the virtual record and move it forwards and backwards, at any speed, to hear exactly what you would hear if you applied those same movements to real vinyl. Use the left mouse button for normal scratching (non-catch-up mode). Use the right mouse button for catch-up scratching (the record, when released, will catch up to the would-be playing position). Tip: The closer you click to the centre of the record, the greater the amount of momentum applied. Tip: If you want to choose exactly which deck to scratch, use the Left Shift key for Deck A and the Right Shift key for Deck B. You can apply added effects to a scratch, such as muting and momentum release variations, while scratching. For details on how to apply muting click here. For details on momentum release variations click here. Related topics: Zorphing | How to apply muting to a scratch | Adjusting release speed | Scratch Sampler | How to record a scratch sample
http://docs.otslabs.com/OtsAV/help/using_otsav/control_features/scratch_feature/how_to_scratch.htm
2017-04-23T13:53:17
CC-MAIN-2017-17
1492917118707.23
[]
docs.otslabs.com
Settings¶ The following settings are available to customize the Dynamo behavior (an example override snippet follows the list). - DYNAMO_DELETE_COLUMNS: Flag defining whether database columns (including content) are deleted after the field has been deleted in the MetaField definition; default: True. - DYNAMO_DELETE_TABLES: Flag defining whether database tables (including content) are deleted after the model has been deleted in the MetaModel definition; default: True. - DYNAMO_DEFAULT_APP: Default app to be used when a model is generated; default: dynamo. - DYNAMO_DEFAULT_MODULE: Default module to be used when a model is generated; default: dynamo.models. - DYNAMO_FIELD_TYPES: List of available Dynamo field types; default: all Django field types (as of Django 1.3). If you want to make any customized fields available, you need to add them here! - STANDARD_FIELD_TYPES: List of Dynamo standard field types; default: all Django field types (as of Django 1.3) except for relationship ones. - INTEGER_FIELD_TYPES: List of integer field types; controls how the choices tuple is generated; default: all Django integer field types. - STRING_FIELD_TYPES: List of string field types; defines how the "require" field controls the blank and null options; default: all Django string field types. - RELATION_FIELD_TYPES: List of field types that require an entry in related_model; default: all Django relation field types.
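For illustration, these settings can be overridden in a project's settings.py like any other Django setting. The values below are only an example (the app label myapp is a placeholder), not recommended defaults:

# settings.py -- illustrative overrides; the defaults are listed above
DYNAMO_DELETE_COLUMNS = False        # keep database columns (and data) when a MetaField is deleted
DYNAMO_DELETE_TABLES = False         # keep database tables (and data) when a MetaModel is deleted
DYNAMO_DEFAULT_APP = 'myapp'         # app label used for generated models
DYNAMO_DEFAULT_MODULE = 'myapp.models'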
http://django-dynamo.readthedocs.io/en/latest/settings.html
2017-07-20T22:34:14
CC-MAIN-2017-30
1500549423512.93
[]
django-dynamo.readthedocs.io
Introduction to Stored Procedures Using the Telerik Platform Data Connectors you can access stored procedures (functions, reports) kept in your SQL database or Salesforce application. More specifically, you can map an HTTP endpoint to a stored procedure that you can later use to execute the code. This allows you to integrate your client app with external systems and multiple data providers, to generate reports, and to perform other data management tasks. You have two options for working with stored procedures: using the Telerik Platform web portal or using the Backend Services RESTful API. Prerequisites Working with stored procedures relies on the Telerik Platform Data Connectors feature. You need to have a Data Link Server installed (SQL Data Connectors only) as well as a Data Connector configured to connect to your data. See Introduction to Data Connectors for more information. Support for stored procedures is only available in the latest release of the Java-based Data Link Server. Follow the upgrade instructions for Linux or Windows to install it. If you are running the .NET version, consider migrating to the Java version. Features Each HTTP endpoint that you create for a stored procedure allows you to use: - Unlimited amount of input, output and input/output parameters (not applicable to Salesforce) - Any type of input, output and input/output parameters (not applicable to Salesforce) - Role-based permissions to control who can execute stored procedures - JSON-formatted result structure Next Steps See Also - Mapping Stored Procedures - Executing Stored Procedures - Reading Stored Procedure Mappings - Getting the Stored Procedure Endpoints Count - Updating Stored Procedure Mappings - Deleting Stored Procedure Mappings - Security in Stored Procedures - Limitations of Stored Procedures - Stored Procedures Fields and Values Reference
http://docs.telerik.com/platform/backend-services/dotnet/server-side-logic/stored-procedures/introduction
2017-07-20T22:30:22
CC-MAIN-2017-30
1500549423512.93
[]
docs.telerik.com
Using Auditing¶ Substance D keeps an audit log of all meaningful operations performed against content if you have an audit database configured. At the time of this writing, "meaningful" is defined as: - When an ACL is changed. - When a resource is added or removed. - When a resource is modified. The audit log is of a fixed size (currently 1,000 items). When the audit log fills up, the oldest audit event is thrown away. Currently we don't have an archiving mechanism in place to keep around the items popped off the end of the log when it fills up; this is planned. You can extend the auditing system by using the substanced.audit.AuditLog, writing your own events to the log. Configuring the Audit Database¶ In order to enable auditing, you have to add an audit database to your Substance D configuration. This means adding a key to your application's section in the .ini file associated with the app: zodbconn.uri.audit = <some ZODB uri> An example of "some ZODB URI" above might be (for a FileStorage database, if your application doesn't use multiple processes): zodbconn.uri.audit = Or if your application uses multiple processes, use a ZEO URL. The database cannot be your main database. The reason that the audit database must live in a separate ZODB database is that we don't want undo operations to undo the audit log data. Note that if you do not configure an audit database, real-time SDI features such as your folder contents views updating without a manual refresh will not work. Once you've configured the audit database, you need to add an audit log object to the new database. You can do this using pshell: [chrism@thinko sdnet]$ bin/pshell etc/development.ini Python 3.3.2 (default, Jun 1 2013, 04:46:52) [GCC 4.6.3] on linux Type "help" for more information. Environment: app The WSGI application. registry Active Pyramid registry. request Active request object. root Root of the default resource tree. root_factory Default root factory used to create `root`. >>> from substanced.audit import set_auditlog >>> set_auditlog(root) >>> import transaction; transaction.commit() Once you've done this, the "Auditing" tab of the root object in the SDI should no longer indicate that auditing is not configured. Viewing the Audit Log¶ The root object will have a tab named "Auditing". You can view the currently active audit log entries from this page. Accessing this tab requires the sdi.view-auditlog permission. Adding an Audit Log Entry¶ Here's an example of adding an audit log entry of type NailsFiled to the audit log: from substanced.util import get_oid, get_auditlog def myview(context, request): auditlog = get_auditlog(context) auditlog.add('NailsFiled', get_oid(context), type='fingernails') ... Warning If you don't have an audit database defined, the get_auditlog() API will return None. This will add a``NailsFiled`` event with the payload {'type':'fingernails'} to the audit log. The payload will also automatically include a UNIX timestamp as the key time. The first argument is the audit log typename. Audit entries of the same kind should share the same type name. It should be a string. The second argument is the oid of the content object which this event is related to. It may be None indicating that the event is global, and unrelated to any particular piece of content. You can pass any number of keyword arguments to substanced.audit.AuditLog.add(), each will be added to the payload. Each value supplied as a keyword argument must be JSON-serializable. 
If one is not, you will receive an error when you attempt to add the event. Using The auditstream-sse View¶ If you have auditing enabled, you can use a view named auditstream-sse against any resource in your resource tree using JavaScript. It will return an event stream suitable for driving an HTML5 EventSource (an HTML 5 feature, see for more information). The event stream will contain auditing events. This can be used for progressive enhancement of your application's UI. Substance D's SDI uses it for that purpose. For example, when an object's ACL is changed, a user looking at the "Security" tab of that object in the SDI will see the change immediately, rather than upon the next page refresh. Obtain events for the context of the view only: var source = new EventSource( "${request.sdiapi.mgmt_path(context, 'auditstream-sse')}"); Obtain events for a single OID unrelated to the context: var source = new EventSource( "${request.sdiapi.mgmt_path(context, 'auditstream-sse', query={'oid':'12345'})}"); Obtain events for a set of OIDs: var source = new EventSource( "${request.sdiapi.mgmt_path(context, 'auditstream-sse', query={'oid':['12345', '56789']})}"); Obtain all events for all oids: var source = new EventSource( "${request.sdiapi.mgmt_path(context, 'auditstream-sse', query={'all':'1'})}"); The executing user will need to possess the sdi.view-auditstream permission against the context on which the view is invoked. Each event payload will contain detailed information about the audit event as a string which represents a JSON dictionary. See the acl.pt template in the substanced/sdi/views/templates directory of Substance D to see a "real-world" usage of this feature.
http://substanced.readthedocs.io/en/latest/audit.html
2017-07-20T22:48:49
CC-MAIN-2017-30
1500549423512.93
[]
substanced.readthedocs.io
Using AWS CloudWatch in Grafana Grafana ships with built-in support for CloudWatch. You just have to add it as a data source and you will be ready to build dashboards for your CloudWatch metrics. Adding the data source to Grafana - Open the side menu by clicking the Grafana icon in the top header. - In the side menu under the Dashboards link you should find a link named Data Sources. - Click the + Add data source button in the top header. - Select Cloudwatch from the Type dropdown. NOTE: If at any moment you have issues with getting this data source to work and Grafana is giving you unhelpful errors, don't forget to check your log file (try looking in /var/log/grafana/grafana.log). Authentication IAM Roles Currently all access to CloudWatch is done server side by the Grafana backend using the official AWS SDK. If your Grafana server is running on AWS you can use IAM Roles and authentication will be handled automatically. AWS credentials file Create a file at ~/.aws/credentials. That is the HOME path for the user running grafana-server. NOTE: If you think you have the credentials file in the right place but it is still not working, you might try moving your .aws file to '/usr/share/grafana/' and make sure your credentials file has at most 0644 permissions. Example content: [default] aws_access_key_id = asdsadasdasdasd aws_secret_access_key = dasdasdsadasdasdasdsa region = us-west-2 Metric Query Editor You need to specify a namespace, metric, at least one stat, and at least one dimension. Templated Queries The CloudWatch Datasource Plugin provides the following queries you can specify in the Query field in the Variable edit view. They allow you to fill a variable's options list with things like regions, namespaces, metric names and dimension keys/values. For details about the metrics CloudWatch provides, please refer to the CloudWatch documentation. Example templated queries Example dimension queries which will return a list of resources for individual AWS services: ec2_instance_attribute JSON filters The ec2_instance_attribute query takes filters in JSON format. You can specify pre-defined filters of ec2:DescribeInstances. Filters syntax: { filter_name1: [ filter_value1 ], filter_name2: [ filter_value2 ] } Example ec2_instance_attribute() query ec2_instance_attribute(us-east-1, InstanceId, { "tag:Environment": [ "production" ] }) Cost Amazon provides 1 million CloudWatch API requests each month at no additional charge. Past this, it costs $0.01 per 1,000 GetMetricStatistics or ListMetrics requests. For each query Grafana will issue a GetMetricStatistics request, and every time you pick a dimension in the query editor Grafana will issue a ListMetrics request.
http://docs.grafana.org/features/datasources/cloudwatch/
2017-07-20T22:27:26
CC-MAIN-2017-30
1500549423512.93
[array(['http://docs.grafana.org/img/docs/v43/cloudwatch_editor.png', None], dtype=object) ]
docs.grafana.org
Below is a step-by-step tutorial for integrating TranslationExchange into your Unbounce lander pages. If you do not have your project set up yet, please sign up here. Step 1: Log in to your Unbounce dashboard, select the page you want to integrate, and click on Edit page. Step 2: On the Unbounce Editor page, click on "Javascript" in the bottom section of the page to insert a custom JS code snippet. In the popup window, select "Head" as the placement; you can also change the name of the custom Javascript. Step 3: In the script window, enter the following code snippet: Your Unbounce lander page is now complete. Please follow our short tutorial on how to complete translations for your page and how to publish it for your users: Translations Process
http://docs.translationexchange.com/integrations-unbounce/
2017-07-20T22:39:06
CC-MAIN-2017-30
1500549423512.93
[array(['https://trex-docs.s3.amazonaws.com/2016/Jan/dashboard-1452277767690.png', None], dtype=object) array(['https://trex-docs.s3.amazonaws.com/2016/Jan/settings-1452277779917.png', None], dtype=object) array(['https://trex-docs.s3.amazonaws.com/2016/Jan/code-1452277800490.png', None], dtype=object) ]
docs.translationexchange.com
Author: Department of Health Date: 18th June 2013 Size: 2 pages (220kB) Download: Progress sheet - Workforce Feb11 - edited.doc has been archived and is no longer available. What Victoria is doing to increase workforce participation of Aboriginal people and work towards improving access to, and the experience of, the workforce for Aboriginal employees
http://docs2.health.vic.gov.au/docs/doc/Aboriginal-health-progress-sheet---Workforce
2017-07-20T22:28:45
CC-MAIN-2017-30
1500549423512.93
[]
docs2.health.vic.gov.au
Defines the game matching expression to be used to filter players. Namespace: Sfs2X.Requests.Game Assembly: SmartFox2X (in SmartFox2X.dll) Version: 1.7.3.0 (1.7.3) Syntax Property Value Type: MatchExpression Remarks Filtering is applied when: - users try to join a public Game Room as players (their User Variables must match the matching criteria); - the server selects additional users to be invited to join a private game (see the SearchableRooms property). Filtering is not applied to users invited by the creator to join a private game (see the InvitedPlayers property). The default value is null.
http://docs2x.smartfoxserver.com/api-docs/csharp-doc/html/87e5dacb-b665-57d3-b11c-722d20a1708d.htm
2017-07-20T22:35:35
CC-MAIN-2017-30
1500549423512.93
[]
docs2x.smartfoxserver.com
Bundled plugins¶ Elgg comes with a set of plugins. These provide the basic functionality for your social network. The following plugins are also bundled with Elgg, but are not (yet) documented - aalborg_theme - bookmarks - ckeditor - custom_index - developers - embed - externalpages - garbagecollector - htmlawed - invitefriends - legacy_urls - likes - logbrowser - logrotate - notifications - reportedcontent - search - site_notifications - tagcloud - twitter_api - uservalidationbyemail - web_services
http://elgg.readthedocs.io/en/stable/plugins/index.html
2017-07-20T22:32:15
CC-MAIN-2017-30
1500549423512.93
[]
elgg.readthedocs.io
Recording audio You can record audio in a BlackBerry® device application by using the javax.microedition.media.Player class and the associated RecordControl interface. The recording is saved to a file in built-in media storage, media card storage, or to a stream.
http://docs.blackberry.com/en/developers/deliverables/17968/Recording_audio_1228592_11.jsp
2014-12-18T02:39:15
CC-MAIN-2014-52
1418802765584.21
[]
docs.blackberry.com
ExcelApplication Used to automate Microsoft Excel. Supported on the Windows platform only. Notes For documentation of the underlying Excel object model, refer to the VBA help files. Examples The following example transfers the information in a ListBox to Excel and tells Excel to compute and format a column total. The code is in a PushButton's Action event handler and it assumes that a two-column ListBox, ProduceList, is in the window. It contains the following data.

Dim excel As New ExcelApplication ' instantiate Excel via automation
Dim book As ExcelWorkbook
Dim sheet As ExcelWorksheet
excel.Visible = True
book = excel.Workbooks.Add
excel.ActiveSheet.Name = "Expenses Report"
For i As Integer = 0 To ProduceList.ListCount - 1
  ' copy each row of the ListBox into columns A and B
  excel.Range("A" + Str(i + 1), "A" + Str(i + 1)).Value = ProduceList.Cell(i, 0)
  excel.Range("B" + Str(i + 1), "B" + Str(i + 1)).Value = ProduceList.Cell(i, 1)
Next
' label and compute the column total, formatting the amounts as currency
excel.Range("A" + Str(ProduceList.ListCount + 1), "A" + Str(ProduceList.ListCount + 1)).Value = "Total"
excel.Range("B1", "B" + Str(ProduceList.ListCount)).Style = "Currency"
excel.Range("B" + Str(ProduceList.ListCount + 1), "B" + Str(ProduceList.ListCount + 1)).Value = "=SUM(B1:B" + Str(ProduceList.ListCount) + ")"

Exception err As OLEException
  MsgBox err.Message

See Also Office Automation, Office, OLEException, OLEObject, PowerPointApplication, WordApplication classes.
http://docs.xojo.com/index.php/ExcelApplication
2014-12-18T02:18:02
CC-MAIN-2014-52
1418802765584.21
[]
docs.xojo.com
The Qpid broker is supported for Python 2.x environments. The Qpid transport includes full SSL support within Kombu. See the kombu.transport.qpid docs for more info. Contributed by Brian Bouterse and Chris Duryee through support from Red Hat. Dependencies: extra[librabbitmq] now requires librabbitmq 1.6.0 Docstrings for TokenBucket did argument to Producer (Issue #423). Django: Fixed app_label for older Django versions (< 1.7). (Issue #414). Django: Fixed bug in the Django 1.7 compatibility improvements related to autocommit handling. Contributed by Radek Czajka. Django: The Django transport models would not be created on syncdb after app label rename (Issue #406). kombu.async: Min. delay between waiting for timer was always increased to one second. Fixed bug in itermessages where message is received after the with statement exits the block. Fixed by Rumyana Neykova (_). Fixed remaining bug in maybe_declare for auto_delete exchanges. Fix contributed by Roger Hu. MongoDB: Creating a channel now properly evaluates a connection (Issue #363). Fix contributed by Len Buckens. Reverts change in 3.0.17 where maybe_declare caches the declaration of auto_delete queues and exchanges. Fix contributed by Roger Hu. Redis: Fixed race condition when using gevent and the channel is closed. Fix contributed by Andrew Rodionoff.() kombu[librabbitmq] now depends on librabbitmq 1.5.1. Redis: Fixes TypeError problem. Now depends on amqp 1_global flag appropriately:def update_prefetch_count(channel, new_value): channel.basic_qos( 0, new_value, not channel.connection.client.qos_behavior_matches_spec, ) Users of librabbitmq. MongoDB: Now endures a connection failover (Issue #123). Fix contributed by Alex Koshelev. MongoDB: Fixed KeyError when from attempting to close a non-existing connection (Issue #320). 1_patterns transport option:>>>. Now depends on amqp 1. Now depends on.. Now depends on amqp 1.4.1. maybe_declare now raises a “recoverable connection error” if the channel is disconnected instead of a ChannelError so that the operation can be retried. Redis: Consumer.cancel() is now thread safe. This fixes an issue when using gevent/eventlet and a message is handled after the consumer is cancelled. Now depends on amqp 1.4.0. Redis: Basic cancel for fanout based queues now sends a corresponding UNSUBSCRIBE command to the server. This fixes an issue with pidbox where reply messages could be received after the consumer was cancelled,: consume now.close now sets .poller to None. Serializer: loads and dumps now wraps exceptions raised into DecodeError and kombu.exceptions.EncodeError respectively. and OSError are now treated as recoverable connection errors. SQS: Improved performance by reading messages in bulk. Contributed by Matt Wise. Connection Pool: Attempting to acquire from a closed pool will now raise RuntimeError. is now a named tuple. Now depends on amqp 1. SQS: Properly reverted patch that caused delays between messages. Contributed by James Saryerwinnie select: Clear all registerd fds on poller.cloe Eventloop: unregister if EBADF raised. Now depends on amqp version if eventlet/gevent used. Pidbox: Fixes problem where expires header was None, which is a value not supported by the amq protocol. ConsumerMixin: New consumer_context method for starting the consumer without draining events. Now depends on amqp version 1.3. No longer supports Python 2.5 The minimum Python version supported is now Python 2.6.0 for Python2, and Python 3.3 for Python3. 
argument was first supported for consumers in version 2.5.10, and first supported by Queue.get. and amqp.ConnectionError and amqp.ChannelError is used instead. Message object implementation has moved to kombu.message.Message. Serailization: Renamed functions encode/decode to dumps() and loads(). For backward compatibility the old names are still available as aliases. The kombu.log.anon_logger function has been removed. Use get_logger() instead. queue_declare now returns namedtuple with queue, message_count, and consumer_count fields. LamportClock: Can now set lock class kombu.utils.clock: Utilities for ordering events added. SimpleQueue now allows you to override the exchange type used. Contributed by Vince Gonzales. Zookeeper transport updated to support new changes in the kazoo library. Contributed by Mahendra M. to the underlying connection (Issue #214). Transports may now distinguish between recoverable and irrecoverable connection and channel errors. kombu.utils.Finalize has been removed: Use multiprocessing.util.Finalize instead. and message_tablename transport options. Contributed by Ryan Petrello.. Now depends on amqp 1. Now depends on amqp 1.0.12 (Py3 compatibility issues). MongoDB: Removed cause of a “database name in URI is being ignored” warning. Fix by Flavio Percoco Premoli Adds passive option if). Kombu 3 consumers will no longer accept pickle/yaml or msgpack by default, and you will have to explicitly enable untrusted deserializers either globally using kombu.enable_insecure_serializers(), or using the accept argument to Consumer. New utility function to disable/enable untrusted serializers. Consumer: accept can now be used to specify a whitelist of content types to accept. If the accept whitelist is set and a message is received with a content type that is not in the whitelist then a ContentDisallowed exception is raised. Note that this error can be handled by the already existing on_decode_error callback Examples:Consumer(accept=['application/json']) Consumer(accept=['pickle', 'json']) Now depends on amqp 1.0.11 pidbox: Mailbox now supports the accept argument. Redis: More friendly error for when keys are missing. Connection URLs: The parser did not work well when there were multiple ‘+’ tokens. and driver_name attributes. Fix contributed by Mher Movsisyan. Fixed bug with kombu.utils.retry_over_time when no errback specified. Now depends on amqp 1 boto vno being available. Fix contributed by Ephemer.lling. Fixed problem with connection clone and multiple URLs (Issue #182). Fix contributed by Dane Guempel. zeromq: Now compatible with libzmq 3.2.x. Fix contributed by Andrey Antukh. Fixed Python 3 installation problem (Issue #187). can that defines how ensure_connection()/ ensure()/kombu.Connection.autoretry() will reconnect in the event of connection failures. The default reconnection strategy is round-robin, which will simply cycle through the list forever, and there’s also a shuffle strategy. Queue now class used as a thread-safe way to manage changes to a consumer or channels prefetch_count. This was previously an internal class used in Celery now moved to the kombu.common.ignore_errors() ignores connection and channel errors. Must only be used for cleanup actions at shutdown or on connection loss. Support for exchange-to-exchange bindings. The Exchange entity gained bind_to and unbind_from methods. no longer tries to call the non-existent Producer._close. 
librabbitmq: Now implements transport.verify_connection so that connection pools will not give back connections that are no longer working. New and better repr() for Queue and Exchange objects. Python3: Fixed problem with running the unit test suite. Python3: Fixed problem with JSON codec. Redis: Improved fair queue cycle implementation (Issue #166). Contributed by Kevin McCarthy. Redis: Unacked message restore limit is now unlimited by default. Also, the limit can now be configured using the unacked_restore_limit transport transport Adds additional compatibility dependencies: - Python <= 2.6: - importlib - ordereddict - Python <= 2.5 - simplejson. New experimental PICKLE_PROTOCOL environment variable. Adds Transport.supports_ev attribute. Pika: Queue purge was not working properly. Fix contributed by Steeve Morin. Pika backend was no longer working since Kombu 2.3 Fix contributed by Steeve Morin. librabbitmq: Can now handle messages that does not have a content_encoding/content_type set (Issue #149). Fix contributed by C Anthony Risinger. Beanstalk: Now uses localhost by default if the URL does not contain a host.p library:$ is not installed, and librabbitmq will also be updated to support the same features. Connection now supports heartbeat argument. If enabled you must make sure to manually maintain heartbeats by calling the Connection.heartbeat_check has been added for the ability to inspect if a transport supports heartbeats or not. Calling heartbeat_check. Pidbox: Now sets queue expire at 10 seconds for reply queues. EventIO: Now ignores ValueError raised by epoll unregister. MongoDB: Fixes Issue #142 Fix by Flavio Percoco Premoli flag set. New experimental filesystem transport. Contributed by Bobby Beever. Virtual Transports: Now support anonymous queues and exchanges.. kombu.common.eventloop(), kombu.utils.uuid(), and kombu.utils.url.parse_url() can now be imported from the kombu module directly. Pidbox transport callback after_reply_message_received now happens in a finally block. Trying to use the librabbitmq:// transport will now show the right name in the ImportError if librabbitmq is not installed. The librabbitmq falls back to the older pylibrabbitmq name for compatibility reasons and would therefore show No module named pylibrabbitmq instead of librabbitmq. Now depends on anyjson 0.3.3 Json serializer: Now passes buffer objects directly, since this is supported in the latest anyjson version. Fixes blocking epoll call if timeout was set to 0. Fix contributed by John Watson. setup.py now takes requirements from the requirements/ directory. The distribution directory contrib/ is now renamed to extra/ SQS: Default visibility timeout is now 30 minutes. Since we have ack emulation the visibility timeout is only in effect if the consumer is abrubtly terminated. retry argument to Producer.publish nowBuffer can now be bound to connections (which will use the default channel). Connection.manager.get_bindings now variable must be set to the target RabbitMQ virtual host, and the URL must be the AMQP URL to the server. The amqp transport alias will now use librabbitmq if. eventio: Now ignores ENOENT raised by epoll.register, and EEXIST from epoll.unregister. eventio: kqueue now ignores KeyError on unregister. Redis: Message.reject now supports the requeue argument. now if there is more data to read. This is to support eventloops where other things must be handled between draining events. Bound Exchange/Queue’s are now pickleable. 
Consumer/Producer can now be instantiated without a channel, and only later bound using .revive(channel). ProducerPool now takes Producer argument.. The url parser removed more than the first leading slash (Issue #121). SQLAlchemy: Can now specify url using + separator Example. MongoDB: Now supports fanout (broadcast) (Issue #98). Contributed by Scott Lyons. amqplib: Now detects broken connections by using MSG_PEEK. pylibrabbitmq: Now supports basic_get (Issue #97). gevent: Now always uses the select polling_INTERVAL setting). Adds convenience function: kombu.common.eventloop(). No longer supports Python 2.4. Users of Python 2.4 can still use the 1.x series. The 1.x series has entered bugfix-only maintenance mode, and will stay that way as long as there is demand, and a willingness to maintain it. an URL is now part of Kombu core. This change requires no code changes given that the sqlalchemy transport alias is used. kombu.mixins.ConsumerMixin is now supports automatic retry. Producer.publish now supports a declare keyword argument. This is a list of entities (Exchange, or Queue) that should be declared before the message is published. Fixes issue with kombu.compat introduced_queue transport option:>>> x = Connection('redis://', ... transport_options={'deadletter_queue': 'ae.undeliver'}) In addition, angements Adds module kombu.mixins. This module contains a ConsumerMixin class that can be used to easily implement a message consumer thread that consumes messages from one or more kombu.Consumer instances. attribute that can be used to check if the connection instance has established a connection. ConnectionPool.acquire_channel now now contains an implementation of Lamports logical clock. Last release broke after fork for pool reinitialization. Producer/Consumer now has a connection attribute, giving access to the Connection of the instance. Pika: Channels now have access to the underlying Connection instance using channel.connection.client. This was previously required by the Simple classes and is now also required by Consumer and Producer. Connection.default_channel is now closed at object revival. Adds kombu.clocks.LamportClock. compat.entry_to_queue has been moved to new module kombu.common. Broker connection info can be now be specified using URLs The broker hostname can now be given as can now be used as a context manager. Producer.__exit__ now properly calls release instead is now an alias to the amqplib transport. kombu.syn.detect_environment now returns ‘default’, ‘eventlet’, or ‘gevent’ depending on what monkey patches have been installed. Serialization registry has new attribute type_to_name so it is possible to lookup serializater name by content type. Exchange argument to Producer.publish can now be an Exchange instance. compat.Publisher now supports the channel keyword argument. Acking a message on some transports could lead to. transport option. amqplib: Now uses localhost as default hostname instead of raising an error. Redis transport: Now requires redis-py version 2.4.4 or later. New Amazon SQS transport added. Usage:>>> conn = Connection(transport='SQS', ... userid=aws_access_key_id, ... password=aws_secret_access_key) The environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are also supported. librabbitmq transport: Fixes default credentials support. amqplib transport: Now supports login_method for SSL auth. Connection now supports the login_method keyword argument. Default login_method is AMQPLAIN. 
Redis: Consuming from multiple connections now works with Eventlet. Redis: Can now perform channel operations while the channel is in BRPOP/LISTEN mode (Issue #35). Also the async BRPOP now times out after 1 second, this means that cancelling (Connection.connection_errors). amqplib: Now converts SSLError timeout errors to socket.timeout () Ensures cyclic references are destroyed when the connection is. attribute. and password arguments to Connection (Issue #30). Connection: Default autentication credentials are now delegated to the individual transports. This means that the userid and password arguments to Connection is no longer guest/guest by default. The amqplib and pika transports will still have the default credentials. Consumer.__exit__() did not have the correct signature (Issue #32). Channel objects now have a channel_id attribute. mongod (Issue #29). log messages for connection related actions. KOMBU_LOG_DEBUG will also enable KOMBU_LOG_CONNECTION. works properly with Redis. consumer_tag argument to Queue.consume can’t be None (Issue #21). A None value is now automatically converted to empty string. An empty string will make the server generate a unique tag. Connection now supports a transport_options argument. This can be used to pass additional arguments to transports. Pika: drain_events raised socket.timeout even if no timeout set (Issue #8). The delivery_mode aliases (persistent/transient) were not automatically converted to integer, and would cause a crash if using the amqplib transport. Redis: The redis-py InvalidData exception suddenly changed name to DataError. The KOMBU_LOG_DEBUG environment variable can now be set to log all channel method calls. Support for the following environment variables have been added: - KOMBU_LOG_CHANNEL will wrap channels in an object that logs every method call. - KOMBU_LOG_DEBUG both command only available in MongoDB 1.3+, so now raises an exception if connected to an incompatible server version. Virtual Transports: basic.cancel should not try to remove unknown consumer tag. Added Transport.polling_interval Used by django-kombu to increase the time to sleep between SELECTs when there are no messages in the queue. Users of django-kombu should upgrade to django-kombu v0.9.2.
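To illustrate two of the APIs referred to repeatedly in this changelog (the declare keyword argument to Producer.publish and JSON serialization), here is a minimal kombu sketch; the broker URL, exchange and queue names are placeholders:

from kombu import Connection, Exchange, Queue

task_exchange = Exchange('tasks', type='direct')
task_queue = Queue('tasks', exchange=task_exchange, routing_key='tasks')

with Connection('amqp://guest:guest@localhost//') as conn:
    producer = conn.Producer(serializer='json')
    # the declare list makes sure the exchange and queue exist before publishing
    producer.publish({'hello': 'world'},
                     exchange=task_exchange,
                     routing_key='tasks',
                     declare=[task_queue])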
http://kombu.readthedocs.org/en/latest/changelog.html
2014-12-18T02:18:07
CC-MAIN-2014-52
1418802765584.21
[]
kombu.readthedocs.org
... from C# or Java. You can also use with lists:

for i in [300, 100, 23, 1, 55]:
    print(i)

itens = [2, 44, 56, 123, 98, 77, 1000]
for i in itens:
    print(i)

or arrays:

for i in (1, 4, 98, 399, 1000, 34, 199):
    print(i)
http://docs.codehaus.org/pages/diffpages.action?originalId=228177651&pageId=233046733
2014-03-07T13:53:11
CC-MAIN-2014-10
1393999642530
[]
docs.codehaus.org